Dataset fields:
venue: stringclasses (2 values)
paper_content: stringlengths (7.54k - 83.7k)
prompt: stringlengths (161 - 2.5k)
format: stringclasses (5 values)
review: stringlengths (293 - 9.84k)
ICLR
Title: On Representation Learning in the First Layer of Deep CNNs and the Dynamics of Gradient Descent

Abstract
It has previously been reported that the representation learned in the first layer of deep CNNs is very different from the initial representation and highly consistent across initializations and architectures. In this work, we quantify this consistency by considering the set of filters as a filter bank and measuring its energy distribution. We find that the energy distribution is remarkably consistent and we try to determine the source of this consistency. We show that this consistency cannot be explained by the fact that CNNs learn a representation that is useful for recognition: CNNs trained with fixed, random filters in the first layer yield recognition performance comparable to full learning. We then show that similar behavior occurs in simple, linear CNNs and obtain an analytical characterization of the energy profile of linear CNNs trained with gradient descent. Our analysis shows that the energy profile is determined by two factors: (1) the correlation of the average patch and the class label, and (2) an implicit bias induced by the dynamics of gradient descent. Finally, we show that in commonly used image recognition datasets the correlation between the average patch and the class label is very low, and it is the implicit bias that best explains the consistency of representations observed in real-world CNNs.

1 INTRODUCTION
The remarkable success of Convolutional Neural Networks (CNNs) on a wide variety of image recognition tasks is often attributed to the fact that they learn a good representation of images. Support for this view comes from the fact that very different CNNs tend to learn similar representations and that features of CNNs trained for one task are often useful in very different tasks (Yosinski et al., 2014; Doimo et al., 2020; Gidaris et al., 2018). A natural starting point for investigating representation learning in deep CNNs is the very first layer. Studying this representation is somewhat easier than studying more general representation learning since each neuron can be characterized by a single linear filter, which can easily be visualized as an image. Figure 1 shows examples of visualizations of the learned filters: unlike the initial filters, which are random and devoid of structure, the trained filters resemble Gabor filters (Krizhevsky et al., 2012) and are visually similar across different trained networks.

In addition to the qualitative similarity of filters that can be seen in figure 1, there have also been some reports that the filters are quantitatively similar. For example, Li et al. (2015) showed that one can often find a good match for filters learned by one CNN in the set of filters learned by another CNN. In this work we introduce a new measure for quantitatively measuring the consistency of representations in the very first layer of a CNN. Using this measure, we show a remarkably high degree of consistency (correlation coefficient close to 1) between the representations learned by different CNNs, regardless of initialization, architecture and training set.
The fact that these filters are so different from the initialization is interesting in the context of the theory of deep networks, which indicates that under certain conditions they can be trained in a "lazy" regime (Chizat et al., 2019), in which the representations in all intermediate layers hardly differ from their initialization and only the last output layer has weights that differ from initialization. Figure 1 clearly shows that "lazy training" does not occur in the first layer of deep CNNs and that consistent representation learning occurs instead. A natural explanation for the learning of consistent filters in the first layer is that these filters are optimal in some sense for solving the recognition task. Indeed, Gabor filters and similar oriented filters were often used as a representation of images in the days of "handcrafted" features for computer vision (Dalal & Triggs, 2005). Under this explanation, the networks have simply learned that in order to minimize the training loss, the first layer of deep CNNs must have filters that resemble Gabors.

In this paper we present empirical and theoretical results that are inconsistent with this explanation. We show that CNNs with commonly used architectures can be trained with fixed, random filters in the first layer and still yield performance comparable to full learning. We then show that consistent representation learning in the first layer also occurs in simple, linear CNNs, and prove that for these CNNs the dynamics of gradient descent learning, together with the statistics of natural image patches, introduce an implicit bias towards certain filter distributions. We then show that in real-world CNNs trained on commonly used datasets, a highly consistent representation is learned in the first layer when the true labels are replaced with random labels, and therefore that it is the implicit bias that best explains the consistency of representations observed in real-world CNNs.

2 QUANTIFYING CONSISTENCY USING ENERGY PROFILES
The visual similarity of the filters that are learned in the first layer of CNNs (figure 1) is easy to see, but we wish to quantify the similarity of representations and go beyond the qualitative similarity. Recent works (Kornblith et al., 2019; Nguyen et al., 2021) suggest comparing two representations based on the distance between the distributions over patches induced by the two representations. But estimating this distance in high dimensions is nontrivial, and two very different networks might give similar distributions over patches when the input distribution is highly skewed (Ding et al., 2021). In this paper we propose a new method which avoids these shortcomings and is especially relevant for the first layer of a CNN, in which the representation is a linear function of the input patch. Given two patches $x_1, x_2$ and a linear transformation $A$ whose rows are the filters, the squared distance between the transformed patches is $\|Ax_1 - Ax_2\|^2$, or alternatively $(x_1 - x_2)^T A^T A (x_1 - x_2)$. Thus a natural way to understand how distances are transformed when going from $x_1$ to $Ax_1$ is to look at the eigendecomposition of $A^T A$: the $i$th eigenvalue of $A^T A$ measures how much distances in the direction of the $i$th eigenvector are increased or decreased by the transformation.
The eigenvectors of $A^T A$ are simply the principal components of the filters, and if we assume translation invariance of the filters, they will have the same principal components as those of natural image patches: namely, sines and cosines of different spatial frequencies (Aapo Hyvärinen & Hoyer, 2009). Thus the transformation of similarities is mostly driven by the eigenvalues of $A^T A$, and we focus on these to define the consistency of learned filters. Denote by $p_1, \dots, p_k$ the PCA components computed from the training images' patches and by $A$ the weights of the first layer of some model trained on these images (where each row $A_j^T$ of $A$ is a filter). We define the energy w.r.t. each component $p_i$ as:

$$e_i = \|A p_i\| = \sqrt{\sum_j \left(A_j^T p_i\right)^2} \qquad (1)$$

The energy profile of a set of filters is simply the vector $e = (e_1, \dots, e_k)$, and we measure the consistency of two different sets of filters by the correlation coefficient between their energy profiles. Note that this consistency measure is invariant to a rescaling of the filters, to a permutation of the filters, and to any orthogonal transformation of the filters. This way of comparing linear representations is equivalent to considering the set of filters as a filter bank and measuring the sensitivity of the filter bank to different spatial frequencies.

Figures 2a to 2c show that different models trained with gradient descent are remarkably consistent under our proposed measure. Regardless of architecture or the particular dataset that they were trained on, different CNNs have very similar energy profiles that are less sensitive to very high and very low spatial frequencies, with peak sensitivity at intermediate spatial frequencies (qualitatively similar to the sensitivity pattern of the human visual system, which is also most sensitive to intermediate spatial frequencies, as shown in figure 2d). Table 1 quantifies this similarity. The correlation coefficient between energy profiles obtained with different random initializations and architectures is remarkably high (over 0.98 in many cases), while the correlation between the learned profiles and the random initialization is close to zero. An extensive set of experiments on various models and datasets can be found in B.2. Thus our new measure allows us to show quantitatively that deep CNNs trained with gradient descent using standard parameters do not exhibit "lazy" training in the first layer, and that highly consistent representation learning takes place. We now ask: what determines this consistency?
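As a concrete illustration of the energy-profile measure in eq. (1), the following is a minimal sketch (illustrative code, not the authors' implementation; the array names and toy dimensions are assumptions) that computes the profile of a filter bank and the correlation-based consistency score between two banks.

```python
import numpy as np

# Minimal sketch of the energy profile of eq. (1): A holds one first-layer
# filter per row, P holds one patch PCA component per column, both flattened
# to the same patch dimension.
def energy_profile(A, P):
    # e_i = ||A p_i|| = sqrt(sum_j (A_j . p_i)^2)
    return np.linalg.norm(A @ P, axis=0)

def profile_consistency(A1, A2, P):
    # Consistency of two filter banks = correlation of their energy profiles.
    return np.corrcoef(energy_profile(A1, P), energy_profile(A2, P))[0, 1]

# Toy usage: two random 64-filter banks over 27-dimensional (3x3x3) patches.
rng = np.random.default_rng(0)
P = np.linalg.qr(rng.normal(size=(27, 27)))[0]   # stand-in for the patch PCA basis
A1, A2 = rng.normal(size=(64, 27)), rng.normal(size=(64, 27))
print(profile_consistency(A1, A2, P))
```

In practice, P would be the PCA basis estimated from training-image patches and A1, A2 would be the flattened first-layer filters of two trained networks.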
3 IS CONSISTENCY DUE TO CNNS LEARNING SEMANTICALLY MEANINGFUL FEATURES?
A natural explanation for the remarkable consistency of the learned representation in the first layer is that CNNs learn a representation that is good for object recognition. In particular, high spatial frequencies are often noisy while very low spatial frequencies are often influenced by illumination conditions. Thus learning a representation that is mostly sensitive to intermediate spatial frequencies makes sense if the goal is to recognize objects. Similarly, human vision is also mostly sensitive to intermediate spatial frequencies (Owsley, 2003) (see figure 2d), presumably for the same reasons. In order to test this hypothesis we asked whether training modern CNNs while freezing the first layer results in a decrease in performance. If Gabors of intermediate frequencies were indeed optimal for object recognition, we would expect performance to suffer if we froze the first layer to have random filters with equal energy in all frequencies.

Figure 3 shows that there is almost no change in the performance of modern CNNs when the weights in the first layer are frozen. This is true when measuring training accuracy, training loss, validation accuracy or validation loss. Apparently the networks learn to compensate for the random filters in the first layer by learning different weights in the subsequent layers. In other words, if we were to train modern CNNs using some discrete search over weights (e.g. genetic programming) to minimize the training loss, there is no reason to expect that consistent Gabors of intermediate frequencies would be found: equally good training loss can be obtained with random filters in the first layer. To summarize, while quantitatively highly consistent representations are learned in the first layer of commonly used CNNs, this cannot be explained by the networks' minimization of the training loss. This motivates us to analyze representation learning in much simpler CNNs.

4 SIMPLE, LINEAR CNN
In order to understand the consistency that we observe among energy profiles in the first layer of trained CNNs, we turn to analyzing a very simple model: a linear CNN with one hidden layer. Specifically, in this simple model the first layer consists of convolutions with W different filters and the output is given by a global average pool of the filters over all locations. This model is clearly very different from real-world CNNs, but we use it because it allows closed-form analysis and also exhibits some of the same consistency behaviors that we found in real-world CNNs. Specifically, we have found that:
• The energy profiles of simple, linear CNNs are highly consistent across initializations and widths and are very different from the energy profiles of the initial conditions.
• The energy profile of simple, linear CNNs trained with gradient descent is different from the energy profile of the filters that globally optimize the loss.
• The energy profiles of simple, linear CNNs are highly consistent when the true labels are replaced with random labels.

These properties are all displayed in figure 4, where we show results of training a linear model on binary tasks from CIFAR10. In all cases, the energy profile learned with true labels (red) is different from the initial conditions and is mostly sensitive to intermediate frequencies, while the optimal energy profile (shown in blue) is quite different and shows high sensitivity to high spatial frequencies. Training these networks with random labels gives an energy profile (in green) that is similar to that of the true labels. The following theorems show the same behaviors analytically.

Theorem 4.1. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss. The energy profile for the filters that globally minimize the loss is given by $\mu_{opt}^2$ with:
$$\mu_{opt} = (K^T K)^{-1} K^T y$$
where $K$ is the matrix of average patches in each image (in the PCA basis) and $y$ is the vector of labels.

Proof. This simply follows from the model being equivalent to a linear model in the average image patch (lemmas A.1 and A.2), and solving the corresponding linear regression problem.
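As a small sketch of Theorem 4.1's closed form (illustrative; the variable names and toy data are assumptions, not the paper's code): $\mu_{opt}$ is simply the ordinary least-squares solution of regressing the label on the average patch, so the optimal energy profile can be computed directly.

```python
import numpy as np

# Minimal sketch: the globally optimal energy profile mu_opt^2 of Theorem 4.1,
# assuming K holds one average image patch per row (already in the PCA basis)
# and y holds binary labels in {0, 1}.
def optimal_energy_profile(K, y):
    # mu_opt = (K^T K)^{-1} K^T y; lstsq is used for numerical stability.
    mu_opt, *_ = np.linalg.lstsq(K, y, rcond=None)
    return mu_opt ** 2  # squared energy per PCA component

# Toy usage with random data standing in for CIFAR average patches.
rng = np.random.default_rng(0)
K = rng.normal(size=(1000, 27))   # 1000 images, 27 = 3x3x3 patch PCA dimensions
y = rng.integers(0, 2, size=1000).astype(float)
profile = optimal_energy_profile(K, y)
```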
Theorem 4.2. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$. The squared energy profile for the filters at any iteration is given by $\mu_{GD}^2 + \sigma^2$ with:
$$\mu_{GD} = (K^T K + \Lambda)^{-1} K^T y$$
with the same $K$ and $y$ as in theorem 4.1, and $\Lambda$ a spectral regularizer that depends on the learning rate, the number of iterations and $K^T K$.

Proof. The full proof is given in the appendix and uses a technique similar to the one used to derive spectral biases in gradient descent learning of fully connected networks (LeCun et al., 1991). See A.3 for the full proof.

Theorem 4.3. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$. If the label $y$ is uncorrelated with the average patch in each image, then the squared energy profile for the filters at any iteration is given by $\mu_0^2 + \sigma^2$ with:
$$\mu_0 \propto (K^T K + \Lambda)^{-1} K^T \mathbf{1} \qquad (2)$$
where $K$ is the matrix of average patches in each image, $\mathbf{1}$ is a vector of all ones, and $\Lambda$ is a spectral regularizer that depends on the learning rate and the number of iterations.

Proof. This follows from theorem 4.2 and the fact that the quantity $K^T y$ is proportional to the empirical expectation of the product between each average image patch and its label. When the average patch is uncorrelated with the label, this expectation is the product of the expected average patch and the expected label, and is therefore proportional to $K^T \mathbf{1}$, which is the expectation of the average patch.

Corollary 4.4. Consider a linear CNN with arbitrary width that solves a binary classification problem with L2 loss and is trained with gradient descent starting with zero-mean filters with covariance $\sigma^2 I$. If the average patch is the same in the two classes, then training with the true labels and training with random labels will give the same energy profile, given by:
$$\mu_0 \propto (K^T K + \Lambda)^{-1} K^T \mathbf{1}$$

Proof. This follows from $K^T y$ being the sum of the average image patches of all images with $y_i = 1$. If the average patches of both classes are equal, then in expectation over a random (binary) $y$ these sums are equal. A full description of $\mu_0$ can be found in theorem A.5.

These theorems show that in the case of simple, linear CNNs, different networks (initial conditions, widths) will learn the same representation in the first layer, but despite the fact that the loss is convex, the learned representation will not be the representation that globally optimizes the training loss for any finite number of training steps. Rather, the use of gradient descent introduces an implicit bias that favors certain energy profiles, which depend on the number of training steps and the learning rate (with the form of the bias given explicitly by equation 2). This implicit bias causes the learned profiles to be highly consistent yet very different from the optimal one.
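To make Theorem 4.2 concrete, here is a rough simulation sketch (an assumed setup with synthetic data in place of CIFAR average patches, not the paper's code): because of Lemma A.1, the linear CNN can be emulated by gradient descent on the average-patch regression, and the resulting squared energy profiles can be compared across widths and random seeds.

```python
import numpy as np

# Illustrative sketch: the linear CNN of Section 4 reduces to regression of the
# label on the average image patch (Lemma A.1), so its first-layer dynamics can
# be simulated by plain GD on that regression. The squared energy profile is
# then approximately mu_GD^2 + sigma^2.
def gd_profile(K, y, lr=0.1, steps=500, sigma=0.01, width=64, seed=0):
    rng = np.random.default_rng(seed)
    n, d = K.shape
    W = rng.normal(scale=sigma, size=(d, width))   # filters in the PCA basis
    for _ in range(steps):
        resid = K @ W.mean(axis=1) - y             # only the mean filter matters
        # every filter receives the same gradient (Lemma A.2); constants are
        # absorbed into the learning rate
        W -= lr / n * np.outer(K.T @ resid, np.ones(width))
    return (W ** 2).mean(axis=1)                   # empirical squared profile

rng = np.random.default_rng(1)
K = rng.normal(size=(2000, 27)) * np.linspace(2.0, 0.1, 27)  # decaying "PCA" scales
y = rng.integers(0, 2, size=2000).astype(float)
p1 = gd_profile(K, y, width=16, seed=2)
p2 = gd_profile(K, y, width=256, seed=3)
print(np.corrcoef(p1, p2)[0, 1])   # expect a value close to 1: mu_GD is shared
```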
5 IMPLICIT BIAS IN NONLINEAR CNNS
The theoretical analysis of linear CNNs shows that if the true labels are uncorrelated with the average patch in an image, the learned energy profile will converge to a consistent profile that is determined by the dynamics of gradient descent and the statistics of the input patches. We therefore ask: is the consistent energy profile that we find in real-world CNNs also due to the dynamics of SGD? According to our analysis, the implicit bias is strongest when the label is uncorrelated with the average patch.

We measured this correlation in commonly used image classification datasets by computing the correlation coefficient between the average PCA coefficient in an image and its label for different tasks (a binary classification of one class versus the rest). The results are shown in figure 5. For almost all tasks and coefficients, the correlation coefficient is close to zero. Given this small amount of correlation, we would expect a similar energy profile when we train with random labels and with true labels. Maennel et al. (2020) have already shown that when CNNs are trained with random labels, the representations that are learned in the first layers are still useful for other tasks. Here, we ask a more quantitative question: are the energy profiles the same? As shown in figure 6 and table 2, the answer is clearly yes. Correlations above 0.9 are consistently observed even when the true labels are replaced with random labels, and the representations that are learned are still mostly sensitive to intermediate spatial frequencies. This is true both when training on multiclass recognition problems (e.g. CIFAR10, CIFAR100, CelebA) and when training on smaller, 2-class problems for which we have already seen consistency of linear CNNs (fig. 4).

As an additional test of the hypothesis that the energy profiles we see in real-world CNNs are mostly due to the implicit bias, we created new datasets in which we artificially created strong correlations between the label and particular PCA components, and trained VGG on them. The image labels were determined by the average patch's projection onto some PCA component, such that the 5,000 images with the largest magnitude of projection were labeled 1, and so on. As can be seen in fig. 7, once the average patch of each class is changed manually, the correlation between the true-label and random-label profiles decreases from the original 0.9 ± 0.02 to as low as −0.24 ± 0.02, depending on the component changed, and the learned energy profile no longer resembles the human sensitivity function.
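A sketch of the measurement described above (an assumed pipeline with illustrative names; the actual experiments use CIFAR/CelebA images and the patch PCA basis estimated from them): for each image, project its average patch onto a PCA component and correlate that coefficient with the one-vs-rest label.

```python
import numpy as np

# Illustrative sketch: correlation between a binary "one class vs. rest" label
# and the projection of each image's average patch onto a PCA component.
def label_patch_correlation(images, labels, pca_component, patch=3):
    # images: (N, H, W, C) float array; pca_component: flattened patch direction
    N, H, W, C = images.shape
    coeffs = np.empty(N)
    for n in range(N):
        patches = []
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                patches.append(images[n, i:i+patch, j:j+patch].ravel())
        avg_patch = np.mean(patches, axis=0)      # average patch of image n
        coeffs[n] = avg_patch @ pca_component     # its coefficient on the component
    return np.corrcoef(coeffs, labels)[0, 1]

# Toy usage with random data in place of CIFAR10 images and labels.
rng = np.random.default_rng(0)
imgs = rng.random(size=(200, 8, 8, 3))
labs = (rng.integers(0, 10, size=200) == 0).astype(float)  # class 0 vs. rest
comp = rng.normal(size=3 * 3 * 3)
print(label_patch_correlation(imgs, labs, comp))
```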
6 RELATED WORK
The fact that different CNNs tend to learn similar filters in the first layer has been reported previously (e.g. Yosinski et al., 2014; Sarwar et al., 2017; Luan et al., 2017; Alekseev & Bobe, 2019), and follows a line of work on visualizing representations in deep CNNs (Zeiler & Fergus, 2013; Girshick et al., 2013). Our work extends this finding by showing that the overall representation in the first layer is not only qualitatively but also quantitatively similar: different CNNs not only learn to respond to spatial frequencies in their first layer, they also learn the same distribution of frequencies. This consistency is then extended to networks trained with true and random labels. Prior works have also studied the ability of neural networks to overfit random labels (Arpit et al., 2017) and to use representations learned in this regime for transfer learning. Maennel et al. (2020) hypothesised that the ability of networks trained on random labels to transfer to new tasks is due to the fact that, under certain conditions, the first layers of networks trained with random labels have covariances aligned with the input image patches. We expand on this hypothesis by showing that networks trained with true labels, for which there is no alignment guarantee, display the same energy profile as networks trained with random labels. We show this quantitatively for VGG11 and ResNet and theoretically for linear CNNs with a single hidden layer.

The fact that gradient descent training biases towards certain solutions has been known for many years, and has been proven mainly for linear predictors and separable data. Studies on linear networks (Soudry et al., 2018) and linear CNNs (Gunasekar et al., 2018) found that under certain conditions, gradient descent causes the effective linear predictor to be biased towards sparsity (in Fourier space in the case of CNNs), minimal norm, or max margin (Chizat & Bach, 2020). Similar works have also shown that deep nonlinear networks are biased towards learning lower frequencies first (Rahaman et al., 2019). Our work follows this line, focusing on the features learned in the first layer as a result of this bias and of the input image statistics. In its theoretical part, our analysis follows closely the methods of LeCun et al. (1991) and Hacohen & Weinshall (2022), which analyze the dynamics of the weights of a fully connected network during learning with gradient descent. We use a similar technique, but our focus is on the first layer of a CNN. Additionally, we rely on linear networks to gain insight into the behavior of nonlinear networks, following previous works (Hacohen & Weinshall, 2022; Gissin et al., 2019). In the same manner, we support our simplified theoretical claims by quantitatively showing consistency with the theory in real-world CNNs such as VGG.

As a result of the consistency of Gabors being learned in the first layers of CNNs such as ResNet, GoogLeNet and DenseNet (each state of the art at its time), some lines of work attempted to build CNNs with learnable Gabor filters in the first layer (Sarwar et al., 2017; Luan et al., 2017; Alekseev & Bobe, 2019). Nevertheless, these failed to reach the level of performance of the "vanilla" architectures on benchmark tasks. Our work expands on this contradiction by showing that not only do the networks consistently learn Gabor filters, but also a specific distribution of their frequencies. The distribution mentioned above was portrayed in our work using the energy profile of the first layer. This measure follows a long line of work on measuring and visualizing similarities between representations (Csiszárik et al., 2021; Kornblith et al., 2019; Nguyen et al., 2021; Li et al., 2015; Doimo et al., 2020), which varies between comparing the outputs of the transformations induced by the neurons and comparing the neurons themselves. The energy profile is yet another method in this line, while allowing a semantically meaningful visualization of the representation (as PCA components correspond to spatial frequencies) without any need for dimensionality reduction.

7 DISCUSSION
The dramatic success of CNNs in computer vision has led to increased interest in the representations that they learn. In this paper we have focused on the representation that CNNs learn in the very first layer and presented a high degree of quantitative consistency between the energy profiles learned by different networks using different initializations and architectures. We have examined the hypothesis that this consistency is due to networks learning a representation that is useful for object recognition and presented results that are inconsistent with that hypothesis.
By analyzing simple, linear CNNs we have shown that such networks will provably converge to a consistent energy profile under many conditions, but this profile may have nothing to do with the labels and is instead determined by an implicit bias due to the dynamics of gradient descent and the statistics of the input patches.

APPENDIX A LINEAR CONVOLUTIONAL NETWORKS

A.1 PROOFS OF THEOREMS ON LINEAR CONVOLUTIONAL NETWORKS
We begin with a basic claim on the model composed of a hidden convolutional layer followed by a global average pool.

Lemma A.1. A linear CNN of depth 1 (followed by a global average pool) trained with MSE loss is equivalent to linear regression on the average image patch.

Proof. Let $\{X_i\}_{i=1}^N$ with $X_i \in \mathbb{R}^{c\times w\times h}$ be the set of training images, $\{y_i\}_{i=1}^N$ their binary labels ($y_i \in \{0,1\}$), and let the weights of the first layer be $W \in \mathbb{R}^{k\times c\times d\times d}$, i.e. $k$ filters of dimension $d\times d$. Denote the output dimensions of the convolution as $w', h'$. Then:

$$L\left(W; \{(X_i,y_i)\}_{i=1}^N\right) = \frac{1}{N}\sum_{i=1}^N \frac{1}{2}\left\|\left(\frac{1}{k\cdot w'\cdot h'}\sum_{k,w',h'} X_i * W\right) - y_i\right\|^2$$

Summing over the output dimensions is equivalent to summing over dot products of the patches with a single filter; therefore, denoting $K_i \in \mathbb{R}^{w'h'\times cd^2}$ the patch matrix of the $i$'th image and $\tilde{W} \in \mathbb{R}^{cd^2\times k}$ the reshaped weight matrix:

$$L\left(W; \{(X_i,y_i)\}_{i=1}^N\right) = \frac{1}{N}\sum_{i=1}^N \frac{1}{2}\left\|\left(\tfrac{1}{w'h'}\mathbf{1}\right)^T K_i \tilde{W}\left(\tfrac{1}{k}\mathbf{1}\right) - y_i\right\|^2$$

Noting that $\left(\tfrac{1}{w'h'}\mathbf{1}\right)^T K_i$ is the average patch of the $i$'th image and $\tilde{W}\left(\tfrac{1}{k}\mathbf{1}\right)$ is the average filter concludes the proof.

Another lemma we will use further on states that during training of the linear CNN model, only the average filter changes while the filter covariance remains as at initialization. Therefore, proofs for a single filter extend easily to multiple filters.

Lemma A.2. In a linear CNN of depth 1 followed by a global average pool, of any width, trained with GD and MSE loss, the average filter changes during training while the covariance of the filters remains as at initialization.

Proof. Following the notation of lemma A.1, denote by $K \in \mathbb{R}^{N\times cd^2}$ the average image patch matrix, whose $i$'th row is the average patch of the image $X_i$, and let the network consist of filters $w_1,\dots,w_m$, trained with the following loss:

$$L(w_1,\dots,w_m; K, y) = \left\|\frac{1}{m}\sum_{i=1}^m K w_i - y\right\|^2 = \left\|K\left(\frac{1}{m}\sum_{i=1}^m w_i\right) - y\right\|^2 = \|K\bar{w} - y\|^2 \qquad (3\text{-}4)$$

where $\bar{w}$ is the average filter. The dynamics of a single filter in this layer are:

$$\frac{\partial L}{\partial w_j} = \frac{1}{2m}K^T\left(K\left(\frac{1}{m}\sum_{i=1}^m w_i\right) - y\right) = \frac{1}{2m}K^T(K\bar{w} - y) \qquad (5)$$

meaning that the gradients w.r.t. all filters are equal and depend only on the average filter at the current iteration. By recursion, the change in the average filter for learning rate $\eta$ is:

$$\bar{w}^t = \frac{1}{m}\sum_{i=1}^m\left(w_i^{t-1} - \eta\nabla L^{t-1}(w_i)\right) = \frac{1}{m}\sum_{i=1}^m\left(w_i^{t-1} - \eta\nabla L^{t-1}(\bar{w})\right) = \left(\frac{1}{m}\sum_{i=1}^m w_i^{t-1}\right) - \eta\nabla L^{t-1}(\bar{w}) = \bar{w}^{t-1} - \eta\nabla L^{t-1}(\bar{w}) \qquad (6\text{-}7)$$

concluding that all filters receive the same gradient, which depends only on the average filter.
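A small numerical check of Lemma A.1 (illustrative, assuming a stride-1, valid convolution; not the authors' code): a depth-1 linear CNN followed by a global average pool over both filters and positions equals the dot product between the average image patch and the average filter.

```python
import numpy as np

# Numerical check of Lemma A.1 on one random image and a random filter bank.
rng = np.random.default_rng(0)
c, h, w, d, k = 3, 8, 8, 3, 5
X = rng.normal(size=(c, h, w))          # one image
W = rng.normal(size=(k, c, d, d))       # k filters

# "CNN" output: correlate every filter with every patch, then average all.
patches = np.stack([X[:, i:i+d, j:j+d].ravel()
                    for i in range(h - d + 1) for j in range(w - d + 1)])
cnn_out = np.mean(patches @ W.reshape(k, -1).T)        # avg over positions & filters

# Regression view: average patch dotted with the average filter.
avg_patch = patches.mean(axis=0)
avg_filter = W.reshape(k, -1).mean(axis=0)
print(np.allclose(cnn_out, avg_patch @ avg_filter))    # True
```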
Theorem A.3. Let $K$ be a matrix whose $i$'th row is the average image patch of the $i$'th image, let $y$ be a vector with the labels of all images, and let $\bar{K} = KU$ be the same matrix in the PC basis (with $U$ being the PCA eigenvector matrix). The squared energy profile of the weights of a linear CNN, initialized with random weights sampled with zero mean and covariance $\sigma^2 I$ and trained with GD, is equal to:

$$e_i^2 := \frac{1}{M}\sum_{j=1}^M \langle f_j, p_i\rangle^2 = \tilde{w}_i^2 + \sigma^2 \qquad (8)$$

where $\tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^T y$ is the solution to a regularized regression problem in the PC basis, which regresses the average patch in an image against its label, with $\Lambda = \Lambda(K^TK, t, \eta)$ a matrix depending on the eigenvalues of $K^TK$, the GD iteration $t$ and the step size $\eta$.

Proof. It follows from lemma A.2 that during training all filters change by the average filter. We show that a single filter (at iteration $t$ of GD) corresponds to the solution of a ridge regression problem with some matrix $\Lambda = \Lambda(t,\eta,K^TK)$, with $\eta$ the step size. Unrolling the GD updates, and assuming $w$ is initialized at $w=0$:

$$w_t = w_{t-1} - \eta\bar{K}^T\left(\bar{K}w_{t-1} - y\right) = w_{t-1} - \eta\bar{K}^T\bar{K}w_{t-1} + \eta\bar{K}^T y, \qquad w_t = \eta\sum_{j=0}^{t-1}\left(I - \eta\bar{K}^T\bar{K}\right)^j\bar{K}^T y \qquad (9)$$

In this coordinate system, $\bar{K}^T\bar{K}$ is a diagonal matrix with the empirical variances $\hat{\sigma}_i^2$ on the diagonal if the data is centered. If the matrix is not centered, then $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$, where $\hat{\Sigma}$ is a diagonal matrix with the empirical variances on the diagonal and $\hat{\mu}_i$ is the empirical mean estimating $\mathbb{E}_x[\langle x, p_i\rangle]$. This is because in PCA coordinates $\bar{K} = KU$, where $U$ contains the eigenvectors as columns. Since $K$ is not centered, $K = K_0 + \mathbf{1}K_{avg}^T$ with $K_0$ zero-mean and $K_{avg}$ the average row. Therefore

$$\bar{K}^T\bar{K} = \left(K_0 U + \mathbf{1}K_{avg}^T U\right)^T\left(K_0 U + \mathbf{1}K_{avg}^T U\right) = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T,$$

where the term $(K_0 U)^T(\mathbf{1}K_{avg}^T U)$ vanishes since $K_0$ has zero mean. Therefore:

$$w_t = \eta\sum_{j=0}^{t-1}\left(I - \eta\bar{K}^T\bar{K}\right)^j\bar{K}^T y = \eta\sum_{j=0}^{t-1}\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j\bar{K}^T y \qquad (10)$$

Notice that $\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j$ can be decomposed in the following manner using the binomial theorem:

$$\left(I - \eta\left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T\right)\right)^j = \left(I - \eta\hat{\Sigma}\right)^j + \sum_{k=1}^{j}\binom{j}{k}(-\eta)^k\|\hat{\mu}\|^{2(k-1)}\left(I-\eta\hat{\Sigma}\right)^{j-k}\hat{\mu}\hat{\mu}^T \qquad (11)$$

Substituting this back into equation 10:

$$w_t = \eta\sum_{j=0}^{t-1}\left[\left(I-\eta\hat{\Sigma}\right)^j + \sum_{k=1}^{j}\binom{j}{k}(-\eta)^k\|\hat{\mu}\|^{2(k-1)}\left(I-\eta\hat{\Sigma}\right)^{j-k}\hat{\mu}\hat{\mu}^T\right]\bar{K}^T y \qquad (12)$$

Looking at the $i$'th coordinate, with $\lambda_i$ the $i$'th eigenvalue on the diagonal of $\hat{\Sigma}$:

$$w_t(i) = \eta(\bar{K}^T y)(i)\sum_{j=0}^{t-1}(1-\eta\lambda_i)^j + \eta(\hat{\mu}\hat{\mu}^T\bar{K}^T y)(i)\sum_{j=1}^{t-1}\frac{1}{\|\hat{\mu}\|^2}\left(\sum_{k=1}^{j}\binom{j}{k}\left(-\eta\|\hat{\mu}\|^2\right)^k(1-\eta\lambda_i)^{j-k}\right) \qquad (13)$$

After some algebra:

$$w_t(i) = \left[\frac{1-\left(1-\eta(\lambda_i+\|\hat{\mu}\|^2)\right)^t}{\|\hat{\mu}\|^2\left(\lambda_i+\|\hat{\mu}\|^2\right)} - \frac{1-(1-\eta\lambda_i)^t}{\lambda_i}\right](\hat{\mu}\hat{\mu}^T\bar{K}^T y)(i) + \frac{1-(1-\eta\lambda_i)^t}{\lambda_i}(\bar{K}^T y)(i) \qquad (14)$$

In matrix notation, define the diagonal matrix $A$ with $A_{ii} = \frac{1-(1-\eta\lambda_i)^t}{\lambda_i}$ and the diagonal matrix $B$ with $B_{ii} = \frac{1-\left(1-\eta(\lambda_i+\|\hat{\mu}\|^2)\right)^t}{\|\hat{\mu}\|^2\left(\lambda_i+\|\hat{\mu}\|^2\right)}$; we then get:

$$w_t = (B-A)\hat{\mu}\hat{\mu}^T\bar{K}^T y + A\bar{K}^T y \qquad (15)$$

Solving

$$w_t = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^T y = \left(\hat{\Sigma} + \hat{\mu}\hat{\mu}^T + \Lambda\right)^{-1}\bar{K}^T y \qquad (16)$$

for $\Lambda$ gives

$$\Lambda = \left(B + (A-B)\hat{\mu}\hat{\mu}^T\right)^{-1} - \hat{\Sigma} - \hat{\mu}\hat{\mu}^T \qquad (17)$$

which defines the regularization matrix. Since the filter covariance stays constant throughout training by lemma A.2, treating the filters as a random variable initialized with covariance $\sigma^2 I$ (in the PCA basis) means that their empirical second moment equals the sum of the squared mean and the variance. Therefore, denoting the filters in the PCA basis as $\tilde{f}_j$, we get in the $i$'th coordinate:

$$\frac{1}{M}\sum_{j=1}^M\langle f_j, p_i\rangle^2 = \left(\frac{1}{M}\sum_{j=1}^M\langle f_j, p_i\rangle\right)^2 + \frac{1}{M}\sum_{j=1}^M\left(\langle f_j, p_i\rangle - \frac{1}{M}\sum_{l=1}^M\langle f_l, p_i\rangle\right)^2 = \tilde{w}^2(i) + \sigma^2 \qquad (18)$$
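The unrolled form in equation (9) is easy to check numerically. The following is an illustrative sketch (not from the paper): running t explicit GD steps from zero initialization reproduces the truncated sum.

```python
import numpy as np

# Sketch verifying equation (9): t steps of GD on 0.5 * ||K w - y||^2 starting
# from w = 0 equal eta * sum_{j<t} (I - eta K^T K)^j K^T y.
rng = np.random.default_rng(0)
K = rng.normal(size=(200, 10))
y = rng.normal(size=200)
eta, t = 1e-3, 50

w = np.zeros(10)
for _ in range(t):                       # explicit gradient descent
    w = w - eta * K.T @ (K @ w - y)

M = np.eye(10) - eta * K.T @ K           # closed-form finite-step solution
closed = eta * sum(np.linalg.matrix_power(M, j) for j in range(t)) @ (K.T @ y)
print(np.allclose(w, closed))            # True
```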
Theorem A.4 (Effect of Labels). Let $W^t_{True}$ be the weights of the first layer of a linear CNN with a single hidden layer and any width, trained for $t$ steps on a binary classification task with MSE loss and gradient descent, and let $W^t_{Random}$ be the weights of the first layer of the same CNN trained with random labels drawn from a Bernoulli distribution. If the average patch of both classes is identical, and the dataset is balanced between them, then at any training iteration:

$$\mathbb{E}_{y\sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)}\left[W^t_{Random}\right] = W^t_{True} \qquad (19)$$

Proof. Let $K\in\mathbb{R}^{N\times cd^2}$ be the average image patch matrix and $y\in\{0,1\}^N$ the image labels. From lemma A.1, training a linear CNN with one layer followed by a global average pool is equivalent to solving the following linear regression problem for a weight matrix $W\in\mathbb{R}^{cd^2\times 1}$:

$$L(W; K, y) = \frac{1}{N}\cdot\frac{1}{2}\|KW - y\|^2$$

Using gradient descent with learning rate $\eta$, the update rule for $W$ is:

$$W_t = W_{t-1} - \frac{\eta}{N} K^T(KW_{t-1} - y) = \left(I - \frac{\eta}{N}K^TK\right)W_{t-1} + \frac{\eta}{N}K^T y \qquad (20)$$

Notice that in expectation $\mathbb{E}_{y\sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)}[y] = \frac{1}{2}\mathbf{1}$; therefore $\mathbb{E}_{y\sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)}\left[K^Ty\right]$ is (half) the sum of all average image patches. By assumption the average patch is equal between the two classes; denote this average patch by $z$, and since $K$ is the average patch matrix, $z = \frac{2}{N}K^Ty$. Combining these observations:

$$\mathbb{E}_{y\sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)}\left[\frac{\eta}{N}K^Ty\right] = \frac{\eta}{N}\cdot\frac{1}{2}K^T\mathbf{1} = \eta\frac{1}{2N}\cdot N z = \frac{\eta}{N}K^Ty \qquad (21)$$

which concludes the proof. Note that we assumed the CNN has width 1, but using lemma A.2 is enough to generalize to any width.

Theorem A.5 (Solution in PCA Basis). Let $\tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^T y$ be as described in theorem A.3, for $\bar{K}$ the average image patch matrix in the PCA basis and $\Lambda = \Lambda(\bar{K}^T\bar{K}, t, \eta)$. Denote by $\hat{\mu}$ the empirical mean projection onto the PCA basis and by $\hat{\Sigma}$ the uncentered data covariance in the PCA basis. If the labels are drawn randomly from a Bernoulli distribution, then in expectation $\tilde{w}$ can be calculated at any iteration $t$ and for any step size $\eta$ with the following formula:

$$\mathbb{E}_{y\sim \mathrm{Bernoulli}\left(\frac{1}{2}\right)}[\tilde{w}] \propto \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1+\hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}\hat{\mu} \qquad (22)$$

with $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$.

Proof. Following the notation above, denote by $K\in\mathbb{R}^{N\times cd^2}$ the average patch matrix and by $\bar{K}$ the same matrix in PCA coordinates. From theorem A.3, $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$. Solving the linear ridge regression problem in this coordinate system as described in theorem A.3:

$$L(w; \bar{K}, y) = \frac{1}{2}\left\|\bar{K}w - y\right\|^2 + \frac{1}{2}w^T\Lambda w \;\Rightarrow\; \tilde{w} = \left(\bar{K}^T\bar{K} + \Lambda\right)^{-1}\bar{K}^T y \qquad (23)$$

In expectation over a random $y$, as described in theorem A.4, $\mathbb{E}[y] = \frac{1}{2}\mathbf{1}$, and therefore $\mathbb{E}\left[\bar{K}^T y\right] = \frac{N}{2}\hat{\mu}$. As mentioned before, $\bar{K}^T\bar{K} = \hat{\Sigma} + \hat{\mu}\hat{\mu}^T$. Define $\hat{\Sigma}' = \hat{\Sigma} + \Lambda$, a matrix summing the PCA variances and the regularization coefficients. Now, using the Woodbury matrix identity:

$$\left(\hat{\Sigma}' + \hat{\mu}\hat{\mu}^T\right)^{-1} = \hat{\Sigma}'^{-1} - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T\hat{\Sigma}'^{-1}}{1+\hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}} = \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1+\hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}$$

and we get that:

$$\tilde{w} \propto \left(I - \frac{\hat{\Sigma}'^{-1}\hat{\mu}\hat{\mu}^T}{1+\hat{\mu}^T\hat{\Sigma}'^{-1}\hat{\mu}}\right)\hat{\Sigma}'^{-1}\hat{\mu}$$

A.2 CORRELATION FIGURES
As mentioned in section 4, the energy profiles of linear CNNs trained with SGD on true and random labels are much more similar to each other than to their random initialization or to the optimal solution of the corresponding linear regression problem. To complement fig. 4, table 3 displays the mean and standard deviation of the correlation coefficients between these energy profiles. Again, it is clear that there is a high similarity between the energy profiles obtained by training a linear CNN with SGD on true and on random labels.
APPENDIX B EXPANDED RESULTS ON SIMILARITY BETWEEN PRETRAINED MODELS

B.1 ACCURACY OF NETWORKS TRAINED WITH AND WITHOUT A FROZEN FIRST LAYER
As shown in fig. 3, networks of different depths converge to the same minimal loss value when trained with and without a frozen first layer. To complement this, we present the accuracies of these models below (fig. 9), echoing this result.

B.2 COMPARISON OF PRETRAINED CNNS ON CIFAR AND IMAGENET
To expand on the similarity between the first layers of different architectures, we present correlation plots emphasizing the difference between a random initialization and the learned weights of different networks on different datasets. Presented are figures comparing pretrained models on ImageNet (figs. 10 and 11), CIFAR10 (fig. 12), CIFAR100 (fig. 13), and ResNets trained on different datasets (fig. 14). All models were downloaded through the PyTorch Model Hub. Although it might seem odd that the correlation on ImageNet is much higher than on the CIFAR datasets, we believe this is due to resolution: while on the CIFAR datasets the correlation is calculated over an energy profile in $\mathbb{R}^{27}$, the ImageNet example contains profiles in $\mathbb{R}^{147}$, making the calculated correlation smoother and less sensitive to noise. This is demonstrated in fig. 15, which presents the correlation between 27 components of the ImageNet profiles. When looking at this higher resolution, the correlation coefficients between the different models drop and are roughly equal to those between the different models on the CIFAR datasets.

B.3 COMPARISON OF VGG WITH DIFFERENT LOSSES
Although theorem A.4 and all other theorems are proved for a linear network with MSE loss (as is customary in theoretical works on linear networks, e.g. Hacohen & Weinshall (2022); LeCun et al. (1991)), in practice most CNNs for multi-class classification are trained with cross-entropy loss. To test the effect on the energy profile of a real network, we trained VGG with both cross-entropy and MSE, and with true and random labels; the results are displayed in fig. 16 and the correlations in table 4. As can be seen in the figure, even in this case the networks' energy profiles are highly correlated, supporting our hypothesis that the main difference between the formula of theorem A.5 and the pretrained networks is due to the oversimplification of the linear model and not, for example, to the loss used in theory vs. practice.

B.4 FULL FIGURES ON TRUE AND RANDOM LABELS

B.5 EXPERIMENTAL DETAILS
All models, linear and nonlinear, were trained with SGD and a constant learning rate of 0.1. No preprocessing was applied to the data except when stated otherwise. All models were trained for 150 epochs, with minibatches of size 256. All results are averaged over at least 3 different random seeds. When referring to models "trained with random labels", we trained models until they overfit the training data, as both ResNet and VGG can reach 99% train accuracy on CIFAR10 with random labels. All models in the main text were trained by us, except those depicted in fig. 1. All pretrained models in fig. 1 and B.2 were downloaded from the PyTorch Model Hub.
1. What is the focus of the paper regarding the first layer representations learned by CNNs?
2. What are the strengths and weaknesses of the proposed approach in analyzing the filters?
3. Do you have any concerns or questions regarding the research scope and its limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper looks at the first-layer representations learned by CNNs in more detail than previous studies, where it has been observed that, irrespective of the architecture, the first-layer filters all seem qualitatively similar. The authors propose to compare the similarity of first-layer filters by measuring the energy spectra of the filters and comparing (with correlation) the spectra between models. This method of quantifying the filters has important advantages: it is invariant to permutations of filters, to orthogonal transformations of filters, and to rescaling of filters. It is also equivalent to measuring the filter bank's sensitivity to differing spatial frequencies, thus allowing comparison against measurements of human contrast-sensitivity response curves. In linear CNNs the authors find that the shape of the energy profile is determined by the correlation of the average patch and the class label, and by the implicit bias of gradient descent (in terms of e.g. learning rates, iterations, etc.). Interestingly, in real CNNs trained on common datasets the average-patch-to-class correlation is very low, suggesting that the first-layer filters are driven by the implicit biases in the optimisation process.

Strengths And Weaknesses
Strengths
- To the best of my knowledge this is the first paper to look at the first-layer filters of different networks in this way.
- The measures that have been developed and the analysis are both sound.
- This concretely takes us a step closer to understanding what is being learned in the first layers of our networks.

Weaknesses
- There is an over-reliance on comparing the first-layer filters to Gabor wavelets throughout the paper. The original observation that the filter responses (in primates) were "Gabor-like" was made a long time ago. No one has ever claimed that the filters are "Gabor filters" though. This aspect of the writing in the paper detracts a bit from the story, and also highlights the lack of consideration of literature on what the first-layer filters might be (or are). For example, similar filters arise from whitened k-means clustering of patches, Gaussian mixture models, sparse auto-encoding, etc. (see Coates et al., 2011, An Analysis of Single-Layer Networks in Unsupervised Feature Learning), and sparse coding (see e.g. Olshausen & Field, 1996, Emergence of simple-cell receptive field properties by learning a sparse code for natural images).
- Following on from the above point, at the end of the paper I'm left wondering "so what". It's nice that the authors have been able to attribute the emergence of particular spectral characteristics of first-layer filters to the learning process, but what exactly is the gradient descent process actually making the filters converge to in the limit? I'm aware that this goes beyond the scope of the current paper, but it is in my opinion the question that we should be seeking to answer.
- In section 3 you make a point about what happens if you have fixed randomised filters in the first layer... what happens in later layers in such cases? Does the second layer learn filters with a similar spectrum to the case where the first layer is learned? (Understanding this is important for understanding whether the claims around e.g. genetic search are possibly correct.)
- In terms of experiments: what is the effect of different learning algorithms? Does Adam quantitatively give different spectra to mini-batch SGD, for example?
- What is the effect of the number of iterations? (Or: how do the spectra change as the networks are trained for longer? In the case of real-world networks, what happens if you keep training long beyond the point at which accuracy stops increasing, but whilst the loss is still decreasing?)

Clarity, Quality, Novelty And Reproducibility
The writing and presentation could be improved. For example:
- Many of the figures have unreadable axis labels.
- Table 1 (particularly the different columns) could be better described in the text and the caption.
- Be more specific about the exact form (e.g. mini-batch stochastic gradient descent) when talking about GD.
- State much nearer the start of Section 4 that MSE loss is used within the context of a binary classification problem for this part of the analysis.
- Improve math notation - it would be helpful to consistently bold matrices and vectors (incl. vectors of 1s, as is done in parts of the appendices).
- There are some broken references (e.g. about half way down on p.9).
ICLR
Title: Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units

Abstract
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron's input. This stochastic regularizer is comparable to nonlinearities aided by dropout, but it removes the need for a traditional nonlinearity. The connection between the GELU and the stochastic regularizer suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks.

1 INTRODUCTION
Early artificial neurons utilized binary threshold units (Hopfield, 1982; McCulloch & Pitts, 1943). These hard binary decisions are smoothed with sigmoid activations, enabling a neuron to have a "firing rate" interpretation and to train with backpropagation. But as networks became deeper, training with sigmoid activations proved less effective than the non-smooth, less-probabilistic ReLU (Nair & Hinton, 2010), which makes hard gating decisions based upon an input's sign. Despite having less of a statistical motivation, the ReLU remains a competitive engineering solution which often enables faster and better convergence than sigmoids. Building on the successes of ReLUs, a recent modification called the ELU (Clevert et al., 2016) allows a ReLU-like nonlinearity to output negative values, which sometimes increases training speed. In all, the activation choice has remained a necessary architecture decision for neural networks lest the network be a deep linear classifier.

Deep nonlinear classifiers can fit their data so well that network designers are often faced with the choice of including a stochastic regularizer like adding noise to hidden layers or applying dropout (Srivastava et al., 2014), and this choice remains separate from the activation function. Some stochastic regularizers can make the network behave like an ensemble of networks, a pseudoensemble (Bachman et al., 2014), and can lead to marked accuracy increases. For example, the stochastic regularizer dropout creates a pseudoensemble by randomly altering some activation decisions through zero multiplication. Nonlinearities and dropout thus determine a neuron's output together, yet the two innovations have remained distinct. More, neither subsumed the other, because popular stochastic regularizers act irrespectively of the input and nonlinearities are aided by such regularizers. In this work, we bridge the gap between stochastic regularizers and nonlinearities. To do this, we consider an adaptive stochastic regularizer that allows for a more probabilistic view of a neuron's output.
With this stochastic regularizer we can train networks without any nonlinearity while matching the performance of activations combined with dropout. This is unlike other stochastic regularizers without any nonlinearity, as they merely yield a regularized linear classifier. We also take the expected transformation of this stochastic regularizer to obtain a novel nonlinearity which matches or exceeds models with ReLUs or ELUs across tasks from computer vision, natural language processing, and automatic speech recognition.

*Work done while the author was at TTIC. Code available at github.com/hendrycks/GELUs

2 GELUS AND THE STOCHASTIC 0-I MAP
We create our stochastic regularizer and nonlinearity by combining intuitions from dropout, zoneout, and ReLUs. First note that a ReLU and dropout both yield a neuron's output, with the ReLU deterministically multiplying the input by zero or one and dropout stochastically multiplying by zero. Also, a new RNN regularizer called zoneout stochastically multiplies inputs by one (Krueger et al., 2016). We merge this functionality by multiplying the input by zero or one, but the values of this zero-one mask are stochastically determined while also being dependent upon the input. Specifically, we multiply the neuron input $x$ by $m \sim \mathrm{Bernoulli}(\Phi(x))$, where $\Phi(x) = P(X \le x)$, $X \sim \mathcal{N}(0,1)$, is the cumulative distribution function of the standard normal distribution. The distribution $\mathrm{Bernoulli}(\Phi(x))$ appears in Gaussian processes for classification (Houlsby et al., 2011), and the neuron's output is $xm$, giving $x$ or $0$. Thus inputs have a higher probability of being "dropped" as $x$ decreases, so the transformation applied to $x$ is stochastic yet depends upon the input. Masking inputs in this fashion retains nondeterminism but maintains dependency upon the input value. A stochastically chosen mask amounts to a stochastic zero or identity transformation of the input, leading us to call the regularizer the SOI map. The SOI map is much like Adaptive Dropout (Ba & Frey, 2013), but we refer to the regularizer as the SOI map because adaptive dropout is used in tandem with nonlinearities. In section 4, we show that simply masking linear transformations with the SOI map exceeds the power of linear classifiers and competes with nonlinearities aided by dropout, showing that nonlinearities can be replaced with stochastic regularizers.

(Figure: the GELU, ReLU and ELU nonlinearities.)

The SOI map can be made deterministic should we desire a deterministic decision from a neural network, and this gives rise to our new nonlinearity. The nonlinearity is the expected transformation of the SOI map on an input $x$, which is $\Phi(x)\times Ix + (1-\Phi(x))\times 0x = x\Phi(x)$. Loosely, this expression states that we scale $x$ by how much greater it is than other inputs. We now make an obvious extension. Since the cumulative distribution function of a Gaussian is computed with the error function, we define the Gaussian Error Linear Unit (GELU) as $\mathrm{GELU}(x) = xP(X \le x)$, where $X \sim \mathcal{N}(\mu, \sigma^2)$. Both $\mu$ and $\sigma$ are possibly parameters to optimize, but throughout this work we simply let $\mu = 0$ and $\sigma = 1$. Consequently, we do not introduce any new hyperparameters in the following experiments. In the next section, we show that the GELU exceeds the performance of ReLUs and ELUs across numerous tasks.
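A minimal sketch of the definitions above (illustrative, not the authors' released implementation): the GELU as the expectation $x\Phi(x)$, and the SOI map as the corresponding stochastic mask.

```python
import numpy as np
from scipy.stats import norm

def gelu(x, mu=0.0, sigma=1.0):
    # GELU(x) = x * P(X <= x), X ~ N(mu, sigma^2); the paper uses mu=0, sigma=1.
    return x * norm.cdf(x, loc=mu, scale=sigma)

def soi_map(x, rng):
    # Stochastic zero-or-identity map: multiply the input by a mask drawn from
    # Bernoulli(Phi(x)); its expectation is the GELU, so at test time one would
    # replace it by gelu(x), mirroring how dropout is disabled at test time.
    mask = rng.random(x.shape) < norm.cdf(x)
    return x * mask

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 7)
print(gelu(x))
print(soi_map(x, rng))
```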
3 GELU EXPERIMENTS
We evaluate the GELU, ELU, and ReLU on MNIST classification (grayscale images with 10 classes, 60k training examples and 10k test examples), MNIST autoencoding, Tweet part-of-speech tagging (1000 training, 327 validation, and 500 testing tweets), TIMIT frame recognition (3696 training, 1152 validation, and 192 test audio sentences), and CIFAR-10/100 classification (color images with 10/100 classes, 50k training and 10k test examples). We do not evaluate nonlinearities like the LReLU because of their similarity to ReLUs (see Maas et al. (2013) for a description of LReLUs).

3.1 MNIST CLASSIFICATION
Let us verify that this nonlinearity competes with previous activation functions by replicating an experiment from Clevert et al. (2016). To this end, we train a fully connected neural network with GELUs ($\mu = 0$, $\sigma = 1$), ReLUs, and ELUs ($\alpha = 1$). Each 8-layer, 128-neuron wide neural network is trained for 50 epochs with a batch size of 128. This experiment differs from those of Clevert et al. in that we use the Adam optimizer (Kingma & Ba, 2015) rather than stochastic gradient descent without momentum, and we also show how well the nonlinearities cope with dropout. Weights are initialized with unit-norm rows, as this has a positive impact on each nonlinearity's performance (Hendrycks & Gimpel, 2016; Mishkin & Matas, 2016; Saxe et al., 2014). Note that we tune over the learning rates $\{10^{-3}, 10^{-4}, 10^{-5}\}$ with 5k validation examples from the training set and take the median results over five runs. Using these classifiers, we demonstrate in Figure 3 that classifiers using a GELU can be more robust to noised inputs. Figure 2 shows that the GELU tends to have the lowest median training log loss with and without dropout. Consequently, although the GELU is inspired by a different stochastic process, it comports well with dropout.

3.2 MNIST AUTOENCODER
We now consider a self-supervised setting and train a deep autoencoder on MNIST (Desjardins et al., 2015). To accomplish this, we use a network with layers of width 1000, 500, 250, 30, 250, 500, 1000, in that order. We again use the Adam optimizer and a batch size of 64. Our loss is the mean squared loss. We vary the learning rate from $10^{-3}$ to $10^{-5}$. We also tried a learning rate of 0.01, but ELUs diverged, and GELUs and ReLUs converged poorly. The results in Figure 4 indicate that the GELU accommodates different learning rates and that the GELU either ties or significantly outperforms the other nonlinearities. To save space, we show the learning curve for the $10^{-5}$ learning rate in appendix A.

3.3 TWITTER POS TAGGING
Many datasets in natural language processing are relatively small, so it is important that an activation generalize well from few examples. To meet this challenge we compare the nonlinearities on POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013) which contain 25 tags. The tweet tagger is simply a two-layer network with pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013). The input is the concatenation of the vector of the word to be tagged and those of its left and right neighboring words. Each layer has 256 neurons, a dropout keep probability of 0.8, and the network is optimized with Adam while tuning over the learning rates $\{10^{-3}, 10^{-4}, 10^{-5}\}$. We train each network five times per learning rate, and the median test set error is 12.57% for the GELU, 12.67% for the ReLU, and 12.91% for the ELU.
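To make the experimental setup of Section 3.1 concrete, here is a rough sketch (layer sizes and training details are simplified assumptions, not the authors' exact code) of a fully connected classifier with a swappable activation, as used to compare the GELU, ReLU, and ELU.

```python
import torch
import torch.nn as nn

# Rough sketch of the 8-layer, 128-unit fully connected MNIST classifier of
# Section 3.1 with a swappable activation (depth here counts hidden layers).
def mlp(activation: nn.Module, depth: int = 8, width: int = 128) -> nn.Sequential:
    layers, in_dim = [], 28 * 28
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), activation]
        in_dim = width
    layers.append(nn.Linear(in_dim, 10))
    return nn.Sequential(nn.Flatten(), *layers)

model_gelu = mlp(nn.GELU())
model_relu = mlp(nn.ReLU())
model_elu = mlp(nn.ELU(alpha=1.0))
# Each would then be trained with Adam over learning rates {1e-3, 1e-4, 1e-5}.
```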
3.4 TIMIT FRAME CLASSIFICATION
Our next challenge is phone recognition with the TIMIT dataset, which has recordings of 680 speakers in a noiseless environment. The system is a five-layer, 2048-neuron wide classifier as in (Mohamed et al., 2012) with 39 output phone labels and a dropout rate of 0.5 as in (Srivastava, 2013). This network takes as input 11 frames and must predict the phone of the center frame using 26 MFCC, energy, and derivative features per frame. We tune over the learning rates $\{10^{-3}, 10^{-4}, 10^{-5}\}$ and optimize with Adam. After five runs per setting, we obtain the median curves in Figure 5, and the median test error chosen at the lowest validation error is 29.3% for the GELU, 29.5% for the ReLU, and 29.6% for the ELU.

3.5 CIFAR-10/100 CLASSIFICATION
Next, we demonstrate that for more intricate architectures the GELU nonlinearity again outperforms other nonlinearities. We evaluate this activation function on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) with shallow and deep convolutional neural networks, respectively. Our shallower convolutional neural network is a 9-layer network with the architecture and training procedure from Salimans & Kingma (2016), while using batch normalization to speed up training. The architecture is described in appendix B and recently obtained state of the art on CIFAR-10 without data augmentation. No data augmentation was used to train this network. We tune over the initial learning rates $\{10^{-3}, 10^{-4}, 10^{-5}\}$ with 5k validation examples and then train on the whole training set again using the learning rate chosen by cross-validation. The network is optimized with Adam for 200 epochs, and at the 100th epoch the learning rate linearly decays to zero. Results are shown in Figure 6, and each curve is a median of three runs. Ultimately, the GELU obtains a median error rate of 7.89%, the ReLU obtains 8.16%, and the ELU obtains 8.41%.

Next we consider a wide residual network on CIFAR-100 with 40 layers and a widening factor of 4 (Zagoruyko & Komodakis, 2016). We train for 50 epochs with the learning rate schedule described in (Loshchilov & Hutter, 2016) ($T_0 = 50$, $\eta = 0.1$) with Nesterov momentum, and with a dropout keep probability of 0.7. Some have noted that ELUs have an exploding gradient with residual networks (Shah et al., 2016), and this is alleviated with batch normalization at the end of a residual block. Consequently, we use a Conv-Activation-Conv-Activation-BatchNorm block architecture to be charitable to ELUs. Over three runs we obtain the median convergence curves in Figure 7. The GELU achieves a median error of 20.74%, the ReLU obtains 21.77% (without our changes described above, the original 40-4 WideResNet with a ReLU obtains 22.89% (Zagoruyko & Komodakis, 2016)), and the ELU obtains 22.98%.

4 SOI MAP EXPERIMENTS
Now let us consider how well the SOI map performs rather than the GELU, its expectation. We evaluate the SOI map, or an Adaptive Dropout variant without any nonlinearity, to show that neural networks do not require traditional nonlinearities. We can expect the SOI map to perform differently from a nonlinearity plus dropout. For one, stochastic regularizers applied to composed linear maps without a deterministic nonlinearity tend to yield a regularized deep linear transformation. In the case of a single linear transformation, dropout and the SOI map behave differently.
To see this, recall that Wang & Manning (2013) showed that for least squares regression, if a prediction is $\hat{Y} = \sum_i w_i x_i m_i$, where $x$ is a zero-centered input, $w$ is a zero-centered learned weight, and $m$ is a dropout mask of zeros and ones, then $\mathrm{Var}(\hat{Y}) = \sum_i w_i^2 x_i^2 p(1-p)$ when using dropout. Meanwhile, the SOI map has the prediction variance $\sum_i w_i^2 x_i^2 \Phi(x_i)(1-\Phi(x_i))$. Thus as $x_i$ increases, the variance of the prediction increases for dropout, but for the SOI map $x_i$'s increase is dampened by the $\Phi(x_i)(1-\Phi(x_i))$ term. Then as the inputs and score get larger, a prediction with the SOI map can have less volatility rather than more. In the experiments that follow, we confirm that the SOI map and dropout differ, because the SOI map yields accuracies comparable to nonlinearities plus dropout despite the absence of any traditional nonlinearity.

We begin our experimentation by reconsidering the 8-layer MNIST classifier. We use the same training procedure except that we tune the dropout keep probability over $\{1, 0.75, 0.5\}$ when using a nonlinearity. There is no dropout while using the SOI map, and for the SOI map we tune no additional hyperparameter. When the SOI map trains we simply mask the neurons, but during testing we use the expected transformation of the SOI map (the GELU) to make the prediction deterministic, mirroring how dropout is turned off during testing. A ReLU with dropout obtains 2.10% error, and the SOI map achieves 2.00% error. Next, we reconsider the Twitter POS tagger. We again perform the same experimentation but also tune over the dropout keep probabilities $\{1, 0.75, 0.5\}$ when using a nonlinearity. In this experiment, the ReLU with dropout obtains 11.9% error, and the SOI map obtains 12.5% error. It is worth mentioning that the best dropout setting for the ReLU was when the dropout keep probability was 1, i.e., when dropout was off, so the regularization provided by the SOI map was superfluous. Finally, we turn to the earlier TIMIT experiment. Like the previous two experiments, we also tune over the dropout keep probabilities $\{1, 0.75, 0.5\}$ when using a nonlinearity. Under this setup, the ReLU ties with the SOI map as both obtain 29.46% error, though the SOI map obtained its best validation loss in the 7th epoch while the ReLU with dropout did in the 27th epoch.

In summary, the SOI map can be comparable to a nonlinearity with dropout and does not simply yield a regularized linear transformation. This is surprising because the SOI map is not like a traditional nonlinearity while it has a nonlinearity's power. The upshot may be that traditional, deterministic, differentiable functions applied to a neuron's input are less essential to the success of neural networks, since a stochastic regularizer can achieve comparable performance.

5 DISCUSSION
Across several experiments, the GELU outperformed previous nonlinearities, but it bears semblance to the ReLU and ELU in other respects. For example, as $\sigma \to 0$ and if $\mu = 0$, the GELU becomes a ReLU. More, the ReLU and GELU are equal asymptotically. In fact, the GELU can be viewed as a natural way to smooth a ReLU. To see this, recall that $\mathrm{ReLU}(x) = \max(x, 0) = x\mathbf{1}(x > 0)$ (where $\mathbf{1}$ is the indicator function), while the GELU is $x\Phi(x)$ if $\mu = 0, \sigma = 1$. Then the CDF is a smooth approximation to the binary function the ReLU uses, much as the sigmoid smoothed binary threshold activations. Unlike the ReLU, the GELU and ELU can be both negative and positive.
5 DISCUSSION Across several experiments, the GELU outperformed previous nonlinearities, but it bears resemblance to the ReLU and ELU in other respects. For example, as σ → 0 and if µ = 0, the GELU becomes a ReLU. Moreover, the ReLU and GELU are asymptotically equal. In fact, the GELU can be viewed as a natural way to smooth a ReLU. To see this, recall that ReLU(x) = max(x, 0) = x1(x > 0) (where 1 is the indicator function), while the GELU is xΦ(x) if µ = 0, σ = 1. The CDF is then a smooth approximation to the binary function the ReLU uses, just as the sigmoid smoothed binary threshold activations. Unlike the ReLU, the GELU and ELU can be both negative and positive. In fact, if we used the cumulative distribution function of the standard Cauchy distribution, then the ELU (when α = 1/π) is asymptotically equal to xP(C ≤ x), C ∼ Cauchy(0, 1), for negative values, and for positive values it equals xP(C ≤ x) if we shift the line down by 1/π. These are some fundamental relations to previous nonlinearities. However, the GELU has several notable differences. This non-convex, non-monotonic function is not linear in the positive domain and exhibits curvature at all points. Meanwhile ReLUs and ELUs, which are convex and monotonic activations, are linear in the positive domain and thereby can lack curvature. As such, increased curvature and non-monotonicity may allow GELUs to more easily approximate complicated functions than can ReLUs or ELUs. Also, since ReLU(x) = x1(x > 0) and GELU(x) = xΦ(x) if µ = 0, σ = 1, we can see that the ReLU gates the input depending upon its sign, while the GELU weights its input depending upon how much greater it is than other inputs. In addition, and significantly, the GELU has a probabilistic interpretation given that it is the expected SOI map, which combines ideas from dropout and zoneout. The SOI map also relates to a previous stochastic regularizer called Adaptive Dropout (Ba & Frey, 2013). The crucial difference between typical adaptive dropout and the SOI map is that adaptive dropout multiplies the nonlinearity's output by a mask, but the SOI map multiplies the neuron input by a mask. Consequently, the SOI map trains without any nonlinearity, while adaptive dropout modifies the output of a nonlinearity. In this way, standard implementations of adaptive dropout do not call into question the necessity of traditional nonlinearities, since they augment a nonlinearity's decision rather than eschew the nonlinearity entirely. We also have two practical tips for using the GELU. First, we advise using an optimizer with momentum when training with a GELU, as is standard for deep neural networks. Second, using a close approximation to the cumulative distribution function of a Gaussian distribution is important. For example, the sigmoid function σ(x) = 1/(1 + e^−x) is an approximation of the cumulative distribution function of a normal distribution, but it is not a close enough approximation (Ba & Frey, 2013). Indeed, we found that a Sigmoid Linear Unit (SiLU) xσ(x) performs worse than GELUs but usually better than ReLUs and ELUs. The maximum difference between σ(x) and Φ(x) is approximately 0.1, but the difference between the two is visible in Figure 8. Instead of using xσ(x) to approximate Φ(x), we used 0.5x(1 + tanh[√(2/π)(x + 0.044715x^3)]) (Choudhury, 2014). This is a sufficiently fast, easy-to-implement approximation which we used in every experiment in this paper.
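The quality of these approximations can be checked numerically. The short sketch below (an illustration, not the experimental code) compares the sigmoid-based SiLU xσ(x) and the tanh approximation above against the exact xΦ(x).

```python
import numpy as np
from scipy.special import erf

def phi(x):
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gelu_tanh(x):
    # fast tanh approximation used in the experiments above
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

xs = np.linspace(-6.0, 6.0, 10001)
print(np.max(np.abs(sigmoid(xs) - phi(xs))))            # roughly 0.1
print(np.max(np.abs(xs * sigmoid(xs) - xs * phi(xs))))  # SiLU vs exact GELU
print(np.max(np.abs(gelu_tanh(xs) - xs * phi(xs))))     # tanh approximation vs exact GELU
```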
6 CONCLUSION We observed that the GELU outperforms previous nonlinearities across tasks from computer vision, natural language processing, and automatic speech recognition. Moreover, we showed that a stochastic regularizer can compete with a nonlinearity aided by dropout, indicating that traditional nonlinearities may not be crucial to neural network architectures. This stochastic regularizer makes probabilistic decisions and the GELU is the expectation of the decision. We therefore probabilistically related the GELU to the SOI map, thereby bridging a nonlinearity to a stochastic regularizer. Now having seen that a stochastic regularizer can replace a traditional nonlinearity, we hope that future work explores the design space of other stochastic regularizers as powerful as a traditional activation aided by dropout. Furthermore, there may be fruitful modifications to the GELU in different contexts. For example, for sparser inputs, a nonlinearity of the form xP(L ≤ x), L ∼ Laplace(0, 1), may be a more effective activation. For the numerous datasets evaluated in this paper, the GELU exceeded the accuracy of the ELU and ReLU consistently, making it a viable alternative to previous nonlinearities.
ACKNOWLEDGMENT We would like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research.
A ADDITIONAL MNIST AUTOENCODER LEARNING CURVE
B NEURAL NETWORK ARCHITECTURE FOR CIFAR-10 EXPERIMENTS
1. What is the novelty and significance of the proposed regularizer in deep learning?
2. How does the reviewer assess the performance of the proposed method compared to other related works?
3. Is there an interesting or new insight provided by the paper regarding nonlinearities and stochastic regularizers?
Review
Review The proposed regularizer seems to be a particular combination of existing methods. Though the implied connection between nonlinearities and stochastic regularizers is intriguing, in my opinion the empirical performance does not exceed the performance achieved by similar methods by a large enough margin to arrive at a meaningful conclusion.
ICLR
1. What is the main contribution of the paper, and how does it differ from other related works?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to train neural networks without traditional nonlinearity?
3. How does the reviewer assess the experimental results presented in the paper, especially when compared to previous works?
4. What are some potential limitations or drawbacks of the proposed approach, and how might they be addressed in future research?
5. How clear and effective are the visualizations used in the paper, and are there any suggestions for improvement?
Review
Review The proposed method essentially trains neural networks without a traditional nonlinearity, using multiplicative gating by the CDF of a Gaussian evaluated at the preactivation; this is motivated as a relaxation of a probit-Bernoulli stochastic gate. Experiments are performed with both. The work is somewhat novel and interesting. Little is said about why this is preferable to other similar parameterizations of the same idea (sigmoidal? softsign? etc.). It would be stronger with more empirical interrogation of why this works and exploration of the nearby conceptual space. The CIFAR results look okay by today's standards, but the MNIST results are quite bad; neural nets were doing better than 1.5% a decade ago, and the SOI map results (and the ReLU baseline) are above 2%. (TIMIT results on frame classification also aren't that interesting without evaluating word error rate within a speech pipeline, but this is a minor point.) The idea put forth that SOI map networks without additional nonlinearities are comparable to linear functions is rather misleading, as they are, in expectation, nonlinear functions of their input. Varying an input example by multiplying or adding a constant will not be linearly reflected in the expected output of the network. In this sense they are more nonlinear than ReLU networks, which are at least locally linear. The plots are also very difficult to read in grayscale.
ICLR
1. How does the proposed approach differ from existing methods like adaptive dropout?
2. What are the strengths and weaknesses of the proposed approach compared to other methods in terms of functionality and experimental validation?
3. Is the proposed approach a novel solution or an improvement over existing methods?
4. How does the reviewer assess the significance and impact of the paper's contributions?
5. Are there any concerns or suggestions for improving the paper's content or research methodology?
Review
Review Approaches like adaptive dropout also make the binary mask a function of the input to a neuron, very much like the proposed approach. It is not clear, even from the new draft, how the proposed approach differs from adaptive dropout in terms of functionality. The experimental validation is also not extensive, since comparison to SOTA is not included.
ICLR
Title Scenario-based Question Answering with Interacting Contextual Properties Abstract In the scenario-based Question Answering (QA) task, models are asked to find answers that are appropriate to the user scenarios associated with the question and identify information that is missing from the scenarios but is necessary for the answers to hold. Scenarios commonly include multiple properties of users, such as age, employment status, and income level for the question “How much can I claim from this benefit”. The properties relevant to a potential answer are given in a document, which will state conditions necessary for the answer to hold. Documents may also specify how conditions interact with each other, e.g. with text like “one of the conditions below must apply”. Although understanding the relationship between conditions is crucial for solving this challenging QA task, limited work has been done so far in modeling this. In this paper, we propose the T-Reasoner model, which solves this problem with three jointly learned modules: an entailment module which checks whether a condition has been satisfied by the scenario, a decoding module which locates eligible answers from documents, and a reasoning module which infers the relationship between conditions and performs a reasoning step to determine the logically consistent answers and identify missing conditions. T-Reasoner outperforms strong baselines on a synthetic scenario-based QA dataset and achieves a new state-of-the-art on two scenario-based QA benchmarks, outperforming the prior best models by 3-10 points. (Codes and data are available at https://github.com/haitian-sun/T-Reasoner.) 1 INTRODUCTION Many questions can only be answered correctly after some context for the question is supplied or inferred: e.g., “When is the next LA Lakers home game” needs temporal context, and “Where is the closest pizza place” needs geographical context. Prior work on contextual QA (Zhang & Choi, 2021; Dhingra et al., 2021; Kasai et al., 2022; Chen et al., 2021) has focused on tasks in which context is important, but limited: generally a small number of properties of the user that posed the question need be considered (e.g., location and time). However, many important questions depend on many more properties of the user. In this paper we consider scenario-based QA, in which questions are augmented with a textual “scenario” that describes some properties of the user. For example, in Figure 1 a user has posed the question “how much support am I eligible for?”, and the answer depends on multiple user properties (namely, their relationship with the deceased, and whether they or other relatives have claimed other benefits). Having multiple contextual properties means these properties can interact. For example, in Figure 1 the answer depends on a conjunction of conditions (e.g. “if both” in Scenario 1) and also a disjunction of conditions (e.g. either being a “relative” or a “close friend” in Scenario 2). In our benchmarks, scenarios are informative but not complete, so the goal of the system is to identify possible answers, i.e. answers that are logically consistent with the scenario, as well as any conditions that are necessary for the answer to hold but are not entailed by the scenario. For example, in Figure 1 Scenario 1, the system should provide the answer “up to $1200” but must also note that the condition “you didn’t claim other benefits” is required by the answer and not entailed by the scenario. We refer to such conditions as unsatisfied conditions.
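To make the task concrete, here is a hypothetical input/output record in the spirit of Figure 1; the field names, the document wording, and the scenario text are illustrative assumptions, not the actual dataset schema.

```python
# A hypothetical scenario-based QA example (field names and text are illustrative).
example = {
    "question": "How much support am I eligible for?",
    "scenario": "I am the partner of the deceased and we lived together.",
    "document": (
        "You can get up to $1200 if both: you were the partner of the deceased, "
        "and you didn't claim other benefits. A relative or close friend who "
        "paid for the funeral can get up to $800."
    ),
    # a logically consistent answer plus the conditions it still requires
    "answer": "up to $1200",
    "unsatisfied_conditions": ["you didn't claim other benefits"],
}
print(example["answer"], example["unsatisfied_conditions"])
```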
This task is challenging because, in addition to finding eligible answers from documents, it also requires models to perform two non-trivial reasoning tasks. First, a model must understand the document well enough to identify the conditions given as context for the answer (each property that may affect the answer is considered a condition) and the logical relationship between these conditions. For example, in Figure 1 Scenario 1, it requires both “the partner of the deceased...” and “you didn’t claim other benefits” to be satisfied (i.e. a conjunction), while it requires either a “relative” or a “close friend” (i.e. a disjunction) in Scenario 2. Second, a model must identify which conditions are entailed by the information provided in user scenarios, which are contradicted, and which are not mentioned but are required to support an eligible answer. Previous work by Clark et al. (2020b) has shown that pretrained Language Models (LMs), e.g. RoBERTa (Liu et al., 2019), can be finetuned to perform a similar reasoning task over hypothetical statements, i.e. “if A and B then C”. However, the conditions used in their experiments are oversimplified and sometimes semantically incorrect, e.g. A = “Mike is strong” and B = “Cindy is green”. Furthermore, the language used to describe the relationship between conditions is simple, and the number of conditions involved in the reasoning process is small. All of the factors above make the proposed task easy for existing models (Liu et al., 2019; Raffel et al., 2019), but under-represent the challenges that exist in real problems which require reasoning with logically interacting conditions. Furthermore, previous work (Clark et al., 2020b) makes the assumption that every condition must be either satisfied or contradicted by the evidence provided in questions. As a result, no “unsatisfied condition” is required in predictions. We do not make such an assumption, but instead only provide evidence for a subset of conditions, and ask models to predict a logically consistent answer and identify conditions that are required but not yet satisfied, i.e. unsatisfied conditions. Indeed, experiments (Sun et al., 2021a) show that pretrained language models (LMs), e.g. T5 (Raffel et al., 2019), struggle to predict unsatisfied conditions. Even when an additional module is specifically trained to predict unsatisfied conditions (Gao et al., 2020b; Ouyang et al., 2020), performance is still limited. We propose a simple yet effective model, T-Reasoner, which models the relationship between conditions and performs the reasoning task to verify answers that are consistent with user scenarios and to identify conditions that are unsatisfied. T-Reasoner contains three main modules, an entailment module, a reasoning module, and a decoding module, which are jointly trained. The entailment module predicts whether conditions have been entailed or contradicted by users’ scenarios. The reasoning module infers the relationship between conditions and then performs a reasoning step to decide whether the information provided in user scenarios is sufficient, and to identify unsatisfied conditions otherwise. If the answer is a free-form text span, T-Reasoner additionally uses a generation module to predict the answer span.
T-Reasoner shows excellent reasoning ability on a synthetic dataset and outperforms the previous state-of-the-art models on two Question Answering (QA) datasets, ConditionalQA and ShARC (Sun et al., 2021a; Saeidi et al., 2018), improving the state-of-the-art by 3-10 points on answer and unsatisfied condition prediction tasks. 2 RELATED WORK The task proposed by Clark et al. (2020b) is commonly referred to as deductive reasoning where all information required to find a definite answer is provided. Other models have been developed for deductive reasoning with symbolic rules (Cohen, 2016; Cohen et al., 2020; Sun et al., 2020; Ren et al., 2020; Ren & Leskovec, 2020). Embedding-based methods (Sun et al., 2020; Ren et al., 2020; Ren & Leskovec, 2020) first convert symbolic facts and rules to embeddings and then apply neural network layers on top to softly predict answers. These models differ from our work in that the symbolic structure of the rules is typically known, whereas in our model it is implicit in a document. Other recent work in deductive reasoning focused on tasks where rules and facts are expressed in natural language (Talmor et al., 2020; Saeed et al., 2021; Clark et al., 2020b; Kassner et al., 2020). Such tasks are more challenging because the model has to first understand the logic described in the natural language sentences before performing logical reasoning. Many of these models rely on rules that are produced by templates, or templated rules that have been paraphrased by crowd workers. In our work, the logical interactions analogous to these rules are implicit in real-world documents. Different from most reasoning tasks, the task considered in this paper provides a list of conditions that, if true, would support an answer. Identifying such conditions is usually called abductive reasoning, as opposed to deductive reasoning. Very limited work has explored abductive reasoning for QA. Previous work (Gao et al., 2020a;b; Ouyang et al., 2020) on the ShARC (Saeidi et al., 2018) dataset propose to solve this problem by predicting a special label “inquire” if there was not enough information to make a definite prediction. Specifically, EMT and DISCERN (Gao et al., 2020a;b) computed an entailment vector for each condition and performed a weighted sum of those vectors to predict the final answer. DGM (Ouyang et al., 2020) additionally introduced a GCN-based model to better represent the entailment vectors. Even though these models were able to predict the answer labels as “inquire” when there were unsatisfied conditions, none of them predict which conditions needed to be further satisfied, unlike our model. Our model is also more scalable than these, as it does not require concatenating a full context and a question. 3 MODEL 3.1 TASK: QA WITH CONDITIONS The scenario-based QA task requires models to find answers that are logically consistent with the provided user scenarios which are potentially incomplete. In this paper, we consider this task in the reading comprehension (RC) setting in which a passage that contains relevant information about the question is provided. We leave the open-domain setting of this problem for future work. Specifically, a model takes a question, a scenario, and a passage that contains answers and conditions as input and predicts logically consistent answers and their unsatisfied conditions. Let’s consider a passage that contains a set of conditions C = {c1, . . . 
, cn} and the set of eligible answers for a question under all possible combinations of conditions A = {a1, . . . , am}. Each answer ai ∈ A is restricted by a subset of conditions Ci ⊆ C. Conditions in Ci interact with each other under a relationship Ri (Ri is an abstract set which will not be explicitly expressed). A condition group, Gi = (Ci, Ri), is a pair of Ci and Ri which describes in what scenario the answer ai is correct. Note that the list of answers A, the sets of conditions Ci, and their relationships Ri are not explicitly provided in training and testing examples – models have to generate them from the passage. We say that a condition group Gi is satisfied if its underlying logical statement, which consists of Ci and Ri, has been satisfied by the scenario, for example, in Scenario 2 in Figure 1, where the condition group for “up to $800” has been satisfied. Besides being satisfied, a condition group Gi has two more possible outcomes: (1) Gi is partially satisfied if some of the conditions have been satisfied but there is still some information missing, so the answer is not fully supported, e.g. the condition group of the answer “up to $1200” in Scenario 1 (Figure 1), and (2) Gi is contradicted if one or more conditions in the group are contradicted, which makes the answer ineligible, e.g. the condition group of the answer “up to $1200” in Scenario 2 (Figure 1). An answer ai is logically consistent with the scenario if the underlying condition group Gi is satisfied or partially satisfied. We denote the set of logically consistent answers Ã ⊆ A. The set Ã contains zero or more answers – it is empty if none of the answers in A is logically consistent with the user scenario. A model should predict an answer from Ã if Ã is not empty, and otherwise mark the question as not answerable.²

In addition to predicting logically consistent answers, we also perform the task of finding unsatisfied conditions C̃i. The set C̃i should be concise, i.e. it should only include the conditions that are necessary. For example, the condition “have worked for more than 4 years” is not an unsatisfied condition because whether it has been satisfied or not won’t affect the output of the condition group. In summary, we evaluate a model’s prediction of a logically consistent answer ai ∈ Ã and the set of unsatisfied conditions C̃i for answer ai, i.e. (ai, C̃i). Answers and unsatisfied conditions in the output are jointly evaluated.³ This task specifically challenges a model’s ability to understand the relationship between conditions and to perform the logical reasoning process accordingly. We will introduce a simple and effective model, T-Reasoner, to tackle this challenging reasoning task.

3.2 MODEL
In this section, we discuss T-Reasoner, which consists of an entailment module, a reasoning module, and optionally a decoding module, to perform this challenging QA task in embedding space.

Input T-Reasoner takes a question q with scenario e and a passage p as inputs and predicts an answer ai that is logically consistent with the user scenario, along with a list of unsatisfied conditions C̃i. Since the list of all conditions C for the question is not provided in the example, we chunk the passage p into pieces and consider each piece of text as a condition ci. Conditions obtained this way may be irrelevant to the question. We rely on the entailment module (see next) to decide whether a condition ci is relevant and what its relationship with other conditions is. The chunking strategy may be different for different datasets.
Please see §4.2 and §4.3 for more information. Briefly, passages are usually chunked into sentences, short passages of 2-3 sentences, or sub-sentences (text phrases).

Entailment Module We apply an entailment module to check whether each condition ci ∈ C has been entailed by the user scenario. Each condition ci is checked independently, as opposed to concatenating all conditions into a long input and checking them all at once. This strategy significantly reduces the computation cost compared to checking all conditions at once, especially if the context is long, e.g. legal documents which are tens or hundreds of pages long (see examples in §4.3). Specifically, the computation complexity of our approach is O(|C|), where |C| is the number of total conditions, compared to a complexity of O(|C|^2) otherwise.

² An oracle model should be able to predict all answers from Ã. We consider a slightly simplified setting in this paper in which a model is only required to predict one of the answers. In our experiments, the ShARC (Saeidi et al., 2018) dataset only contains questions that have a single answer, i.e. |Ã| = 1. The ConditionalQA (Sun et al., 2021a) dataset contains questions that have multiple answers, |Ã| > 1, so the performance will be sacrificed. We leave the task of predicting all logically consistent answers as future work.
³ Evaluation metrics are different in different datasets (Sun et al., 2021a; Saeidi et al., 2018). Please refer to §4.3 and §4.2 for more details.

This independent checking strategy, however, separates each condition from its context and thus causes a loss of contextual information for each condition ci, which eventually negatively impacts the model’s performance. Thus, we extend a condition ci by adding tokens from its surroundings. For example, the condition “the partner of the deceased when they died” is expanded to “... up to $1200 may be awarded if both: <CDT> the partner of the deceased when they died <\CDT> you didn’t claim ...”, where <CDT> and <\CDT> are two special tokens that mark the beginning and end of the condition ci. Apart from making a condition ci more fluent and coherent, the added contextual tokens also make it easier to understand the relationship between the current condition ci and other conditions in its neighbourhood. We may additionally add page titles, section titles, prompts of list items, or table headings, etc., if applicable, to the expanded conditions. Please see §4.3 and §4.2 for more details. We denote the condition ci with extended contextual information as si.

We learn a Transformer model for the entailment module which takes an expanded condition si, the question q, and the scenario e as input, and returns a list of vectors si, hi,1, . . . , hi,m. The first vector si is a summarization vector which includes several aspects of information: (1) whether the underlying condition ci has been satisfied, contradicted, or not mentioned by the user scenario, (2) whether the condition ci is relevant to the question, and (3) if relevant, what its relationship with other conditions in its neighbourhood is. This information will be used for reasoning in later layers. The embeddings hi,1, . . . , hi,m are token embeddings that will be used for decoding if needed. Please see the description of the reasoning module for more information.

si, hi,1, . . . , hi,m = Entail(si, e, q)     (1)
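A minimal sketch of this per-condition encoding step is given below. It uses a small randomly initialized torch.nn.TransformerEncoder and toy token ids purely for illustration; the actual model initializes this encoder from a pretrained LM checkpoint, and all sizes here are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class EntailmentModule(nn.Module):
    """Encodes one expanded condition s_i together with the scenario e and question q,
    returning its summary vector and token embeddings (Eq. 1)."""
    def __init__(self, vocab_size=32000, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids):
        # token_ids: (num_conditions, seq_len); each row packs "[condition s_i ; scenario e ; question q]"
        h = self.encoder(self.embed(token_ids))   # (num_conditions, seq_len, d_model)
        s = h[:, 0, :]                            # summary vector s_i, read off the first position
        return s, h                               # h holds the token embeddings h_{i,1..m} used for decoding

entail = EntailmentModule()
toy_ids = torch.randint(0, 32000, (3, 16))        # 3 conditions, 16 (fake) tokens each
s, h = entail(toy_ids)
print(s.shape, h.shape)                           # torch.Size([3, 256]) torch.Size([3, 16, 256])
```

Because each condition is encoded as its own row, the cost grows linearly in the number of conditions, which is the O(|C|) behaviour described above.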
One may consider supervising this entailment module by adding classification layers on si to explicitly predict the entailment status of condition ci and its relationship with other conditions. However, obtaining supervision labels for these auxiliary tasks can be challenging as they are often not provided in the example. Fortunately, we show that our proposed model, T-Reasoner, can be trained end-to-end without such intermediate supervision.

Decoding Module The decoding module generates an eligible answer âi which is potentially logically consistent with the user scenario. The generated answer âi will not be returned until the status of its condition group Ĝi is verified by the reasoning module (discussed below). The decoding module is analogous to FiD (Izacard & Grave, 2020), i.e. token embeddings from different conditions (which are encoded separately) are concatenated for decoding. Different from Izacard & Grave (2020), which was applied to independently retrieved passages for open-domain QA, the decoding module in T-Reasoner is used on coherent content, i.e. conditions from the same passage. The contextual information in the expanded condition si helps connect conditions that are separately encoded. The decoding module takes the token embeddings for all conditions h1,1, . . . , hn,m computed from Eq. 1 to generate answer spans. The generation task is trained with teacher forcing. We do not write out the teacher-forcing decoding loss l_decode here; please refer to the T5 paper (Raffel et al., 2019) for more information. If a question has multiple logically consistent answers, i.e. |Ã| > 1, we randomly select an answer ai ∈ Ã as the label to train the decoding module.

âi = Decode(h1,1, . . . , hn,m)     (2)

We consider two different types of answers: “Yes”/“No” or free-form answers. In the first case, we simply let the model generate a special token [YESNO] and consider the reasoning result from the reasoning module (see next) as the answer, i.e. the answer is “Yes” if the condition group is satisfied (or partially satisfied) or “No” if contradicted. Since some datasets only contain “Yes”/“No” questions, we can then safely discard the decoding module for these datasets. In the second case, i.e. answers are free-form text spans, we will return generated spans as answers only if their condition groups have been verified as satisfied or partially satisfied by the reasoning module. If the condition group is contradicted, we will mark the question as not answerable.

Reasoning Module The reasoning module combines the local relationships between conditions from their embeddings s1, . . . , sn and performs a logical reasoning process to decide the reasoning result for the condition group Gi of the generated answer âi and to identify unsatisfied conditions C̃i. The input to the reasoning module is the list of vectors s1, . . . , sn for conditions c1, . . . , cn that are output by the entailment module (Eq. 1). We use another Transformer model as our reasoner, because Transformers have the self-attention mechanism which allows conditions {s1, . . . , sn} to attend to each other, so the reasoning result of a condition group can be summarized. This is crucial because, for example, if one of the conditions in a disjunction group is satisfied, the condition group will be automatically satisfied regardless of the status of the other conditions in the same group. We prepend a trainable vector s0 to the list of condition embeddings to summarize the reasoning result. The output vectors ŝ0, ŝ1, . . . , ŝn will be used to predict the status of the condition group and the unsatisfied conditions for the generated answer. The first vector ŝ0 is used to predict the reasoning result of the condition group. If the condition group is partially satisfied, we use the rest of the vectors, ŝ1, . . . , ŝn, to identify unsatisfied conditions. We compute losses on both the reasoning and the unsatisfied condition predictions. Let I_r and I_c be the one-hot labels for the two tasks.

ŝ0, ŝ1, . . . , ŝn = Reason(s0, s1, . . . , sn)
l_reason = softmax_cross_entropy(W_l^T ŝ0, I_r)
l_cond = softmax_cross_entropy(W_c^T ŝi, I_c)
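The following sketch mirrors these equations: a randomly initialized Transformer attends over the condition summaries with the trainable vector s0 prepended, one linear head on ŝ0 predicts the group status, and a second head labels each condition. Label counts, sizes, and the single-example batching are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReasoningModule(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=3,
                 n_reason_labels=2, n_cond_labels=5):
        super().__init__()
        self.s0 = nn.Parameter(torch.randn(1, d_model))           # trainable summary slot
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.reason = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head_reason = nn.Linear(d_model, n_reason_labels)    # plays the role of W_l
        self.head_cond = nn.Linear(d_model, n_cond_labels)        # plays the role of W_c

    def forward(self, cond_vecs, reason_label, cond_labels):
        # cond_vecs: (n, d) condition summaries s_1..s_n from the entailment module
        x = torch.cat([self.s0, cond_vecs], dim=0).unsqueeze(0)   # (1, n+1, d)
        out = self.reason(x).squeeze(0)                           # rows are s_hat_0 .. s_hat_n
        l_reason = F.cross_entropy(self.head_reason(out[:1]), reason_label)
        l_cond = F.cross_entropy(self.head_cond(out[1:]), cond_labels)
        return l_reason + l_cond                                  # l_decode is added when spans are generated

reasoner = ReasoningModule()
cond_vecs = torch.randn(3, 256)                       # e.g. summaries of 3 conditions
loss = reasoner(cond_vecs,
                reason_label=torch.tensor([0]),       # e.g. the group is "satisfied"
                cond_labels=torch.tensor([0, 4, 2]))  # e.g. entailed / unsatisfied / not mentioned
loss.backward()
```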
As discussed above (§3.1), the reasoning results of condition groups have three possible outcomes: “satisfied”, “partially satisfied”, and “contradicted”. We merge the first two into one label, “satisfied”, and differentiate them by whether unsatisfied conditions exist, i.e. r ∈ {satisfied, contradicted} and its one-hot label I_r ∈ {0, 1}^2.⁴ The labels for conditions are “entailed”, “contradicted”, “not mentioned”, “implied”, and “unsatisfied”, i.e. I_c ∈ {0, 1}^5. The first three labels are as they are named. The label “implied” means a condition is implied by other conditions in the condition group; for example, if one of the conditions in a disjunction group has been satisfied, the rest of the conditions are “implied”. The label “unsatisfied” means the condition is an unsatisfied condition which must be returned together with the predicted answer. These labels may not apply to all datasets; e.g. ConditionalQA (Sun et al., 2021a) only annotates two labels (“unsatisfied” vs. others), in which case we change the loss function accordingly.

⁴ Some tasks have an additional class “irrelevant” because some questions in the dataset are not relevant to the provided passages, i.e. I_r ∈ {0, 1}^3.

Loss Function We jointly train the entailment module and the reasoning module. The final loss function is the sum of the answer loss l_reason and the condition entailment loss l_cond. If the answers contain text spans, we jointly train the decoding module with l_decode as well.

l = l_reason + l_cond (without the decoding module)
l = l_reason + l_cond + l_decode (with the decoding module)

3.3 FINETUNE PRETRAINED CHECKPOINTS
The entailment module and the decoding module (if adopted) load pretrained LM checkpoints, e.g. T5 (Raffel et al., 2019) and BART (Lewis et al., 2019). The pretrained parameters are loaded for the entailment module and then finetuned for downstream tasks. The reasoning module is randomly initialized and jointly trained with the other modules. The number of Transformer layers in the reasoning module is a hyper-parameter; we choose l = 3 or l = 4 layers. Please see §4.1 for an ablation study on the number of Transformer layers for the reasoning task. The decoding module is also finetuned. If a decoding module is needed, we initialize the entailment and decoding modules from the same pretrained checkpoint.

4 EXPERIMENTS
We experiment with T-Reasoner on a synthetic dataset, CondNLI, and two benchmark QA datasets, ConditionalQA (Sun et al., 2021a) and ShARC (Saeidi et al., 2018), for the scenario-based QA task.

4.1 CONDNLI
Dataset The synthetic CondNLI dataset is derived from an existing Natural Language Inference (NLI) dataset, MultiNLI (Williams et al., 2018). An original NLI example contains a premise and a hypothesis, and a label indicating whether the premise is entailed or contradicted by the hypothesis. We treat premises in NLI examples as conditions and hypotheses as facts provided in user scenarios.
not “Has two children”, “Has not applied before.”] then “Waive the application fees”.
Question: Is “Eligible for $60 a week” correct?
Scenario: [“65 years old”, “Rejected last year”]
Answer: Yes, [“Employed for two years”]
Table 1: An example in CondNLI. The answer is “Yes” with unsatisfied conditions [“Employed for two years”].

An example is shown in Table 1. The example contains four conditions, among which “Aged 59 1/2 or older” and “Employed for two years” belong to a condition group under the logical reasoning type “all”, indicating that both conditions have to be satisfied in order to “Get at least $60 a week”. The answer statement, e.g. “Get at least $60 a week”, also comes from NLI examples. We treat the premise of an NLI example as an answer statement and the corresponding hypothesis as the question, e.g. Is “Eligible for $60 a week” correct? In addition to the condition group and the answer statement that are relevant to the question, we add a few more condition groups as distractors to make the constructed dataset more challenging. Please see Appendix A for more information on dataset construction.

Baselines Previous work (Clark et al., 2020b) showed that pretrained Transformer-based Language Models, e.g. RoBERTa (Liu et al., 2019), have the ability to reason over multiple conditions to answer a reasoning question in the deductive reasoning setting, e.g. “if A and B then C” with facts on both conditions A and B provided. However, examples in CondNLI are usually longer and won’t fit into RoBERTa’s memory. Instead, we experiment with two other language models, T5 (Raffel et al., 2019) (with the FiD strategy (Izacard & Grave, 2020) to adapt to longer input) and ETC (Ainslie et al., 2020), on the CondNLI dataset.⁵ In ETC, we use the global tokens to predict unsatisfied conditions. In T5, to simplify the generation task, we assign an id to each condition and let FiD generate the ids of unsatisfied conditions. We also compare T-Reasoner with T5 on inputs that contain more conditions to test their generalization ability.

⁵ Examples in CondNLI exceed the limit of 512 tokens in RoBERTa.

Results The experimental results are shown in Table 2. We measure both the accuracy of label prediction and the F1 of unsatisfied conditions. The results show that T-Reasoner performs significantly better than the pretrained LMs, T5 and ETC, in both predicting correct answers (Ans) and unsatisfied conditions (Conds) on CondNLI. We additionally test T-Reasoner’s ability to generalize to more conditions. We train T-Reasoner on templates with 6 conditions or fewer and test it on examples with more than 6 conditions. Figure 3 (Left) shows the change in performance on both the label classification and unsatisfied condition prediction tasks as the number of conditions increases. We observe some decrease in performance on both tasks, but it is still reasonable with 20 conditions. Furthermore, we experiment with different numbers of layers in the reasoning module (Figure 3, Right). The Transformer-based reasoning module needs at least 3 layers for the reasoning task, especially for predicting unsatisfied conditions.

4.2 SHARC
Dataset In the second experiment, we run T-Reasoner on a real scenario-based QA dataset, ShARC (Saeidi et al., 2018), which has complex passages and many conditions.
An example in ShARC contains a passage, a user question, and a user scenario which is expressed as a conversation history between a user and a machine. A model is expected to find an answer to the user’s question, or to raise a clarification question for the unsatisfied conditions. Answers in this dataset are restricted to one of the following labels: “yes”, “no”, “inquire”, and “irrelevant”. The first three labels are equivalent to “satisfied”, “contradicted”, and “partially satisfied”. “Irrelevant” is a new label that should be predicted if the conversation history and the question are irrelevant to the provided passage. This task of predicting answers is called “Decision Making” in the original ShARC paper (Saeidi et al., 2018) and is evaluated with micro and macro accuracy. In addition to the “Decision Making” task, they consider another task, “Question Generation”, which is equivalent to predicting unsatisfied conditions in T-Reasoner,⁶ evaluated with BLEU 1 and BLEU 4 scores. Compared to CondNLI, where conditions and their relationships are clearly mentioned in the context, conditions are embedded in the context in ShARC examples, e.g. Figure 1. Please see Appendix C for more information on data preparation.

⁶ Unsatisfied conditions are then paraphrased into questions, e.g. “Aged 59 1/2 or older” is paraphrased to “Are you aged 59 1/2 or older?”

Baselines and Results We compare T-Reasoner to several strong baseline models, including the previous state-of-the-art models, DISCERN (Gao et al., 2020b) and DGM (Ouyang et al., 2020). Different from the baseline models, which use pipeline systems to separately predict answer labels and unsatisfied conditions, T-Reasoner performs the two tasks jointly. The results are shown in Table 4. T-Reasoner outperforms the previous best models by 3 points on the “Decision Making” task and more than 8 points on the “Question Generation” task, and also significantly outperforms the other baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020).

Model | Decision (micro / macro) | Question (BLEU1 / BLEU4)
CM | 61.9 / 68.9 | 54.4 / 34.4
BERTQA | 63.6 / 70.8 | 46.2 / 36.3
UcraNet | 65.1 / 71.2 | 60.5 / 46.1
Bison | 66.9 / 71.6 | 58.8 / 44.3
E3 | 67.7 / 73.3 | 54.1 / 38.7
EMT | 69.1 / 74.6 | 63.9 / 49.5
DISCERN | 73.2 / 78.3 | 64.0 / 49.1
DGM | 77.4 / 81.2 | 63.3 / 48.4
T-Reasoner | 80.4 / 83.9 | 71.5 / 58.0
Table 4: Experimental results on the ShARC dataset. Numbers for the baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020) are borrowed from Ouyang et al. (2020).

Ablation: Condition Accuracy One problem with the ShARC Question Generation task is that only one of the unsatisfied conditions is annotated, even when multiple unsatisfied conditions exist. To further evaluate T-Reasoner’s performance in predicting all unsatisfied conditions, we manually annotate the logical operations in 20 contexts that have more than one condition (857 examples in total),⁷ and use the annotated logical operations to find all unsatisfied conditions. We report the F1 of the predicted unsatisfied conditions (see Table 5). Compared to the baselines (Gao et al., 2020b; Ouyang et al., 2020), T-Reasoner improves the F1 by 11.4 points.

⁷ Each context in ShARC has 32.9 examples on average.

Model | Decision (micro / macro) | Question (BLEU1 / BLEU4) | Condition (F1)
T5 | 63.7 / 68.2 | 57.3 / 48.2 | 44.0
DISCERN | 74.9 / 79.8 | 65.7 / 52.4 | 55.3
DGM | 78.6 / 82.2 | 71.8 / 60.2 | 57.8
T-Reasoner | 79.8 / 83.5 | 71.7 / 60.4 | 69.2
Table 5: F1 of predicted unsatisfied conditions on ShARC (last column), together with decision and question generation scores.

Ablation: Label Accuracy vs. Conditions We additionally measure the accuracy versus the number of conditions in the context.
Results in Table 6 show that the improvement in T-Reasoner’s performance over the previous state-of-the-art model (DGM) mostly comes from questions that have more than one condition.

4.3 CONDITIONALQA
Dataset In the third experiment, we run T-Reasoner on ConditionalQA (Sun et al., 2021a), which contains longer contexts (documents), more conditions, and more complex relationships between conditions. Furthermore, the ConditionalQA dataset contains a mixture of “Yes”/“No” questions and questions with free-form answers. Please see Appendix B for details on data preparation.

Evaluation As introduced in ConditionalQA (Sun et al., 2021a), predictions are evaluated with two sets of metrics: EM/F1 and conditional EM/F1. EM/F1 are the traditional metrics that measure the accuracy of predicted answer spans. Conditional EM/F1 is a novel metric introduced by Sun et al. (2021a) that jointly measures the accuracy of answer spans and unsatisfied conditions. Please refer to the ConditionalQA paper (Sun et al., 2021a) for more information. Briefly, the conditional EM/F1 is the product of the original answer EM/F1 and the F1 of the predicted unsatisfied conditions. The conditional EM/F1 is 1.0 if and only if the predicted answer span is correct and all unsatisfied conditions are found. If there is no unsatisfied condition, the model should predict an empty set.

Baselines and Results We compare T-Reasoner with several strong baselines, including ETC (in a pipeline) (Ainslie et al., 2020), DocHopper (Sun et al., 2021b), and T5 (with FiD) (Izacard & Grave, 2020). The ETC pipeline first extracts possible answers from the context and then predicts unsatisfied conditions independently. DocHopper is a multi-hop retrieval system that iteratively retrieves evidence which contains answers and unsatisfied conditions. T5 (w/ FiD) is an encoder-decoder model; we train it to generate answers followed by a list of unsatisfied condition ids. The experimental results are presented in Table 7. T-Reasoner significantly outperforms the baselines in predicting answers and in jointly predicting answers and unsatisfied conditions – a relative improvement of 148% (Conditional) and 27.8% (Overall) in conditional F1 (F1 w/ conds).

Ablation: Condition Accuracy Since there is no metric that only measures the quality of the predicted conditions, we additionally report the F1 of the predicted unsatisfied conditions (Table 2). The best baseline model, T5 (w/ FiD), rarely predicts any conditions. Even when we train T5 (w/ FiD) only on the subset of questions that have conditional answers, to force it to predict unsatisfied conditions, its performance improves only slightly and is still 16.5 points lower than T-Reasoner in condition F1.

5 CONCLUSION
We study the problem of scenario-based QA, in which questions are accompanied by incomplete scenarios and models are asked to find answers that are consistent with the provided user scenario. Models are further asked to identify unsatisfied conditions that are necessary for the predicted answers. We propose a system, T-Reasoner, that contains an entailment module to check whether a condition has been satisfied and a jointly trained reasoning module to verify the status of condition groups and predict unsatisfied conditions.
T-Reasoner shows excellent reasoning ability, and can easily generalize to more conditions on a synthetic dataset CondNLI. Furthermore, T-Reasoner achieves state-of-the-art performance on two challenging scenario-based QA datasets ShARC (Saeidi et al., 2018) and ConditionalQA (Sun et al., 2021a). 6 ETHICAL STATEMENT Experiments in this paper are performed on publicly available datasets for academic research purposes. No real user data is used in the experiments. Even though the proposed model could be applied to many real world problems to help users answer their questions, the accuracy of the proposed work is still limited and predictions may be misleading. Please carefully evaluate the performance before applying it to real problems. 7 REPRODUCIBILITY STATEMENT Datasets and codes will be released upon the acceptance of this paper, including all scripts for constructing the proposed synthetic dataset and the preprocessing script for the two benchmark QA datasets. Models are trained on public available data. All results are reproducible. A CONDNLI DATASET CONSTRUCTION We first construct templates for the CondNLI examples and then instantiate the variables in the template with real NLI examples. Construct Templates We use capital letter A, B, . . . to represent conditions and lower-cased letters a, b, . . . to represent the corresponding facts. We use another few letters X , Y , . . . to represent the statements of conclusion, and lower-cased letters x, y, . . . to represent questions. Conditions are grouped together under a logical operator that specifies the relationship between conditions. For example, a logical operator “all” specifies that all conditions in the group must be satisfied in order to make the condition group satisfied. Here, we consider four types of logical operations to construct this synthetic dataset: • “all”: all conditions under this logical type should be satisfied in order to make the answer true. • “any”: only requires one of the conditions under the logical type “any” to be satisfied. It doesn’t matter whether other conditions have been satisfied, contradicted, or not mentioned in the question. • “required”: This is a special case of “all” / “any” when there is only one condition. Conditions with the logical type “required” must be satisfied. • “optional”: Conditions have the type “optional” if they are not relevant to the question. We pair a condition group with a conclusion statement and get a logical statement “If all (A, B), then X”. To challenge models’ ability in identifying relevant conditions from context, we add a few distracting statement that leads to different conclusions, e.g. “If all (not C, D), then Y ”. An example of a context template is shown in Table 9. Facts are constructed by randomly sampling a subset from all possible facts {a, b, . . . }. A question is sampled from possible questions {x, y, . . . }. We then compute the answer (and unsatisfied conditions if any) from the context, facts, and the question. Generate Examples For a templates with variables A, B, X , Y , . . . , a, b, x, y, . . . , we instantiate the variables with NLI examples to get the real data. We use the premises of original NLI examples for conditions and conclusions, i.e. capital letter variables, and the hypothesis for question and facts, i.e. lower-case variables. Note that sampling requires matching the entailment state of conditions, e.g. “not d” requires sampling from NLI examples that are labeled as “contradict”. 
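The sketch below illustrates this instantiation step at a toy scale. The template format and the "all"/"any" logic follow the description above, but the three-entry stand-in for MultiNLI, the sampling of shown facts, and the omission of contradicted facts and distractor groups are simplifications for illustration.

```python
import random

# Toy stand-in for MultiNLI: premise -> a hypothesis it entails.
# Real CondNLI samples these pairs from MultiNLI and also uses contradicted
# hypotheses and distractor condition groups, which are omitted here for brevity.
ENTAILS = {
    "Aged 59 1/2 or older":    "65 years old",
    "Employed for two years":  "Has worked there since 2018",
    "Get at least $60 a week": "Eligible for $60 a week",
}

def instantiate(op, condition_premises, conclusion_premise):
    """Build one CondNLI-style example from a template like: if all [A, B] then X."""
    context = f'If {op} {condition_premises} then "{conclusion_premise}".'
    question = f'Is "{ENTAILS[conclusion_premise]}" correct?'
    # Reveal facts for a random, possibly incomplete subset of the conditions.
    shown = random.sample(condition_premises, k=random.randint(1, len(condition_premises)))
    scenario = [ENTAILS[c] for c in shown]
    missing = [c for c in condition_premises if c not in shown]
    if op == "all":
        answer, unsatisfied = "Yes", missing      # every shown fact entails its condition
    else:  # "any": a single entailed condition satisfies the whole group
        answer, unsatisfied = "Yes", []
    return context, scenario, question, answer, unsatisfied

print(instantiate("all",
                  ["Aged 59 1/2 or older", "Employed for two years"],
                  "Get at least $60 a week"))
```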
We restrict the number of conditions in the context to 6 and randomly generate 65 distinct templates.8 During training, we randomly pick a template and instantiate it with NLI examples to generate real training examples. This random generation process enables creating (almost) unlimited amount of training data. We randomly generate another 5000 examples for development and testing. Quality Assessment Training and validation data in CondNLI are generated from NLI examples in the training and validation split of the MNLI dataset, respectively. This ensures that NLI examples used in validation are not exposed at training time. We control the generation process to ensure that the automatically generated data are balanced in terms of answer labels, logical types of interacting conditions, and number of conditions included in scenarios. Results are shown in Table 10. We additionally require scenarios must have at least 4 conditions to avoid overly simple examples. We additionally measure the Jaccard distance between premises and hypotheses of the NLI examples used in constructing the CondNLI dataset. The token-level Jaccard distance is 27.2. Even though token-level overlap exists, a model still needs to understand the semantic relationship between premises and hypotheses to predict their entailment status. B CONDITIONALQA EXPERIMENT DETAILS An example in the ConditionalQA dataset provides a parsed web page as context. It also provides a question, and a user scenario that is relevant to the context. We prepend the user scenario to the question as input to the model. The context in ConditionalQA is provided as a list of HTML elements. We treat each element at the leaf of the DOM tree as a condition ci, and prepend all its parents (from its direct parent to the root) to get an expanded condition si. Since we need the decoding module to generate answer spans, we initialize the model with T5, i.e. we use parameters from the encoder to initialize the entailment module, and use decoder to initialize the decoding module. The reasoning module is randomly initialized. C SHARC EXPERIMENT DETAILS Different from ConditionalQA, where each sentence in the context is treated as a condition, conditions in the ShARC dataset are shorter and are sometimes short phrases (sub-sentence). For example, the context “If you are a female Vietnam Veteran with a child who has a birth defect, you are eligible for ...” contains two conditions, “If you are a female Vietnam Veteran” and “with a child who has a birth defect”.9 In order to handle sub-sentence conditions, we follow the strategy proposed in two 8Restricting the number of conditions is only for the purpose of reducing training complexity. The experiment in Figure 3 (left) shows the model’s capability of generalizing to more conditions. 9It is arguable that this could be generally treated as one condition, but it is treated as two conditions with the logical operator “all” in the ShARC dataset. of the baseline models, DISCERN Gao et al. (2020b) and DGM Ouyang et al. (2020), that split a sentence into EDUs (Elementary Discourse Units) using a pretrained discourse segmentation model Li et al. (2018). The discourse segmentation model returns a list of sub-sentences, each considered as a condition. While we could treat each condition independently as we did previously for other datasets, the segmented EDUs are different in that they are not full sentences and may not retain their semantic meaning. 
Thus, we consider using the full context (usually less than 512 tokens) as the contextual information for condition ci, i.e. the expanded condition si includes the full context, but the condition ci is highlighted using the special tokens <CDT> and <\CDT>. We do not need the decoding module for the ShARC dataset, so we can safely discard it. We initialize the entailment module with ELECTRA (Clark et al., 2020a). The previous state-of-the-art baselines (Ouyang et al., 2020; Gao et al., 2020b) use ELECTRA to initialize their model. We use the same pretrained checkpoint to make a fair comparison. For the question generation task, we use the same input s as in decision making, except that we replace the prefix “condition:” with “unsatisfied condition:” for “unsatisfied” conditions. We fine-tune a T5 model for question generation. D DATASET STATISTICS Dataset statistics are shown in Table 11.
Summary Of The Paper
The paper looks at QA when questions are asked in a given scenario, and they can be answered only if the model is provided with information about the scenario. In addition, such questions require a high level of reasoning, so the model should also be able to infer how conditions interact with each other and find correct answers that possibly satisfy all the conditions. To deal with these challenges, the paper introduces a new dataset derived from an existing one (i.e., MultiNLI), and T-Reasoner, which contains an entailment module to check if conditions are satisfied by the scenario, a decoding module to identify eligible answers, and finally a reasoning module. Results on the synthetic dataset proposed in this paper show that it outperforms other SOTA models.

Strengths And Weaknesses
Strengths: The paper models a relevant scenario and shows some interesting advancements in an important direction for QA systems. T-Reasoner is a good attempt to jointly model the full process.
Weaknesses: Limitations of the synthetic dataset are not discussed (see my comments below).

Clarity, Quality, Novelty And Reproducibility
The paper is mostly understandable, and the results support the idea of the paper. Code and data will be available upon acceptance.
Title Scenario-based Question Answering with Interacting Contextual Properties Abstract In the scenario-based Question Answering (QA) task, models are asked to find answers that are appropriate to the user scenarios associated with the question and identify information that is missing from the scenarios but is necessary for the answers to hold. Scenarios commonly include multiple properties of users, such as age, employment status, and income level for the question “How much can I claim from this benefit”. The properties relevant to a potential answer are given in a document, which will state conditions necessary for the answer to hold. Documents also may specify how conditions interact with each other, e.g. with text like “one of the conditions below must apply”. Although understanding the relationship between conditions is crucial for solving this challenging QA task, limited work has been done so far in modeling this. In this paper, we propose the T-Reasoner model, which solves this problem with three jointly learned modules: an entailment module which checks whether a condition has been satisfied by the scenario, a decoding module which locates eligible answers from documents, and a reasoning module which infers the relationship between conditions and performs a reasoning step to determine the logically consistent answers and identify missing conditions. T-Reasoner outperforms strong baselines on a synthetic scenariobased QA dataset and achieves a new state-of-the-art on two scenario-based QA benchmarks, outperforming the prior best models by 3-10 points. 1 1 INTRODUCTION Many questions can only be answered correctly after some context for the question is supplied or inferred: e.g., “When is the next LA Lakers home game” needs temporal context, and “Where is the closest pizza place” needs geographical context. Prior work on contextual QA (Zhang & Choi, 2021; Dhingra et al., 2021; Kasai et al., 2022; Chen et al., 2021) has focused on tasks in which context is important, but limited: generally a small number of properties of the user that posed the question need be considered (e.g., location and time). However, many important questions depend on many more properties of the user. In this paper we consider scenario-based QA, in which questions are augmented with a textual “scenario” that describes some properties of the user. For example, in Figure 1 a user has posed a question “how much support am I eligible for?” , and the answer depends on multiple user properties (namely, their relationship with deceased, and whether they or other relatives have claimed other benefits.) Having multiple contextual properties means these properties can interact. For example, in Figure 1 the answer depends on a conjunction of conditions (e.g. “if both” in Scenario 1) and also a disjunction of conditions (e.g. either being a “relative” or a “close friend” in Scenario 2). In our benchmarks, scenarios are informative but not complete, so the goal of the system is to identify possible answers—i.e., answers that are logically consistent with the scenario—as well as any conditions that necessary for the answer to hold which are not entailed by the scenario. For example, in Figure 1 Scenario 1, the system should provide the answer “up to $1200” but must also note that the condition “you didn’t claim other benefits” is required by the answer, and not entailed by the scenario. We refer to such conditions as unsatisfied conditions. 
This task is challenging because in addition to finding eligible answers from documents, it also requires models to perform two non-trivial reasoning tasks. First, it must understand the document well enough to understand conditions given as 1Codes and data are available at https://github.com/haitian-sun/T-Reasoner. context for the answer (each property that may affect the answer is considered as a condition), and the logical relationship between these conditions. For example, in Figure 1 Scenario 1, it requires both “the partner of the deceased...” and “you didn’t claim other benefits” to be satisfied (i.e. conjunction), while it requires either a “relative” or “close friend” (i.e. disjunction) in Scenario 2. Second, a model must identify which conditions are entailed by information provided in user scenarios, which are contradicted, and which are not mentioned but are required to support an eligible answer. Previous work by Clark et al. (2020b) has shown that pretrained Language Models (LMs), e.g. RoBERTa (Liu et al., 2019), can be finetuned to perform a similar reasoning task over hypothetical statements, i.e. “if A and B then C”. However, conditions used in their experiments are over simplified and sometimes semantically incorrect, e.g. A = “Mike is strong” and B = “Cindy is green”. Furthermore, languages used to described the relationship between conditions are easy, and the number of conditions involved in the reasoning process is small. All factors above make the proposed task easy for existing models (Liu et al., 2019; Raffel et al., 2019), but under-represents the challenges exists in real problems that require reasoning with logically interacting conditions. Furthermore, previous work (Clark et al., 2020b) makes an assumption that every conditions must be either satisfied or contradicted by the evidence provided in questions. As a result, no “unsatisfied condition” is required in predictions. We do not make such assumption, but instead only provide evidences for a subset of conditions, and ask models to predict a logically consistent answer and identify conditions that are required but not yet satisfied, i.e. unsatisfied conditions. Indeed, experiments (Sun et al., 2021a) show that pretrained language models (LMs), e.g. T5 (Raffel et al., 2019), struggle to predict unsatisfied conditions. Even though an additional module is specifically trained to predict unsatisfied conditions (Gao et al., 2020b; Ouyang et al., 2020), their performance is still limited. We propose a simple yet effective model, T-Reasoner, which models the relationship between conditions and performs the reasoning task to verify answers that are consistent with user scenarios and identify conditions that are unsatisfied. T-Reasoner contains three main modules, an entailment module, a reasoning module, and a decoding module, which are jointly trained. The entailment module predicts whether conditions have been entailed or contradicted by users’ scenarios. The reasoning module infers the relationship between conditions then performs a reasoning step to decide whether the provided information in user scenarios is sufficient and to identify unsatisfied conditions otherwise. If the answer is a free-form text span, T-Reasoner additionally uses a generation module to predict the answer span. 
T-Reasoner shows excellent reasoning ability on a synthetic dataset and outperforms the previous state-of-the-art models on two Question Answering (QA) datasets, ConditionalQA and ShARC (Sun et al., 2021a; Saeidi et al., 2018), improving the state-of-the-art by 3-10 points on answer and unsatisfied condition prediction tasks. 2 RELATED WORK The task proposed by Clark et al. (2020b) is commonly referred to as deductive reasoning where all information required to find a definite answer is provided. Other models have been developed for deductive reasoning with symbolic rules (Cohen, 2016; Cohen et al., 2020; Sun et al., 2020; Ren et al., 2020; Ren & Leskovec, 2020). Embedding-based methods (Sun et al., 2020; Ren et al., 2020; Ren & Leskovec, 2020) first convert symbolic facts and rules to embeddings and then apply neural network layers on top to softly predict answers. These models differ from our work in that the symbolic structure of the rules is typically known, whereas in our model it is implicit in a document. Other recent work in deductive reasoning focused on tasks where rules and facts are expressed in natural language (Talmor et al., 2020; Saeed et al., 2021; Clark et al., 2020b; Kassner et al., 2020). Such tasks are more challenging because the model has to first understand the logic described in the natural language sentences before performing logical reasoning. Many of these models rely on rules that are produced by templates, or templated rules that have been paraphrased by crowd workers. In our work, the logical interactions analogous to these rules are implicit in real-world documents. Different from most reasoning tasks, the task considered in this paper provides a list of conditions that, if true, would support an answer. Identifying such conditions is usually called abductive reasoning, as opposed to deductive reasoning. Very limited work has explored abductive reasoning for QA. Previous work (Gao et al., 2020a;b; Ouyang et al., 2020) on the ShARC (Saeidi et al., 2018) dataset propose to solve this problem by predicting a special label “inquire” if there was not enough information to make a definite prediction. Specifically, EMT and DISCERN (Gao et al., 2020a;b) computed an entailment vector for each condition and performed a weighted sum of those vectors to predict the final answer. DGM (Ouyang et al., 2020) additionally introduced a GCN-based model to better represent the entailment vectors. Even though these models were able to predict the answer labels as “inquire” when there were unsatisfied conditions, none of them predict which conditions needed to be further satisfied, unlike our model. Our model is also more scalable than these, as it does not require concatenating a full context and a question. 3 MODEL 3.1 TASK: QA WITH CONDITIONS The scenario-based QA task requires models to find answers that are logically consistent with the provided user scenarios which are potentially incomplete. In this paper, we consider this task in the reading comprehension (RC) setting in which a passage that contains relevant information about the question is provided. We leave the open-domain setting of this problem for future work. Specifically, a model takes a question, a scenario, and a passage that contains answers and conditions as input and predicts logically consistent answers and their unsatisfied conditions. Let’s consider a passage that contains a set of conditions C = {c1, . . . 
, cn} and the set of eligible answers for a question under all possible combinations of conditions A = {a1, . . . , am}. Each answer ai ∈ A is restricted by a subset of conditions Ci ⊆ C. Conditions in Ci interact with each other under relationship Ri (Ri is an abstract set which will not be explicitly expressed). A condition group, Gi = (Ci, Ri) is a pair of Ci and Ri, which describes in what scenario the answer ai is correct. Note that the list of answers A, sets of conditions Ci’s and their relationship Ri’s are not explicitly provided in training and testing examples – models have to generate them from the passage. We say that a condition group Gi is satisfied if its underlying logical statement that consists of Ci and Ri has been satisfied by the scenario, for example, in Scenario 2 in Figure 1 where the condition group for “up to $800” has been satisfied. Besides being satisfied, a condition group Gi has two more possible outcomes: (1) G is partially satisfied if some of the conditions have been satisfied but there is still some information missing so the answer is not fully supported, e.g. the condition group of the answer “up to $1200” in Scenario 1 (Figure 1), and (2) Gi is contradicted if one or more conditions in the group are contradicted which causes the answer ineligible, e.g. the condition group of the answer “up to $1200” in Scenario 2 (Figure 1). An answer ai is logically consistent with the scenario if the underlying condition group Gi is satisfied or partially satisfied. We denote the set of logically consistent answers à ⊆ A. The set à contains zero or more answers – the set à is empty if none of the answers in A is logically consistent with the user scenario. A model should predict an answer from à if à is not empty, and mark the question as not answerable, otherwise.2 In addition to predicting logically consistent answers, we also perform the task of finding unsatisfied conditions C̃i. The set C̃i should be concise, i.e. it should only include the conditions that are necessary. For example, the condition “have worked for more than 4 years” is not an unsatisfied condition because whether it has been satisfied or not won’t affect the output of the condition group. In summary, we evaluate a model’s prediction of a logically consistent answer ai ∈ à and the set of unsatisfied conditions C̃i for answer ai, i.e. (ai, C̃i). Answers and unsatisfied conditions in the output are jointly evaluated.3 This task specifically challenges models’ ability in understanding the relationship between conditions and performing logical reasoning process accordingly. We will introduce a simple and effective model, T-Reasoner, to tackle this challenging reasoning task. 3.2 MODEL In this section, we will discuss T-Reasoner which consists of an entailment module, a reasoning module, and optionally a decoding module, to perform this challenging QA task in embedding space. Input The model, T-Reasoner, takes a question q with scenario e and a passage p as inputs and predicts an answer ai that is logically consistent with the user scenario and a list of unsatisfied conditions C̃i. Since the list of all conditions C for the question are not provided in the example, we chunk the passage p into pieces and consider each piece of text as a condition ci. Conditions obtained this way may be irrelevant to questions. We rely on the entailment module (see next) to decide whether a condition ci is relevant and what is its relationship with others. The chunking strategy may be different for different datasets. 
Please see §4.2 and §4.3 for more information. Briefly, passages are usually chunked into sentences, short passages with 2-3 sentences, or sub-sentences (text phrases). Entailment Module We apply an entailment module to check whether each condition ci ∈ C have been entailed by the user scenario. Each condition ci is checked independently, as opposed to concatenating all conditions into a long input and checking them all at once. This strategy significantly reduce the computation cost compared to checking all conditions at once, especially if context is long, e.g. legal documents which are tens or hundreds of pages long (see examples in 4.3). Specifically, 2An oracle model should be able to predict all answers from Ã. We consider a slightly simplified setting in this paper in which a model is only required to predict one of the answers. In our experiments, the ShARC (Saeidi et al., 2018) dataset only contains questions that have a single answer, i.e. |Ã| = 1. The ConditionalQA (Sun et al., 2021a) dataset contains questions that have multiple answers, |Ã| > 1, so the performance will be sacrificed. We leave the task of predicting all logically consistent answers as future work. 3Evaluation metrics are different in different datasets Sun et al. (2021a); Saeidi et al. (2018). Please refer to §4.3 and 4.2 for more details. the computation complexity of our approach is O(|C|) where |C| is the number of total conditions, compared to a complexity of O(|C|2) otherwise. This independent checking strategy, however, separates each condition from its context and thus causes a lost of contextual information for conditions ci and eventually negatively impacts the model’s performance. Thus, we extend a condition ci by adding tokens from its surroundings. For example, the condition “the partner of the deceased when they died” is expanded to “... up to $1200 may be awarded if both: <CDT> the partner of the deceased when they died <\CDT> you didn’t claim ...”, where <CDT> and <\CDT> are two special tokens that mark the beginning and end of the condition ci. Apart from making a condition ci more fluent and coherent, the added contextual tokens also make it easier to understand the relationship between the current condition ci and other conditions in its neighbours. We may additionally add page titles, section titles, prompts of list items, or table headings, etc., if applicable to the expanded conditions. Please see §4.3 and 4.2 for more details. We denote conditions with extended contextual information as si for condition ci. We learn a Transformer model for the entailment module which takes an expanded condition si and the question q and scenario e as input, and returns a list of vectors si,hi,1, . . . ,hi,m. The first vector si is a summarization vector which includes several aspects of information: (1) whether the underlying condition ci has been satisfied, contradicted, or not mentioned by the user scenario, (2) whether the condition ci is relevant to the question, and (3) if relevant, what is its relationship with other conditions in its neighbours. These information will be used for reasoning in the future layers. Embeddings hi,1, . . . ,hi,m are token embeddings that will used for decoding if needed. Please see the description of the reasoning module for more information. si,hi,1, . . . 
,hi,m = Entail(si, e, q) (1) One may consider supervising this entailment module by adding classification layers on si to explicitly predict the entailment status of condition ci and its relationship with other conditions. However, obtaining supervision labels for these auxiliary tasks can be challenging as they are often not provided in the example. Fortunately, we show that our proposed model, T-Reasoner, can be trained end-to-end, without such intermediate supervision. Decoding Module The decoding module generates an eligible answer âi which is potentially logically consistent to the question. The generated answer âi will not be returned until the status of its condition group Ĝi is verified by the reasoning module (discussed below). The decoding module is analogous to FiD (Izacard & Grave, 2020), i.e. token embeddings from different conditions (which are encoded separately) are concatenated for decoding. Different from Izacard & Grave (2020) which was applied to independently retrieved passages for open-domain QA, the decoding module in T-Reasoner is used on coherent content, i.e. conditions from the same passage. The contextual information in the expanded condition si helps connect conditions that are separately encoded. The decoding module takes token embeddings for all conditions h1,1, . . . ,hn,m computed from Eq. 1 to generate answer spans. The generation task is trained with teacher forcing. We do not write out the teacher forcing decoding loss ldecode here. Please refer to the T5 paper (Raffel et al., 2019) for more information. If questions have multiple logically consistent answers, i.e. à > 1, we randomly select an answer ai ∈ à as the label to train the decoding module. âi = Decode(h (1) 1,1, . . . ,h (n) kn,m ) (2) We consider two different types of answers: “Yes”/“No” or free-form answers. In the first case, we simply let the model generate a special a special token [YESNO] and consider the reasoning result from the reasoning module (see next) as the answer, i.e. the answer is “Yes” if the condition group is satisfied (or partially satisfied) or “No” if contradicted. Since some datasets only contain “Yes”/“No” questions, we can then safely discard the decoding module for these datasets. In the second case, i.e. answers are free-form text spans, we will return generated spans as answers only if their condition groups have been verified as satisfied or partially satisfied by the reasoning module. If the condition group is contradicted, we will mark the question as not answerable. Reasoning Module The reasoning module combines the local relationship between conditions from their embeddings s1, . . . , sn and performs a logical reasoning process to decide the reasoning result for a condition group Gi for the generated answer âi and to identify unsatisfied conditions C̃i. The input to the reasoning module is a list of vectors, s1, . . . , sn for conditions c1, . . . , cn, that are output by the entailment module (Eq. 1). We use another Transformer model as our reasoner, because Transformers have the self attention mechanism which allows conditions {s1, . . . , sn} to attend to each other, so the reasoning result of a condition group can be summarized. This is crucial because, for example, if one of the conditions in a disjunction group is satisfied, the condition group will be automatically satisfied regardless the status of other conditions in the same group. We prepend a trainable vector s0 to the list of condition embeddings to summarize the reasoning result. 
Reasoning Module  The reasoning module combines the local relationships between conditions, encoded in their embeddings s1, . . . , sn, and performs a logical reasoning process to decide the reasoning result of the condition group Gi for the generated answer âi and to identify the unsatisfied conditions C̃i. The input to the reasoning module is the list of vectors s1, . . . , sn for conditions c1, . . . , cn that are output by the entailment module (Eq. 1). We use another Transformer model as our reasoner, because the self-attention mechanism allows the conditions {s1, . . . , sn} to attend to each other, so the reasoning result of a condition group can be summarized. This is crucial because, for example, if one of the conditions in a disjunction group is satisfied, the condition group is automatically satisfied regardless of the status of the other conditions in the same group. We prepend a trainable vector s0 to the list of condition embeddings to summarize the reasoning result.

The output vectors ŝ0, ŝ1, . . . , ŝn are used to predict the status of the condition group and the unsatisfied conditions for the generated answer. The first vector ŝ0 is used to predict the reasoning result of the condition group. If the condition group is partially satisfied, we use the rest of the vectors, ŝ1, . . . , ŝn, to identify unsatisfied conditions. We compute losses on both the reasoning and the unsatisfied-condition predictions. Let Ir and Ic be the one-hot labels for the two tasks.

ŝ0, ŝ1, . . . , ŝn = Reason(s0, s1, . . . , sn)
lreason = softmax_cross_entropy(Wl^T ŝ0, Ir)
lcond = softmax_cross_entropy(Wc^T ŝi, Ic)

As discussed above (§3.1), the reasoning results of condition groups have three possible outcomes: “satisfied”, “partially satisfied”, and “contradicted”. We merge the first two into one label “satisfied”, and differentiate them by whether unsatisfied conditions exist, i.e. r ∈ {satisfied, contradicted} and its one-hot label Ir ∈ {0, 1}^2 (some tasks have an additional class “irrelevant” because some questions in the dataset are not relevant to the provided passages, i.e. Ir ∈ {0, 1}^3). Labels for conditions are “entailed”, “contradicted”, “not mentioned”, “implied”, and “unsatisfied”, i.e. Ic ∈ {0, 1}^5. The first three labels are as they are named. The label “implied” means a condition is implied by other conditions in the condition group. For example, if one of the conditions in a disjunction group has been satisfied, the rest of the conditions are “implied”. The class “unsatisfied” means the condition is an unsatisfied condition that must be returned together with the predicted answer. Not all labels apply to every dataset; e.g. ConditionalQA (Sun et al., 2021a) only annotates two labels (“unsatisfied” vs. others), and we adjust the loss function accordingly.

Loss Function  We jointly train the entailment module and the reasoning module. The final loss function is the sum of the answer loss lreason and the condition entailment loss lcond. If the answers contain text spans, we jointly train the decoding module with ldecode as well.

l = lreason + lcond
l = lreason + lcond + ldecode
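The following is a minimal sketch of the reasoning module and its losses as described above, assuming a small randomly initialized Transformer over the condition summary vectors; class names, default sizes, and the helper for combining the losses are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReasoningModule(nn.Module):
    """Sketch: a Transformer over [s_0, s_1, ..., s_n] where s_0 is a trainable
    vector. The output at position 0 predicts the condition-group status; the
    remaining positions predict per-condition labels."""

    def __init__(self, d_model=512, n_layers=3, n_heads=8,
                 n_group_labels=2, n_cond_labels=5):
        super().__init__()
        self.s0 = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.reasoner = nn.TransformerEncoder(layer, n_layers)
        self.group_head = nn.Linear(d_model, n_group_labels)  # plays the role of W_l
        self.cond_head = nn.Linear(d_model, n_cond_labels)    # plays the role of W_c

    def forward(self, cond_vecs):
        # cond_vecs: (batch, n, d) summary vectors from the entailment module
        s0 = self.s0.expand(cond_vecs.size(0), -1, -1)
        h = self.reasoner(torch.cat([s0, cond_vecs], dim=1))  # (batch, n+1, d)
        return self.group_head(h[:, 0]), self.cond_head(h[:, 1:])

def reasoning_loss(group_logits, cond_logits, group_label, cond_labels):
    """l_reason + l_cond, both plain cross-entropy; l_decode would be added
    when the decoding module is used."""
    l_reason = F.cross_entropy(group_logits, group_label)
    l_cond = F.cross_entropy(cond_logits.flatten(0, 1), cond_labels.flatten())
    return l_reason + l_cond
```

With three or four encoder layers (the hyper-parameter studied in §4.1), this module is small relative to the pretrained entailment encoder, which is consistent with training it from random initialization.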
3.3 FINETUNE PRETRAINED CHECKPOINTS
The entailment module and decoding module (if adopted) load pretrained LM checkpoints, e.g. T5 (Raffel et al., 2019) and BART (Lewis et al., 2019). The pretrained parameters are loaded for the entailment module and then finetuned for downstream tasks. The reasoning module is randomly initialized and jointly trained with the other modules. The number of Transformer layers in the reasoning module is a hyper-parameter; we choose l = 3 or l = 4 layers. Please see §4.1 for an ablation study on the number of Transformer layers for the reasoning task. The decoding module is also finetuned. If a decoding module is needed, we initialize the entailment and decoding modules from the same pretrained checkpoint.

4 EXPERIMENTS
We experiment with T-Reasoner on a synthetic dataset, CondNLI, and two benchmark QA datasets, ConditionalQA (Sun et al., 2021a) and ShARC (Saeidi et al., 2018), for the scenario-based QA task.

4.1 CONDNLI
Dataset  The synthetic CondNLI dataset is derived from an existing Natural Language Inference (NLI) dataset, MultiNLI (Williams et al., 2018). An original NLI example contains a premise and a hypothesis, and a label indicating whether the premise is entailed or contradicted by the hypothesis. We treat premises in NLI examples as conditions and hypotheses as facts provided in user scenarios.

... not “Has two children”, “Has not applied before.”] then “Waive the application fees”.
Question: Is “Eligible for $60 a week” correct?
Scenario: [“65 years old”, “Rejected last year”]
Answer: Yes, [“Employed for two years”]

Table 1: An example in CondNLI. The answer is “Yes” with unsatisfied conditions [“Employed for two years”].

An example is shown in Table 1. The example contains four conditions, among which “Aged 59 1/2 or older” and “Employed for two years” belong to a condition group under the logical reasoning type “all”, indicating that both conditions have to be satisfied in order to “Get at least $60 a week”. The answer statement, e.g. “Get at least $60 a week”, also comes from NLI examples. We treat the premise of an NLI example as an answer statement and the corresponding hypothesis as the question, e.g. Is “Eligible for $60 a week” correct? In addition to the condition group and the answer statement that is relevant to the question, we add a few more condition groups as distractors to make the constructed dataset more challenging. Please see Appendix A for more information on dataset construction.

Baselines  Previous work (Clark et al., 2020b) showed that pretrained Transformer-based Language Models, e.g. RoBERTa (Liu et al., 2019), have the ability to reason over multiple conditions to answer a reasoning question in the deductive reasoning setting, e.g. “if A and B then C” with facts on both conditions A and B provided. However, examples in CondNLI are usually longer and won’t fit into RoBERTa’s memory (examples in CondNLI exceed RoBERTa’s limit of 512 tokens). Instead, we experiment with two other language models, T5 (Raffel et al., 2019) (with the FiD strategy (Izacard & Grave, 2020) to adapt to longer input) and ETC (Ainslie et al., 2020), on the CondNLI dataset. In ETC, we use the global tokens to predict unsatisfied conditions. For T5, to simplify the generation task, we assign an id to each condition and let FiD generate unsatisfied condition ids. We also compare T-Reasoner with T5 on inputs that contain more conditions to test their generalization ability.
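To make the T5 (FiD) baseline concrete, here is a small sketch of how the generation target for unsatisfied condition ids could be serialized. The exact format is not given in the paper, so the delimiter and ordering below are assumptions.

```python
def build_fid_target(answer_label, unsatisfied_ids):
    """Hypothetical serialization of the FiD baseline target: the answer label
    followed by the integer ids of unsatisfied conditions."""
    if not unsatisfied_ids:
        return answer_label
    ids = " ".join(str(i) for i in sorted(unsatisfied_ids))
    return f"{answer_label} | unsatisfied: {ids}"

# e.g. build_fid_target("yes", [2]) -> "yes | unsatisfied: 2"
```

At prediction time the generated string would be parsed back into an answer label and a set of condition ids for evaluation.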
Results  The experiment results are shown in Table 2. We measure both the accuracy of label prediction and the F1 of unsatisfied conditions. The results show that T-Reasoner performs significantly better than the pretrained LMs, T5 and ETC, in both predicting correct answers (Ans) and unsatisfied conditions (Conds) on CondNLI.

We additionally test T-Reasoner’s ability to generalize to more conditions. We train T-Reasoner on templates with 6 conditions or fewer and test it on examples with more than 6 conditions. Figure 3 (Left) shows the change of performance in both the label classification and unsatisfied condition prediction tasks as the number of conditions increases. We observe some decrease in performance in both tasks, but performance remains reasonable even with 20 conditions. Furthermore, we experiment with different numbers of layers in the reasoning module (Figure 3, Right). The Transformer-based reasoning module needs at least 3 layers for the reasoning task, especially for predicting unsatisfied conditions.

4.2 SHARC
Dataset  In the second experiment, we run T-Reasoner on a real scenario-based QA dataset, ShARC (Saeidi et al., 2018), that has complex passages and many conditions. An example in ShARC contains a passage, a user question, and a user scenario which is expressed as a conversation history between a user and a machine. A model is expected to find an answer to the user’s question, or raise a clarification question for the unsatisfied conditions. Answers in this dataset are restricted to one of the following labels: “yes”, “no”, “inquire”, and “irrelevant”. The first three labels are equivalent to “satisfied”, “contradicted”, and “partially satisfied”. “irrelevant” is a new label that should be predicted if the conversation history and the question are irrelevant to the provided passage. This task of predicting answers is called “Decision Making” in the original ShARC paper (Saeidi et al., 2018) and is evaluated with micro and macro accuracy. In addition to the “Decision Making” task, they consider another task, “Question Generation”, which is equivalent to predicting unsatisfied conditions in T-Reasoner (unsatisfied conditions are paraphrased into questions, e.g. “Aged 59 1/2 or older” is paraphrased to “Are you aged 59 1/2 or older?”) and is evaluated with BLEU 1 and BLEU 4 scores. Compared to CondNLI, where conditions and their relationships are clearly stated in the context, conditions are embedded in the context in ShARC examples, e.g. Figure 1. Please see Appendix C for more information on data preparation.

Baselines and Results  We compare T-Reasoner to several strong baseline models, including the previous state-of-the-art models, DISCERN (Gao et al., 2020b) and DGM (Ouyang et al., 2020). Different from the baseline models, which use pipeline systems to separately predict answer labels and unsatisfied conditions, T-Reasoner performs the two tasks jointly. The results are shown in Table 4. T-Reasoner outperforms the previous baselines by 3 points on the “Decision Making” task and more than 8 points on the “Question Generation” task. T-Reasoner also significantly outperforms the other baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020).

Model        Decision (micro / macro)    Question (BLEU1 / BLEU4)
CM           61.9 / 68.9                 54.4 / 34.4
BERTQA       63.6 / 70.8                 46.2 / 36.3
UcraNet      65.1 / 71.2                 60.5 / 46.1
Bison        66.9 / 71.6                 58.8 / 44.3
E3           67.7 / 73.3                 54.1 / 38.7
EMT          69.1 / 74.6                 63.9 / 49.5
DISCERN      73.2 / 78.3                 64.0 / 49.1
DGM          77.4 / 81.2                 63.3 / 48.4
T-Reasoner   80.4 / 83.9                 71.5 / 58.0

Table 4: Experimental results on the ShARC dataset. Numbers for the baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020) are borrowed from Ouyang et al. (2020).

Ablation: Condition Accuracy  One problem with the ShARC Question Generation task is that only one of the unsatisfied conditions is annotated, even though multiple unsatisfied conditions may exist. To further evaluate T-Reasoner’s performance in predicting all unsatisfied conditions, we manually annotate the logical operations in 20 contexts that have more than one condition (857 examples in total; each context in ShARC has 32.9 examples on average), and use the annotated logical operations to find all unsatisfied conditions. We report the F1 of the predicted unsatisfied conditions (see Table 5). Compared to the baselines (Gao et al., 2020b; Ouyang et al., 2020), T-Reasoner improves the F1 by 11.4 points.

Model        Decision (micro / macro)    Question (BLEU1 / BLEU4)    Condition (F1)
T5           63.7 / 68.2                 57.3 / 48.2                 44.0
DISCERN      74.9 / 79.8                 65.7 / 52.4                 55.3
DGM          78.6 / 82.2                 71.8 / 60.2                 57.8
T-Reasoner   79.8 / 83.5                 71.7 / 60.4                 69.2
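The correspondence between ShARC decision labels and T-Reasoner’s reasoning outcomes, described in the Dataset paragraph above, can be summarized in a few lines. The exact wiring is not spelled out in the paper, so the function below is one plausible mapping rather than the released implementation.

```python
def sharc_decision(group_status, has_unsatisfied):
    """Hypothetical mapping from T-Reasoner's reasoning result to a ShARC
    decision label ("irrelevant" handled as an extra group-level outcome)."""
    if group_status == "irrelevant":
        return "irrelevant"
    if group_status == "contradicted":
        return "no"
    # group satisfied: answer "yes" unless conditions are still missing,
    # in which case the system should "inquire" about them
    return "inquire" if has_unsatisfied else "yes"
```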
Ablation: Label Accuracy vs. Conditions  We additionally measure the accuracy versus the number of conditions in the context. Results in Table 6 show that the improvement in T-Reasoner’s performance over the previous state-of-the-art model (DGM) mostly comes from questions that have more than one condition.

4.3 CONDITIONALQA
Dataset  In the third experiment, we run T-Reasoner on ConditionalQA (Sun et al., 2021a), which contains longer contexts (full documents), more conditions, and more complex relationships between conditions. Furthermore, the ConditionalQA dataset contains a mixture of “Yes”/“No” questions and questions with free-form answers. Please see Appendix B for details on data preparation.

Evaluation  As introduced in ConditionalQA (Sun et al., 2021a), predictions are evaluated with two sets of metrics: EM/F1 and conditional EM/F1. EM/F1 are the traditional metrics that measure the accuracy of the predicted answer spans. Conditional EM/F1 is a novel metric introduced by Sun et al. (2021a) that jointly measures the accuracy of answer spans and unsatisfied conditions. Please refer to the ConditionalQA paper (Sun et al., 2021a) for more information. Briefly, the conditional EM/F1 is the product of the original answer EM/F1 and the F1 of the predicted unsatisfied conditions. The conditional EM/F1 is 1.0 if and only if the predicted answer span is correct and all unsatisfied conditions are found. If there is no unsatisfied condition, the model should predict an empty set.

Baselines and Results  We compare T-Reasoner with several strong baselines, including ETC (in a pipeline) (Ainslie et al., 2020), DocHopper (Sun et al., 2021b), and T5 (with FiD) (Izacard & Grave, 2020). The ETC pipeline first extracts possible answers from the context and then predicts unsatisfied conditions independently. DocHopper is a multi-hop retrieval system that iteratively retrieves evidence containing answers and unsatisfied conditions. T5 (w/ FiD) is an encoder-decoder model; we train it to generate answers followed by a list of unsatisfied condition ids. The experimental results are presented in Table 7. T-Reasoner significantly outperforms the baselines in predicting answers and in jointly predicting answers and unsatisfied conditions – a relative improvement of 148% (Conditional) and 27.8% (Overall) in conditional F1 (F1 w/ conds).

Ablation: Condition Accuracy  Since there is no metric that only measures the quality of the predicted conditions, we additionally report the F1 of the predicted unsatisfied conditions (Table 2). The best baseline model, T5 (w/ FiD), rarely predicts any conditions. Even when we train T5 (w/ FiD) only on the subset of questions that have conditional answers, to force it to predict unsatisfied conditions, its performance improves only slightly and still trails T-Reasoner by 16.5 points in condition F1.
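The conditional metric described in the Evaluation paragraph above is easy to state in code. The sketch below follows the description given here (answer score multiplied by the F1 of the predicted unsatisfied conditions, with two empty sets counting as a perfect match); further averaging details are in the ConditionalQA paper, and the helper names are ours.

```python
def condition_f1(pred, gold):
    """Set-level F1 between predicted and gold unsatisfied conditions."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def conditional_score(answer_score, pred_conds, gold_conds):
    """Conditional EM/F1 sketch: the answer EM or F1 scaled by the F1 of the
    predicted unsatisfied conditions."""
    return answer_score * condition_f1(pred_conds, gold_conds)
```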
5 CONCLUSION
We study the problem of scenario-based QA, in which questions are accompanied by incomplete scenarios and models are asked to find answers that are consistent with the provided user scenario. Models are further asked to identify unsatisfied conditions that are necessary for the predicted answers. We propose a system, T-Reasoner, that contains an entailment module to check whether a condition has been satisfied and a jointly trained reasoning module to verify the status of condition groups and predict unsatisfied conditions. T-Reasoner shows excellent reasoning ability and can easily generalize to more conditions on the synthetic CondNLI dataset. Furthermore, T-Reasoner achieves state-of-the-art performance on two challenging scenario-based QA datasets, ShARC (Saeidi et al., 2018) and ConditionalQA (Sun et al., 2021a).

6 ETHICAL STATEMENT
Experiments in this paper are performed on publicly available datasets for academic research purposes. No real user data is used in the experiments. Even though the proposed model could be applied to many real-world problems to help users answer their questions, the accuracy of the proposed work is still limited and predictions may be misleading. Please carefully evaluate the performance before applying it to real problems.

7 REPRODUCIBILITY STATEMENT
Datasets and code will be released upon the acceptance of this paper, including all scripts for constructing the proposed synthetic dataset and the preprocessing scripts for the two benchmark QA datasets. Models are trained on publicly available data. All results are reproducible.

A CONDNLI DATASET CONSTRUCTION
We first construct templates for the CondNLI examples and then instantiate the variables in the templates with real NLI examples.

Construct Templates  We use capital letters A, B, . . . to represent conditions and lower-cased letters a, b, . . . to represent the corresponding facts. We use a few more letters X, Y, . . . to represent conclusion statements, and lower-cased letters x, y, . . . to represent questions. Conditions are grouped together under a logical operator that specifies the relationship between the conditions. For example, the logical operator “all” specifies that all conditions in the group must be satisfied in order to make the condition group satisfied. Here, we consider four types of logical operations to construct this synthetic dataset:
• “all”: all conditions under this logical type should be satisfied in order to make the answer true.
• “any”: only requires one of the conditions under the logical type “any” to be satisfied. It doesn’t matter whether the other conditions have been satisfied, contradicted, or not mentioned in the question.
• “required”: a special case of “all” / “any” when there is only one condition. Conditions with the logical type “required” must be satisfied.
• “optional”: conditions have the type “optional” if they are not relevant to the question.
We pair a condition group with a conclusion statement and get a logical statement “If all (A, B), then X”. To challenge models’ ability to identify relevant conditions from context, we add a few distracting statements that lead to different conclusions, e.g. “If all (not C, D), then Y”. An example of a context template is shown in Table 9. Facts are constructed by randomly sampling a subset from all possible facts {a, b, . . . }. A question is sampled from the possible questions {x, y, . . . }. We then compute the answer (and unsatisfied conditions, if any) from the context, facts, and question.

Generate Examples  For a template with variables A, B, X, Y, . . . , a, b, x, y, . . . , we instantiate the variables with NLI examples to get the real data. We use the premises of the original NLI examples for conditions and conclusions, i.e. the capital-letter variables, and the hypotheses for questions and facts, i.e. the lower-case variables. Note that sampling requires matching the entailment state of the conditions, e.g. “not d” requires sampling from NLI examples that are labeled as “contradict”.
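As a concrete illustration of how the four logical operators above determine the gold answer and the unsatisfied conditions, the following sketch resolves a single condition group from the sampled facts. It is our own illustration of the labeling logic, not the released generation script.

```python
def evaluate_group(op, condition_status):
    """condition_status maps each condition to "entailed", "contradicted" or
    "not mentioned" (derived from the sampled facts). Returns the group
    outcome and the conditions that must still be satisfied."""
    statuses = list(condition_status.values())
    unsatisfied = [c for c, s in condition_status.items() if s == "not mentioned"]
    if op in ("all", "required"):
        if any(s == "contradicted" for s in statuses):
            return "contradicted", []
        return ("satisfied" if not unsatisfied else "partially satisfied"), unsatisfied
    if op == "any":
        if any(s == "entailed" for s in statuses):
            return "satisfied", []        # remaining conditions are "implied"
        if all(s == "contradicted" for s in statuses):
            return "contradicted", []
        return "partially satisfied", unsatisfied
    return "satisfied", []                # "optional" conditions never block an answer

# e.g. evaluate_group("all", {"Aged 59 1/2 or older": "entailed",
#                             "Employed for two years": "not mentioned"})
# -> ("partially satisfied", ["Employed for two years"]), matching Table 1.
```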
We restrict the number of conditions in the context to 6 and randomly generate 65 distinct templates (restricting the number of conditions only serves to reduce training complexity; the experiment in Figure 3, Left, shows the model’s capability of generalizing to more conditions). During training, we randomly pick a template and instantiate it with NLI examples to generate real training examples. This random generation process enables creating an (almost) unlimited amount of training data. We randomly generate another 5000 examples for development and testing.

Quality Assessment  Training and validation data in CondNLI are generated from NLI examples in the training and validation splits of the MNLI dataset, respectively. This ensures that NLI examples used in validation are not exposed at training time. We control the generation process to ensure that the automatically generated data are balanced in terms of answer labels, logical types of interacting conditions, and the number of conditions included in scenarios. Results are shown in Table 10. We additionally require that scenarios have at least 4 conditions to avoid overly simple examples. We also measure the Jaccard distance between the premises and hypotheses of the NLI examples used in constructing the CondNLI dataset. The token-level Jaccard distance is 27.2. Even though token-level overlap exists, a model still needs to understand the semantic relationship between premises and hypotheses to predict their entailment status.

B CONDITIONALQA EXPERIMENT DETAILS
An example in the ConditionalQA dataset provides a parsed web page as context. It also provides a question and a user scenario that is relevant to the context. We prepend the user scenario to the question as input to the model. The context in ConditionalQA is provided as a list of HTML elements. We treat each element at a leaf of the DOM tree as a condition ci, and prepend all its parents (from its direct parent to the root) to get an expanded condition si. Since we need the decoding module to generate answer spans, we initialize the model with T5, i.e. we use the parameters of the encoder to initialize the entailment module and the decoder to initialize the decoding module. The reasoning module is randomly initialized.

C SHARC EXPERIMENT DETAILS
Different from ConditionalQA, where each sentence in the context is treated as a condition, conditions in the ShARC dataset are shorter and are sometimes short phrases (sub-sentences). For example, the context “If you are a female Vietnam Veteran with a child who has a birth defect, you are eligible for ...” contains two conditions, “If you are a female Vietnam Veteran” and “with a child who has a birth defect” (it is arguable that this could be treated as one condition, but it is annotated as two conditions with the logical operator “all” in the ShARC dataset). In order to handle sub-sentence conditions, we follow the strategy proposed in two of the baseline models, DISCERN (Gao et al., 2020b) and DGM (Ouyang et al., 2020), which split a sentence into EDUs (Elementary Discourse Units) using a pretrained discourse segmentation model (Li et al., 2018). The discourse segmentation model returns a list of sub-sentences, each considered as a condition. While we could treat each condition independently as we did previously for the other datasets, the segmented EDUs are different in that they are not full sentences and may not retain their semantic meaning.
Thus, we consider using the full context (usually less than 512 tokens) as the contextual information for condition ci, i.e. the expanded condition si includes the full context, but the condition ci is highlighted using the special tokens <CDT> and <\CDT>. We do not need the decoding module for the ShARC dataset, so we can safely discard it. We initialize the entailment module with ELECTRA (Clark et al., 2020a). The previous state-of-the-art baselines (Ouyang et al., 2020; Gao et al., 2020b) use ELECTRA to initialize their models; we use the same pretrained checkpoint to make a fair comparison. For the question generation task, we use the same input s as in decision making, except that we replace the prefix “condition:” with “unsatisfied condition:” for “unsatisfied” conditions. We fine-tune a T5 model for question generation.

D DATASET STATISTICS
Dataset statistics are shown in Table 11.
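The two condition-expansion schemes used in Appendices B and C can be sketched as below. The function names are ours; for ConditionalQA the exact ordering of the prepended ancestors and whether the <CDT> markers are kept there are assumptions.

```python
def expand_conditionalqa(leaf_text, ancestor_headings):
    """Appendix B sketch: a leaf HTML element is a condition; the expanded
    condition prepends its ancestors (here written root-first)."""
    return " ".join(ancestor_headings + ["<CDT>", leaf_text, "<\\CDT>"])

def expand_sharc(context, start, end):
    """Appendix C / ShARC sketch: keep the full (short) context and only
    highlight the EDU condition with the <CDT> markers."""
    return (context[:start] + " <CDT> " + context[start:end]
            + " <\\CDT> " + context[end:])
```

In both cases the expanded string is what the entailment module of §3.2 actually encodes for each condition.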
1. What is the main contribution of the paper in question-answering? 2. What are the strengths of the proposed model, particularly in its components? 3. What are the weaknesses of the paper regarding its experiments and error analysis? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a model to tackle scenario-based question answering: predicting the answer to a question along with the unsatisfied conditions for the given user scenario. The proposed model comprises 3 components: an entailment module (to identify the condition), a reasoning module (that decides whether or not the conditions have been satisfied), and a decoding module (that outputs the answer spans for free-form questions). The proposed model outperforms the baselines on several datasets.

Strengths And Weaknesses
Strengths: The proposed approach is quite interesting. The model outperforms the baselines on multiple datasets.
Weaknesses: It would be interesting to add a baseline where the entailment module is trained and evaluated separately (possibly using silver training labels generated with an existing entailment model). It would be good to conduct a qualitative analysis (with examples) and a thorough error analysis showing how each component performs, how errors propagating from one module affect other modules, etc. The write-up needs improvements since there are several typos (e.g., "special special", "regardless the status" --> "regardless of the status", "These information" --> "this information").

Clarity, Quality, Novelty And Reproducibility
The code will be available, so the results should be reproducible if the authors specify all hyper-parameters used. The proposed model is quite novel and improves over the baselines on multiple datasets.
Please see §4.2 and §4.3 for more information. Briefly, passages are usually chunked into sentences, short passages with 2-3 sentences, or sub-sentences (text phrases). Entailment Module We apply an entailment module to check whether each condition ci ∈ C have been entailed by the user scenario. Each condition ci is checked independently, as opposed to concatenating all conditions into a long input and checking them all at once. This strategy significantly reduce the computation cost compared to checking all conditions at once, especially if context is long, e.g. legal documents which are tens or hundreds of pages long (see examples in 4.3). Specifically, 2An oracle model should be able to predict all answers from Ã. We consider a slightly simplified setting in this paper in which a model is only required to predict one of the answers. In our experiments, the ShARC (Saeidi et al., 2018) dataset only contains questions that have a single answer, i.e. |Ã| = 1. The ConditionalQA (Sun et al., 2021a) dataset contains questions that have multiple answers, |Ã| > 1, so the performance will be sacrificed. We leave the task of predicting all logically consistent answers as future work. 3Evaluation metrics are different in different datasets Sun et al. (2021a); Saeidi et al. (2018). Please refer to §4.3 and 4.2 for more details. the computation complexity of our approach is O(|C|) where |C| is the number of total conditions, compared to a complexity of O(|C|2) otherwise. This independent checking strategy, however, separates each condition from its context and thus causes a lost of contextual information for conditions ci and eventually negatively impacts the model’s performance. Thus, we extend a condition ci by adding tokens from its surroundings. For example, the condition “the partner of the deceased when they died” is expanded to “... up to $1200 may be awarded if both: <CDT> the partner of the deceased when they died <\CDT> you didn’t claim ...”, where <CDT> and <\CDT> are two special tokens that mark the beginning and end of the condition ci. Apart from making a condition ci more fluent and coherent, the added contextual tokens also make it easier to understand the relationship between the current condition ci and other conditions in its neighbours. We may additionally add page titles, section titles, prompts of list items, or table headings, etc., if applicable to the expanded conditions. Please see §4.3 and 4.2 for more details. We denote conditions with extended contextual information as si for condition ci. We learn a Transformer model for the entailment module which takes an expanded condition si and the question q and scenario e as input, and returns a list of vectors si,hi,1, . . . ,hi,m. The first vector si is a summarization vector which includes several aspects of information: (1) whether the underlying condition ci has been satisfied, contradicted, or not mentioned by the user scenario, (2) whether the condition ci is relevant to the question, and (3) if relevant, what is its relationship with other conditions in its neighbours. These information will be used for reasoning in the future layers. Embeddings hi,1, . . . ,hi,m are token embeddings that will used for decoding if needed. Please see the description of the reasoning module for more information. si,hi,1, . . . 
,hi,m = Entail(si, e, q) (1) One may consider supervising this entailment module by adding classification layers on si to explicitly predict the entailment status of condition ci and its relationship with other conditions. However, obtaining supervision labels for these auxiliary tasks can be challenging as they are often not provided in the example. Fortunately, we show that our proposed model, T-Reasoner, can be trained end-to-end, without such intermediate supervision. Decoding Module The decoding module generates an eligible answer âi which is potentially logically consistent to the question. The generated answer âi will not be returned until the status of its condition group Ĝi is verified by the reasoning module (discussed below). The decoding module is analogous to FiD (Izacard & Grave, 2020), i.e. token embeddings from different conditions (which are encoded separately) are concatenated for decoding. Different from Izacard & Grave (2020) which was applied to independently retrieved passages for open-domain QA, the decoding module in T-Reasoner is used on coherent content, i.e. conditions from the same passage. The contextual information in the expanded condition si helps connect conditions that are separately encoded. The decoding module takes token embeddings for all conditions h1,1, . . . ,hn,m computed from Eq. 1 to generate answer spans. The generation task is trained with teacher forcing. We do not write out the teacher forcing decoding loss ldecode here. Please refer to the T5 paper (Raffel et al., 2019) for more information. If questions have multiple logically consistent answers, i.e. à > 1, we randomly select an answer ai ∈ à as the label to train the decoding module. âi = Decode(h (1) 1,1, . . . ,h (n) kn,m ) (2) We consider two different types of answers: “Yes”/“No” or free-form answers. In the first case, we simply let the model generate a special a special token [YESNO] and consider the reasoning result from the reasoning module (see next) as the answer, i.e. the answer is “Yes” if the condition group is satisfied (or partially satisfied) or “No” if contradicted. Since some datasets only contain “Yes”/“No” questions, we can then safely discard the decoding module for these datasets. In the second case, i.e. answers are free-form text spans, we will return generated spans as answers only if their condition groups have been verified as satisfied or partially satisfied by the reasoning module. If the condition group is contradicted, we will mark the question as not answerable. Reasoning Module The reasoning module combines the local relationship between conditions from their embeddings s1, . . . , sn and performs a logical reasoning process to decide the reasoning result for a condition group Gi for the generated answer âi and to identify unsatisfied conditions C̃i. The input to the reasoning module is a list of vectors, s1, . . . , sn for conditions c1, . . . , cn, that are output by the entailment module (Eq. 1). We use another Transformer model as our reasoner, because Transformers have the self attention mechanism which allows conditions {s1, . . . , sn} to attend to each other, so the reasoning result of a condition group can be summarized. This is crucial because, for example, if one of the conditions in a disjunction group is satisfied, the condition group will be automatically satisfied regardless the status of other conditions in the same group. We prepend a trainable vector s0 to the list of condition embeddings to summarize the reasoning result. 
The output vectors ŝ0, ŝ1, . . . , ŝn will be used to predict the status of the condition group and the unsatisfied conditions for the generated answer. The first vector ŝ0 will be used to predict the reasoning result of the condition group. If the condition group is partially satisfied, we use the rest of vectors, ŝ1, . . . , ŝn, to identify unsatisfied conditions. We compute loss on both reasoning and unsatisfied condition predictions. Let Ir and Ic be the one-hot labels for the two tasks. ŝ0, ŝ1, . . . , ŝn = Reason(s0, s1, . . . , sn) lreason = softmax_cross_entropy(WTl ŝ0, Ir) lcond = softmax_cross_entropy(WTc ŝi, Ic) As discussed above (§3.1), the reasoning results of condition groups have three possible outcomes: “satisfied”, “partially satisfied”, and “contradicted”. We merge the first two into one label “satisfied”, and differentiate them by whether unsatisfied conditions exist, i.e. r ∈ {satisfied, contradicted} and its one-hot label Ir ∈ {0, 1}2.4 Labels for conditions are “entailed”, “contradicted”, “not mentioned”, “implied”, and “unsatisfied”, i.e. Ic ∈ {0, 1}5. The first three labels are as they are named. The label “implied” means a condition is implied by other conditions in the condition group. For example, if one of the conditions in a disjunction group has been satisfied, the rest of conditions are “implied”. The class “unsatisfied” means it is an unsatisfied condition which must be returned together with the predicted answer. The labels may not apply to all datasets, e.g. ConditionalQA (Sun et al., 2021a) only annotates two labels “unsatisfied” vs. others, we will make changes to the loss function accordingly. Loss Function We jointly train the entailment module and reasoning module. The final loss function is the sum of the answer loss lreason and the condition entailment loss lcond. If the answers contain text spans, we jointly train the decoding module ldecode as well. l = lreason + lcond l = lreason + lcond + ldecode 3.3 FINETUNE PRETRAINED CHECKPOINTS The entailment module and decoding module (if adopted) load pretrained LM checkpoints, e.g. T5 (Raffel et al., 2019) and BART (Lewis et al., 2019). The pretrained parameters are loaded for the entailment module and then finetuned for downstream tasks. The reasoning module is randomly initialized and jointly trained with other modules. The number of Transformer layers in the reasoning module is a hyper-parameter. We choose the number of layers l = 3 or l = 4. Please see §4.1 for ablation study on the number of Transformer layers for the reasoning task. The decoding module is also finetuned. If a decoding module is needed, we will initialize the entailment and decoding module from the same pretrained checkpoint. 4 EXPERIMENTS We experiment with T-Reasoner on a synthetic dataset, CondNLI, and two benchmark QA datasets, ConditionalQA (Sun et al., 2021a) and ShARC (Saeidi et al., 2018), for scenario-based QA task. 4.1 CONDNLI Dataset The synthetic CondNLI dataset is derived from an existing Natural Language Inference (NLI) dataset, MultiNLI (Williams et al., 2018). An original NLI example contains a premise and a hypothesis, and a label indicating whether the premise is entailed or contradicted by the hypothesis. We treat premises in NLI examples as conditions and hypotheses as facts provided in user scenarios. 4Some tasks have an additional class “irrelevant” because some questions in the dataset are not relevant to the provided passages, i.e. Ir ∈ {0, 1}3. 
not “Has two children”, “Has not applied before.” ] then “Waive the application fees”. Question: Is “Eligible for $60 a week” correct? Scenario: [“65 years old”, “Rejected last year”] Answer: Yes, [“Employed for two years”] Table 1: An example in CondNLI. The answer is “Yes” with unsatisfied conditions [“Employed for two years”]. An example is shown in Table 1. The example contains four conditions, among which “Aged 59 1/2 or older” and “Employed for two years” belong to a condition group under a logical reasoning type “all”, indicating that both conditions have to be satisfied in order to “Get at least $60 a week”. The answer statement, e.g. “Get at least $60 a week”, also comes from NLI examples. We treat the premise of an NLI example as an answer statement and the corresponding hypothesis as the question, e.g. is “Eligible for $60 a week” correct? In addition to the condition group and the answer statement that is relevant to the question, we add a few more condition groups as distractors to make the constructed dataset more challenging. Please see Appendix A for more information in dataset construction. Baselines Previous work (Clark et al., 2020b) showed that pretrained Transformer-based Language Models, e.g. RoBERTa (Liu et al., 2019), have the ability to reason over multiple conditions to answer a reasoning question in the deductive reasoning setting, e.g. “if A and B then C” with facts on both conditions A and B provided. However, examples in CondNLI are usually longer and won’t fit into RoBERTa’s memory. Equivalently, we experiment with two other language models, T5 (Raffel et al., 2019) (with the FiD strategy (Izacard & Grave, 2020) to adapt to longer input) and ETC (Ainslie et al., 2020), on the CondNLI dataset.5 In ETC, we use the global tokens to predict unsatisfied conditions. In T5, To simplify the generation task, we assign an id to each condition and let FiD generate unsatisfied condition ids. We also compare T-Reasoner with T5 on inputs that contains more conditions to test their generalization ability. Results The experiment results are shown in Table 2. We measure both the accuracy of label prediction and the F1 of unsatisfied conditions. The results show that T-Reasoner performs significantly better than pretrained LMs, T5 and ETC, in both predicting correct answers (Ans) and unsatisfied conditions (Conds) on CondNLI. We additionally test T-Reasoner’s ability in generalizing to more conditions. We train TReasoner on templates with 6 conditions or fewer and test it on the examples with more than 6 conditions. Figure 3 (Left) shows the change of performance in both label classification and unsatisfied condition prediction tasks as the number of conditions increase. We observe some decrease in performance in both tasks, but it is still reasonable with 20 conditions. Furthermore, we experiment with different numbers of layers in the reasoning module (Right). The Transformer-based reasoning module needs at least 3 layers for the reasoning task, especially for predicting unsatisfied conditions. 4.2 SHARC Dataset In the second experiment, we run T-Reasoner on a real scenario-based QA dataset, ShARC (Saeidi et al., 2018), that has complex passages and many conditions. An example in ShARC contains 5Examples in CondNLI exceeds the limit of 512 tokens in RoBERTa. 
Decision Question (micro / macro) (BLEU1 / 4) CM 61.9 / 68.9 54.4 / 34.4 BERTQA 63.6 / 70.8 46.2 / 36.3 UcraNet 65.1 / 71.2 60.5 / 46.1 Bison 66.9 / 71.6 58.8 / 44.3 E3 67.7 / 73.3 54.1 / 38.7 EMT 69.1 / 74.6 63.9 / 49.5 DISCERN 73.2 / 78.3 64.0 / 49.1 DGM 77.4 / 81.2 63.3 / 48.4 T-Reasoner 80.4 / 83.9 71.5 / 58.0 Table 4: Experimental results on the ShARC dataset. Numbers for the baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020) are borrowed from Ouyang et al. (2020). Decision Question Condition (micro / macro) (BLEU1 / 4) (F1) T5 63.7 / 68.2 57.3 / 48.2 44.0 DISCERN 74.9 / 79.8 65.7 / 52.4 55.3 DGM 78.6 / 82.2 71.8 / 60.2 57.8 T-Reasoner 79.8 / 83.5 71.7 / 60.4 69.2 a passage, a user question, and a user scenario which is expressed in a conversation history between a user and a machine. A model is expected to find an answer to the user’s question, or raise a clarification question for the unsatisfied conditions. Answers in this dataset are restricted to one of the following labels: “yes”, “no”, “inquire”, and “irrelevant”. The first three labels are equivalent to “satisfied”, “contradicted”, and “partially satisfied”. “irrelvant” is a new label that should be predicted if the conversation history and the question are irrelevant to the provided passage. This task of predicting answers is called “Decision Making” in their original ShARC paper (Saeidi et al., 2018) and evaluated as micro and macro accuracy. In addition to the “Decision Making” task, they consider another task “Question Generation” which is equivalent to predicting unsatisfied condition in T-Reasoner,6 evaluated with BLEU 1 and BLEU 4 scores. Compared to CondNLI, where conditions and their relationship are clearly mentioned in the context, conditions are embedded in the context in ShARC examples, e.g. Figure 1. Please see Appendix C for more information in data preparation. Baselines and Results We compare T-Reasoner to several strong baseline models, including the previous state-of-the-art models, DISCERN (Gao et al., 2020b) and DGM (Ouyang et al., 2020). Different from the baseline models, which use pipeline systems to separately predict answer labels and unsatisfied conditions, T-Reasoner performs the two tasks jointly. The results are shown in Table 4. T-Reasoner outperforms the previous baselines by 3 points on the “Decision Making” task and more than 8 points on the “Question Generation” task. T-Reasoner also significantly outperforms other baseline models (Saeidi et al., 2018; Zhong & Zettlemoyer, 2019; Verma et al., 2020; Lawrence et al., 2019; Gao et al., 2020a;b; Ouyang et al., 2020). Ablation: Condition Accuracy One problem with the ShARC Question Generation task is that only one of the unsatisfied conditions is annotated, even though multiple unsatisfied conditions exist. To further evaluate T-Reasoner’s performance in predicting all unsatisfied conditions, we manually annotate the logical operations in 20 contexts that have more than one condition (857 data total),7 and use the annotated logical operations to find all unsatisfied conditions. We report the F1 of the predicted unsatisfied conditions (see Table 5). Compared to the baselines (Gao et al., 2020b; Ouyang et al., 2020), T-Reasoner improves the F1 by 11.4 points. Ablation: Label Accuracy v.s. Conditions We additionally measure the accuracy versus the number of conditions in the context. 
Results in Table 6 show that the improvement in T-Reasoner’s performance over the previous state-of-the-art model (DGM) mostly comes from questions that have more than one condition. 4.3 CONDITIONALQA Dataset In the third experiment, we run T-Reasoner on ConditionalQA (Sun et al., 2021a), which contains longer context (documents), more conditions and more complex relationship between 6Unsatisfied conditions are then paraphrased into questions, e.g. “Aged 59 1/2 or older” is paraphrased to “Are you aged 59 1/2 or older?” 7Each context in ShARC has 32.9 data on average. conditions. Furthermore, the ConditionalQA dataset contains a mixture of “Yes”/“No” questions and questions with free-form answers. Please see Appendix B for details on data preparation. Evaluation As introduced in ConditionalQA (Sun et al., 2021a), predictions are evaluated in two sets of metrics: EM/F1 and conditional EM/F1. EM/F1 are the traditional metrics that measures the accuracy of predicted answer spans. Conditional EM/F1 is a novel metric introduced by Sun et al. (2021a), that jointly measures the accuracy of answer spans and unsatisfied conditions. Please refer to the ConditionalQA paper (Sun et al., 2021a) for more information. Briefly, the conditional EM/F1 is the product of the original answer EM/F1 and the F1 of the predicted unsatisfied conditions. The conditional EM/F1 is 1.0 if and only if the predicted answer span is correct and all unsatisfied conditions are found. If there’s no unsatisfied condition, the model should predict an empty set. Baselines and Results We compare T-Reasoner with several strong baselines, including ETC (in a pipeline) (Ainslie et al., 2020), DocHopper (Sun et al., 2021b), and T5 (with FiD) (Izacard & Grave, 2020). The ETC pipeline first extracts possible answers from the context and then predict unsatisfied conditions independently. DocHopper is a multi-hop retrieval system that iteratively retrieves evidence which contains answers and unsatisfied conditions. T5 (w/ FiD) is a encoderdecoder model. We train T5 (w/ FiD) to generate answers followed by a list of unsatisfied conditions ids. The experimental results are presented in Table 7. T-Reasoner significantly outperforms the baselines in predicting answers and jointly predicting answers and unsatisfied conditions – a relative improvement of 148% (Conditional) and 27.8% (Overall) in conditional F1 (F1 w/ conds). Ablation: Condition Accuracy Since there’s not a metric that only measures the quality of predicted conditions, we additionally report the F1 of the predicted unsatisfied conditions (Table 2). The best baseline models, T5 (w/ FiD), rarely predicts any conditions. Even though we train T5 (w/ FiD) only on the subset of questions that have conditional answers to force it predict unsatisfied conditions, its performance slightly improves but is still much lower than T-Reasoner by 16.5 points in condition F1. 5 CONCLUSION We study the problem of scenario-based QA in which questions are accompanied by incomplete scenarios and models are asked to find answers that are consistent with the provided user scenario. Models are further asked to identify unsatisfied conditions that are necessary for the predicted answers. We propose a system, T-Reasoner, that contains an entailment module to check whether a condition has been satisfied and a jointly trained reasoning module to verify the status of condition groups and predict unsatisfied conditions. 
T-Reasoner shows excellent reasoning ability and can easily generalize to more conditions on the synthetic dataset CondNLI. Furthermore, T-Reasoner achieves state-of-the-art performance on two challenging scenario-based QA datasets, ShARC (Saeidi et al., 2018) and ConditionalQA (Sun et al., 2021a). 6 ETHICAL STATEMENT Experiments in this paper are performed on publicly available datasets for academic research purposes. No real user data is used in the experiments. Even though the proposed model could be applied to many real-world problems to help users answer their questions, its accuracy is still limited and its predictions may be misleading. Please carefully evaluate the performance before applying it to real problems. 7 REPRODUCIBILITY STATEMENT Datasets and code will be released upon the acceptance of this paper, including all scripts for constructing the proposed synthetic dataset and the preprocessing scripts for the two benchmark QA datasets. Models are trained on publicly available data. All results are reproducible. A CONDNLI DATASET CONSTRUCTION We first construct templates for the CondNLI examples and then instantiate the variables in the templates with real NLI examples. Construct Templates We use capital letters A, B, . . . to represent conditions and lower-case letters a, b, . . . to represent the corresponding facts. We use another set of letters X, Y, . . . to represent conclusion statements, and lower-case letters x, y, . . . to represent questions. Conditions are grouped together under a logical operator that specifies the relationship between the conditions. For example, the logical operator “all” specifies that all conditions in the group must be satisfied in order to make the condition group satisfied. Here, we consider four types of logical operations to construct this synthetic dataset:
• “all”: all conditions under this logical type should be satisfied in order to make the answer true.
• “any”: only requires one of the conditions under the logical type “any” to be satisfied. It does not matter whether the other conditions have been satisfied, contradicted, or not mentioned in the question.
• “required”: this is a special case of “all” / “any” when there is only one condition. Conditions with the logical type “required” must be satisfied.
• “optional”: conditions have the type “optional” if they are not relevant to the question.
We pair a condition group with a conclusion statement and get a logical statement “If all (A, B), then X”. To challenge models’ ability to identify relevant conditions in the context, we add a few distracting statements that lead to different conclusions, e.g. “If all (not C, D), then Y”. An example of a context template is shown in Table 9. Facts are constructed by randomly sampling a subset of all possible facts {a, b, . . . }. A question is sampled from the possible questions {x, y, . . . }. We then compute the answer (and unsatisfied conditions, if any) from the context, facts, and question. Generate Examples For a template with variables A, B, X, Y, . . . , a, b, x, y, . . . , we instantiate the variables with NLI examples to get the real data. We use the premises of the original NLI examples for conditions and conclusions, i.e. the capital-letter variables, and the hypotheses for questions and facts, i.e. the lower-case variables. Note that sampling requires matching the entailment state of conditions, e.g. “not d” requires sampling from NLI examples that are labeled as “contradict”.
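To make the template semantics concrete, the toy sketch below (our illustration, not the released generation script) shows how the gold answer and the unsatisfied conditions can be computed from a condition group, its logical operator, and the sampled facts; the dictionary encoding of conditions and facts is an assumption made for this example.

```python
# Conditions map to the polarity they require; facts map to the polarity actually
# stated in the scenario; conditions absent from the facts are "not mentioned".

def evaluate_group(operator, conditions, facts):
    satisfied, contradicted, unmentioned = [], [], []
    for name, required in conditions.items():
        if name not in facts:
            unmentioned.append(name)
        elif facts[name] == required:
            satisfied.append(name)
        else:
            contradicted.append(name)
    if operator in ("all", "required"):
        if contradicted:
            return "contradicted", []
        return ("satisfied", []) if not unmentioned else ("partially satisfied", unmentioned)
    if operator == "any":
        if satisfied:
            return "satisfied", []
        if not unmentioned:
            return "contradicted", []
        return "partially satisfied", unmentioned
    if operator == "optional":
        return "satisfied", []   # irrelevant conditions never block the conclusion
    raise ValueError(f"unknown operator: {operator}")

# "If all (A, not C), then X" with the single fact a:
print(evaluate_group("all", {"A": True, "C": False}, {"A": True}))
# -> ('partially satisfied', ['C'])
```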
We restrict the number of conditions in the context to 6 and randomly generate 65 distinct templates.8 During training, we randomly pick a template and instantiate it with NLI examples to generate real training examples. This random generation process enables creating (almost) unlimited amount of training data. We randomly generate another 5000 examples for development and testing. Quality Assessment Training and validation data in CondNLI are generated from NLI examples in the training and validation split of the MNLI dataset, respectively. This ensures that NLI examples used in validation are not exposed at training time. We control the generation process to ensure that the automatically generated data are balanced in terms of answer labels, logical types of interacting conditions, and number of conditions included in scenarios. Results are shown in Table 10. We additionally require scenarios must have at least 4 conditions to avoid overly simple examples. We additionally measure the Jaccard distance between premises and hypotheses of the NLI examples used in constructing the CondNLI dataset. The token-level Jaccard distance is 27.2. Even though token-level overlap exists, a model still needs to understand the semantic relationship between premises and hypotheses to predict their entailment status. B CONDITIONALQA EXPERIMENT DETAILS An example in the ConditionalQA dataset provides a parsed web page as context. It also provides a question, and a user scenario that is relevant to the context. We prepend the user scenario to the question as input to the model. The context in ConditionalQA is provided as a list of HTML elements. We treat each element at the leaf of the DOM tree as a condition ci, and prepend all its parents (from its direct parent to the root) to get an expanded condition si. Since we need the decoding module to generate answer spans, we initialize the model with T5, i.e. we use parameters from the encoder to initialize the entailment module, and use decoder to initialize the decoding module. The reasoning module is randomly initialized. C SHARC EXPERIMENT DETAILS Different from ConditionalQA, where each sentence in the context is treated as a condition, conditions in the ShARC dataset are shorter and are sometimes short phrases (sub-sentence). For example, the context “If you are a female Vietnam Veteran with a child who has a birth defect, you are eligible for ...” contains two conditions, “If you are a female Vietnam Veteran” and “with a child who has a birth defect”.9 In order to handle sub-sentence conditions, we follow the strategy proposed in two 8Restricting the number of conditions is only for the purpose of reducing training complexity. The experiment in Figure 3 (left) shows the model’s capability of generalizing to more conditions. 9It is arguable that this could be generally treated as one condition, but it is treated as two conditions with the logical operator “all” in the ShARC dataset. of the baseline models, DISCERN Gao et al. (2020b) and DGM Ouyang et al. (2020), that split a sentence into EDUs (Elementary Discourse Units) using a pretrained discourse segmentation model Li et al. (2018). The discourse segmentation model returns a list of sub-sentences, each considered as a condition. While we could treat each condition independently as we did previously for other datasets, the segmented EDUs are different in that they are not full sentences and may not retain their semantic meaning. 
Thus, we consider using the full context (usually less than 512 tokens) as the contextual information for condition ci, i.e. the expanded condition si includes the full context, but the condition ci is highlighted using the special tokens <CDT> and <\CDT>. We do not need the decoding module for the ShARC dataset, so we can safely discard it. We initialize the entailment module with ELECTRA (Clark et al., 2020a). The previous state-of-the-art baselines (Ouyang et al., 2020; Gao et al., 2020b) use ELECTRA to initialize their model. We use the same pretrained checkpoint to make a fair comparison. For the question generation task, we use the same input s as in decision making, except that we replace the prefix “condition:” with “unsatisfied condition:” for “unsatisfied” conditions. We fine-tune a T5 model for question generation. D DATASET STATISTICS Dataset statistics are shown in Table 11.
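As an illustration of the input construction above, the sketch below builds the entailment-module input for one EDU condition. The <CDT> markers and the “condition:”/“unsatisfied condition:” prefixes follow the description in this appendix, while the exact field order and the “question:”/“scenario:” markers are our assumptions, not the released preprocessing code.

```python
def build_entailment_input(context, condition, question, scenario,
                           prefix="condition:"):
    # Highlight the EDU condition inside the full context with special tokens.
    highlighted = context.replace(condition, f"<CDT> {condition} <\\CDT>", 1)
    return f"{prefix} {highlighted} question: {question} scenario: {scenario}"

ctx = ("If you are a female Vietnam Veteran with a child who has a birth defect, "
       "you are eligible for ...")
print(build_entailment_input(
    ctx, "with a child who has a birth defect",
    "Am I eligible for benefits?", "I am a female Vietnam Veteran."))

# For the question-generation task, the same serialisation is reused with the prefix
# switched to "unsatisfied condition:" before fine-tuning T5, as described above.
```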
1. What is the focus of the paper regarding question-answering scenarios?
2. What are the strengths of the proposed approach, particularly in its components and empirical advances?
3. What are the weaknesses of the paper, such as the need for clearer articulation of challenges or design choice justifications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents a method for answering questions about scenarios -- questions for which there isn’t a single fixed answer; instead, the answer varies depending on additional conditions that are unstated in the text.
Method: Given a scenario, a question, and a set of conditions (extracted from an input text) that are to be considered when answering the question, the method involves the following steps: Use an "entailment" module to create entailment-related representations of the information in the scenario against each of the conditions to be considered. Use these representations to make decisions about which conditions are satisfied, contradicted, or of unknown status. The representations of the tokens in the conditions are also used to generate the answer using a (standard) decoding module.
Model: The entailment module is a pretrained NLI encoder. The decoder is a pre-trained BART generator. The “reasoner” is a set of transformer layers that consume the aggregate representations of each condition from the entailment model. All components are fine-tuned end-to-end with three losses that specifically test for the ability to produce the correct answer, to predict whether all conditions are satisfied, and to predict, for each individual condition, whether it is satisfied, implied, contradicted, not discussed, etc.
Evaluations: The paper evaluates the proposed model on three datasets, one derived from the multi-NLI dataset and two scenario QA datasets.
Key Contributions: The main contribution is in putting together a system with components that are tied to the steps involved in the process, and a demonstration of the utility of this method on real datasets.
Strengths And Weaknesses
Strengths
The paper addresses an understudied and challenging problem and provides a non-trivial empirical advance. The proposed methods, while straightforward, are well motivated. The writing is clear for the most part. See some presentation suggestions below.
Weaknesses
I wouldn’t say what follows are necessarily reasons to reject, but they are things that can be addressed to strengthen the contributions of this paper:
The description of the paper doesn’t provide a clear articulation of the key challenge or insight that is being addressed. It simply provides a description of the system and a demonstration that it provides empirical gains.
It would be useful to know if the specific design choices (e.g. having a summary vector for each condition and the additional transformer layers) are indeed necessary. I appreciate the information conveyed in Figure 3; it provides some evidence already. Would it be possible to add an experiment where the reasoner is removed completely? Instead, train a T5 model to do the two tasks directly. Also, it would be useful to add an analysis of one baseline model to Figure 3 (left side). You have a similar analysis for the ShARC dataset in Table 5.
Clarity, Quality, Novelty And Reproducibility
The paper is mostly clearly written. The main ideas and the method are understandable from the text with some effort. The work is of reasonable quality. The idea, even if somewhat straightforward, is well suited for the problem and represents a non-trivial advance for the specific problem.
ICLR
Title Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control Abstract Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lowerdimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms? In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control. 1 INTRODUCTION Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge. The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). 
There have been two main approaches to this problem: 1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). This can be later combined with RL for extra fine-tuning of the model and control. In this paper, we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect ∗Equal contribution. Correspondence to nirlevine@google.com the process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics. The second is consistency: given the ability to encode a observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help training with curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation, the importance of regularizing the model to have consistency and low-curvature. 2 PROBLEM FORMULATION We are interested in controlling the non-linear dynamical systems of the form st+1 = fS(st, ut) +w, over the horizon T . In this definition, st ∈ S ⊆ Rns and ut ∈ U ⊆ Rnu are the state and action of the system at time step t ∈ {0, . . . , T − 1}, w is the Gaussian system noise, and fS is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation xt ∈ X ⊆ Rnx of each state st (nx ns). This scenario has application in many real-world problems, such as visual-servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. 
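As a toy instance of this setting (our illustration, not one of the paper's benchmark implementations), the sketch below simulates a pendulum whose state s = (θ, θ̇) evolves under s_{t+1} = f_S(s_t, u_t) + w, while the learner only has access to a rendered image x_t of the arm; all constants and the rendering routine are assumptions made for the example.

```python
import numpy as np

def f_S(s, u, dt=0.05, g=9.8, m=1.0, l=1.0):
    """One Euler step of pendulum dynamics; s = (theta, theta_dot), u is a torque."""
    theta, theta_dot = s
    theta_dot = theta_dot + dt * (-(g / l) * np.sin(theta) + u / (m * l ** 2))
    return np.array([theta + dt * theta_dot, theta_dot])

def render(s, size=32):
    """High-dimensional observation x in R^(size*size): a line drawn from the image
    centre along the pendulum angle theta."""
    img = np.zeros((size, size))
    cx = cy = size // 2
    for r in np.linspace(0.0, size // 2 - 1, 50):
        i = int(round(cy - r * np.cos(s[0])))
        j = int(round(cx + r * np.sin(s[0])))
        img[i, j] = 1.0
    return img.ravel()

rng = np.random.default_rng(0)
s = np.array([np.pi / 4, 0.0])
for t in range(3):
    u = rng.uniform(-2.0, 2.0)                       # uniformly sampled control
    s_next = f_S(s, u) + rng.normal(0.0, 0.01, 2)    # s_{t+1} = f_S(s_t, u_t) + w
    x, x_next = render(s), render(s_next)            # only (x_t, u_t, x_{t+1}) is observed
    s = s_next
```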
We further assume that the high-dimensional observations x have been selected such that for any arbitrary control sequence U = {ut}T−1t=0 , the observation sequence {xt}Tt=0 is generated by a stationary Markov process, i.e., xt+1 ∼ P (·|xt, ut), ∀t ∈ {0, . . . , T − 1}.1 A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes expected cumulative cost: min U L(U,P, c, x0) := E [ cT (xT ) + T−1∑ t=0 ct(xt, ut) | P, x0 ] , 2 (SOC1) where ct : X ×U → R≥0 is the immediate cost function at time t, cT ∈ R≥0 is the terminal cost, and x0 is the observation at the initial state s0. Note that all immediate costs are defined in the observation space X , and are bounded by cmax > 0 and Lipschitz with constant clip > 0. For example, in visualservoing, (SOC1) can be formulated as a goal tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation xgoal, and the objective is to compute a sequence of optimal open-loop actions U that minimizes the cumulative tracking error E[ ∑ t ‖xt − xgoal‖2 | P, x0]. Since the observations x are high dimensional and the dynamics in the observation space P (·|xt, ut) is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has been recently developed that is based on learning a low-dimensional latent (embedding) space Z ⊆ Rnz (nz nx) and latent state dynamics, and performing optimal control there. This class that we refer to as learning controllable embedding (LCE) throughout the paper, include recently developed algorithms, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). The main idea behind the LCE approach is to learn a triplet, (i) an encoderE : X → P(Z); (ii) a dynamics in the latent space F : Z ×U → P(Z); and (iii) a decoder D : Z → P(X ). These in turn can be thought of as defining a (stochastic) mapping P̂ : X ×U → P(X ) of the form P̂ = D ◦F ◦E. We then wish to solve the SOC in latent space Z: min U,P̂ E [ L(U,F, c, z0) | E, x0 ] + λ2 √ R2(P̂ ), (SOC2) such that the solution of (SOC2), U∗2 , has similar performance to that of (SOC1), U ∗ 1 , i.e., L(U∗1 , P, c, x0) ≈ L(U∗2 , P, c, x0). In (SOC2), z0 is the initial latent state sampled from the encoder E(·|x0); c̄ : Z × U → R≥0 is the latent cost function defined as c̄t(zt, ut) =∫ ct(xt, ut)dD(xt|zt); R2(P̂ ) is a regularizer over the mapping P̂ ; and λ2 is the corresponding 1A method to ensure this Markovian assumption is by buffering observations (Mnih et al., 2013) for a number of time steps. 2See Appendix B.3 for the extension to the closed-loop MDP problem. tion SOC2 under dynamics F , and (c)(red) in equation SOC3 under dynamics P̂ . regularization parameter. We will define R2 and λ2 more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder E. 3 PCC MODEL: A CONTROL PERSPECTIVE As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics P , as shown at the bottom row of Figure 1(a) in (blue). However, because of the difficulties in solving (SOC1), mainly due to the high dimension of observations x, LCE proposes to learn a mapping P̂ by solving (SOC2) that consists of a loss function, whose states evolve under dynamics F (after an initial transition by encoder E), as depicted in Figure 1(b), and a regularization term. 
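For concreteness, the sketch below evaluates a (SOC2)-style objective for a candidate action sequence by encoding the initial observation and rolling the latent dynamics forward. The networks are stand-ins, the rollout is deterministic (means only), and the quadratic goal-tracking cost is borrowed, for illustration, from the cost used later in the experiments.

```python
import numpy as np

def latent_cost(z, u, z_goal, kappa=1.0):
    # quadratic goal-tracking cost: (z - z_goal)^T Q (z - z_goal) + u^T R u, with Q = kappa*I, R = I
    return kappa * np.sum((z - z_goal) ** 2) + np.sum(u ** 2)

def latent_objective(encode_mean, F_mean, x0, x_goal, U, kappa=1.0):
    """Deterministic (mean) reading of the (SOC2) objective for an action sequence U of
    shape (T, n_u); encode_mean and F_mean stand in for the encoder E and dynamics F."""
    z = encode_mean(x0)
    z_goal = encode_mean(x_goal)
    total = 0.0
    for u in U:
        total += latent_cost(z, u, z_goal, kappa)
        z = F_mean(z, u)                            # transition entirely in the latent space
    total += kappa * np.sum((z - z_goal) ** 2)      # terminal cost on z_T
    return total
```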
The role of the regularizer R2 is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a)(blue) and 1(b)(green). The goal of LCE is to learn P̂ of the particular form P̂ = D ◦ F ◦ E, described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer R2 to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning P̂ , designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space Z . In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space X under dynamics P , and the other one in the latent space Z (after an initial transition from X to Z) under dynamics F . Unlike P and F that only operate in a single space, X and Z , respectively, P̂ can govern the evolution of the system in both X and Z (see Figure 1(c)). Therefore, any recipe to learn P̂ , and as a result the latent space Z , should have at least two terms, to guarantee that the evolution paths resulted from P̂ in X and Z are consistent with those generated by P and F . We derive these two terms, that are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning P̂ in general SOC problems, in Section 3.3, we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) to solve SOC, and add the third term, curvature, to our recipe for learning P̂ . 3.1 PREDICTION OF THE NEXT OBSERVATION Figures 1(a)(blue) and 1(c)(red) show the transition in the observation space under P and P̂ , where xt is the current observation, and xt+1 and x̂t+1 are the next observations under these two dynamics, respectively. Instead of learning a P̂ with minimum mismatch with P in terms of some distribution norm, we propose to learn P̂ by solving the following SOC: min U,P̂ L(U, P̂ , c, x0) + λ3 √ R3(P̂ ), (SOC3) whose loss function is the same as the one in (SOC1), with the true dynamics replaced by P̂ . In Lemma 1 (see Appendix A.1, for proof), we show how to set the regularization term R3 in (SOC3), such that the control sequence resulted from solving (SOC3), U∗3 , has similar performance to the solution of (SOC1), U∗1 , i.e., L(U ∗ 1 , P, c, x0) ≈ L(U∗3 , P, c, x0). Lemma 1. Let U∗1 be a solution to (SOC1) and (U∗3 , P̂ ∗3 ) be a solution to (SOC3) with R3(P̂ ) = Ex,u [ DKL ( P (·|x, u)||P̂ (·|x, u) )] and λ3 = √ 2U · T 2cmax. (1) Then, we have L(U∗1 , P, c, x0) ≥ L(U∗3 , P, c, x0)− 2λ3 √ R3(P̂ ∗3 ). In Eq. 1, the expectation is over the state-action stationary distribution of the policy used to generate the training samples (uniformly random policy in this work), and U is the Lebesgue measure of U .3 3In the case when sampling policy is non-uniform and has no measure-zero set, 1/U is its minimum measure. 3.2 CONSISTENCY IN PREDICTION OF THE NEXT LATENT STATE In Section 3.1, we provided a recipe for learning P̂ (in form of D ◦ F ◦ E) by introducing an intermediate (SOC3) that evolves in the observation space X according to dynamics P̂ . In this section we first connect (SOC2) that operates in Z with (SOC3) that operates in X . 
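As a concrete reading of the prediction regularizer in Lemma 1: since the true dynamics P is only available through samples, KL(P‖P̂) is estimated in practice, up to the (constant) entropy of P, as a negative log-likelihood over training triples. A minimal Monte Carlo sketch with a diagonal-Gaussian P̂ (function names are ours):

```python
import numpy as np

def prediction_nll(triples, predict):
    """Monte Carlo estimate of -E[log P_hat(x' | x, u)] over sampled triples, where
    predict(x, u) returns the (mean, std) of a diagonal-Gaussian P_hat(. | x, u)."""
    total = 0.0
    for x, u, x_next in triples:
        mu, sigma = predict(x, u)
        total += 0.5 * np.sum(((x_next - mu) / sigma) ** 2
                              + np.log(2.0 * np.pi * sigma ** 2))
    return total / len(triples)
```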
For simplicity and without loss generality, assume the initial cost c0(x, u) is zero.4 Lemma 2 (see Appendix A.2, for proof) suggests how we shall set the regularizer in (SOC2), such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models. Lemma 2. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′2(P̂ ) = Ex,u [ DKL (( E ◦ P̂ ) (·|x, u)|| ( F ◦ E ) (·|x, u) )] and λ2 = √ 2U · T 2cmax. (2) Then, we have L(U∗3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0)− 2λ2 √ R′2(P̂ ∗ 2 ) . Similar to Lemma 1, in Eq. 2, the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, ( E ◦ P̂ ) (z′|x, u) = ∫ x′ E(z′|x′)dP̂ (x′|x, u) and ( F ◦E ) (z′|x, u) = ∫ z F (z′|z, u)dE(z|x) are the probability over the next latent state z′, given the current observation x and action u, in (SOC2) and (SOC3) (see the paths xt → zt → z̃t+1 and xt → zt → z̃t+1 → x̂t+1 → ẑt+1 in Figures 1(b)(green) and 1(c)(red)). Therefore R′2(P̂ ) can be interpreted as the measure of discrepancy between these models, which we term as consistency loss. Although Lemma 2 provides a recipe to learn P̂ by solving (SOC2) with the regularizer (2), unfortunately this regularizer cannot be computed from the data – that is of the form (xt, ut, xt+1) – because the first term in the DKL requires marginalizing over current and next latent states (zt and z̃t+1 in Figure 1(c)). To address this issue, we propose to use the (computable) regularizer R′′2 (P̂ ) = Ex,u,x′ [ DKL ( E(·|x′)|| ( F ◦ E ) (·|x, u) )] , (3) in which the expectation is over (x, u, x′) sampled from the training data. Corollary 1 (see Appendix A.3, for proof) bounds the performance loss resulted from using R′′2 (P̂ ) instead of R ′ 2(P̂ ), and shows that it could be still a reasonable choice. Corollary 1. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′′2 (P̂ ) and and λ2 defined by (3) and (2). Then, we have L(U ∗ 3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0) − 2λ2 √ 2R′′2 (P̂ ∗ 2 ) + 2R3(P̂ ∗ 2 ) . Lemma 1 suggests a regularizer R3 to connect the solutions of (SOC1) and (SOC3). Similarly, Corollary 1 shows that regularizer R′′2 in (3) establishes a connection between the solutions of (SOC3) and (SOC2). Putting these results together, we achieve our goal in Lemma 3 (see Appendix A.4, for proof) to design a regularizer for (SOC2), such that its solution performs similarly to that of (SOC1). Lemma 3. Let U∗1 be a solution to (SOC1) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R2(P̂ ) = 3R3(P̂ ) + 2R ′′ 2 (P̂ ) and λ2 = 2 √ U · T 2cmax, (4) where R3(P̂ ) and R′′2 (P̂ ) are defined by (1) and (3). Then, we have L(U∗1 , P, c, x0) ≥ L(U∗2 , P, c, x0)− 2λ2 √ R2(P̂ ∗2 ) . 3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION In Sections 3.1 and 3.2, we derived a loss function to learn the latent space Z . This loss function, that was motivated by the general SOC perspective, consists of two terms to enforce the latent space to not only predict the next observations accurately, but to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR), for solving (SOC2), and show how this choice adds a third term, that corresponds to curvature, to the regularizer of (SOC2), and as a result, to the loss function of our PCC model. 
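Before turning to the curvature term, the computable consistency regularizer in (3) can be sketched directly: with a diagonal-Gaussian encoder and latent dynamics, and a single sample z ∼ E(·|x) approximating the marginalization inside (F ∘ E), the KL term has the usual closed form. This is a sketch under those assumptions, not the authors' code.

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) ) in closed form."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def consistency_term(x, u, x_next, encode, latent_dynamics, rng):
    """One-sample estimate of KL( E(.|x') || (F o E)(.|x, u) ) from a triple (x, u, x')."""
    mu_e, var_e = encode(x_next)                                  # E(. | x')
    mu0, var0 = encode(x)
    z = mu0 + np.sqrt(var0) * rng.standard_normal(mu0.shape)      # z ~ E(. | x)
    mu_f, var_f = latent_dynamics(z, u)                           # F(. | z, u)
    return kl_diag_gaussians(mu_e, var_e, mu_f, var_f)
```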
The main idea in LLC algorithms is to iteratively compute an action sequence to improve the current trajectory, by linearizing the dynamics around this trajectory, and use this action sequence to generate 4With non-zero initial cost, similar results can be derived by having an additional consistency term on x0. the next trajectory (see Appendix B for more details about LLC and iLQR). This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics P̂ and assume that it is not only of the form P̂ = D ◦ F ◦ E, but F , the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function fZ(z, u). Assume F (z, u) = fZ(z, u) + w, where w is a Gaussian noise. Consider the following SOC problem: min U,P̂ E [L(U,F, c, z0) | E, x0] + λLLC √ R2(P̂ ) +RLLC(P̂ ) , (SOC-LLC) where R2 is defined by (4); U is optimized by a LLC algorithm, such as iLQR; RLLC(P̂ ) is given by, RLLC(P̂ ) = Ex,u [ E [ fZ(z + z, u+ u)− fZ(z, u)− (∇zfZ(z, u) · z +∇ufZ(z, u) · u)‖22 ] | E ] , (5) where = ( z, u)> ∼ N (0, δ2I), δ > 0 is a tunable parameter that characterizes the “diameter" of latent state-action space in which the latent dynamics model has low curvature. λLLC = 2 √ 2T 2cmax √ U max ( clip(1 + √ 2 log(2T/η)) √ X/2, 1 ) , where 1/X is the minimum non-zero measure of the sample distribution w.r.t. X , and 1− η ∈ [0, 1) is a probability threshold. Lemma 4 (see Appendix A.5, for proof and discussions on how δ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1, and thus, (SOC-LLC) is a reasonable optimization problem to learn P̂ , and also the latent space Z . Lemma 4. Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action trajectory {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t)}T−1t=0 is the optimal trajectory of (SOC2). Then with proba- bility 1− η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0)− 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) . In practice, instead of solving (SOC-LLC) jointly for U and P̂ , we treat (SOC-LLC) as a bi-level optimization problem, first, solve the inner optimization problem for P̂ , i.e., P̂ ∗ ∈ arg min P̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ), (PCC-LOSS) where R′3(P̂ ) = −Ex,u,x′ [log P̂ (x′|x, u)] is the negative log-likelihood,5 and then, solve the outer optimization problem, minU L(U, F̂ ∗, c̄, z0), where P̂ ∗ = D̂∗◦F̂ ∗◦Ê∗, to obtain the optimal control sequence U∗. Solving (SOC-LLC) this way is an approximation, in general, but is justified, when the regularization parameter λLLC is large. Note that we leave the regularization parameters (λp, λc, λcur) as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning P̂ ∗ in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over fZ , through the regularizers R′3, R ′′ 2 , and RLLC, respectively, we refer to it as the prediction-consistency-curvature (PCC) loss. 4 INSTANTIATING THE PCC MODEL IN PRACTICE The PCC-Model objective in (PCC-LOSS) introduces the optimization problem minP̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ). 
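A minimal sketch of the curvature penalty in (5), using automatic differentiation for the Jacobian-vector products; f_Z is any differentiable latent dynamics network, and the weighting of the three terms shown at the end mirrors (PCC-LOSS) with placeholder coefficients (the actual coefficients are tuned hyper-parameters).

```python
import torch

def curvature_loss(f_Z, z, u, delta=0.1, n_samples=8):
    """Penalise the gap between f_Z at a perturbed point and its first-order Taylor
    expansion around (z, u), as in the curvature term of Section 3.3."""
    loss = 0.0
    for _ in range(n_samples):
        eps_z = delta * torch.randn_like(z)
        eps_u = delta * torch.randn_like(u)
        # directional derivative of f_Z at (z, u) along (eps_z, eps_u)
        f_zu, jvp = torch.autograd.functional.jvp(
            f_Z, (z, u), (eps_z, eps_u), create_graph=True)
        taylor = f_zu + jvp
        loss = loss + ((f_Z(z + eps_z, u + eps_u) - taylor) ** 2).sum()
    return loss / n_samples

# (PCC-LOSS) then combines the three terms with tunable weights, e.g.
# total = lam_p * prediction_nll + lam_c * consistency_kl + lam_cur * curvature_loss(...)
```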
To instantiate this model in practice, we describe P̂ = D ◦ F ◦ E as a latent variable model that factorizes as P̂ (xt+1, zt, ẑt+1 | xt, ut) = P̂ (zt | xt)P̂ (ẑt+1 | zt, ut)P̂ (xt+1 | ẑt+1). In this section, we propose a variational approximation to the intractable negative log-likelihood R′3 and batch-consistency R ′′ 2 losses, and an efficient approximation of the curvature loss RLLC. 4.1 VARIATIONAL PCC The negative log-likelihood 6 R′3 admits a variational bound via Jensen’s Inequality, R′3(P̂ ) = − log P̂ (xt+1 | xt, ut) = − logEQ(zt,ẑt+1|xt,ut,xt+1) [ P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] ≤ −EQ(zt,ẑt+1|xt,ut,xt+1) [ log P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] = R′3,NLE-Bound(P̂ , Q), (6) 5Since R3(P̂ ) is the sum of R′3(P̂ ) and the entropy of P , we replaced it with R′3(P̂ ) in (PCC-LOSS). 6For notation convenience, we drop the expectation over the empirical data that appears in various loss terms. which holds for any choice of recognition model Q. For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as Q(zt, ẑt+1|xt, xt+1, ut) = Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut). The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in (6) via Monte Carlo simulation. To reduce the variance of the estimator, we decompose R′3,NLE-Bound further into − EQ(ẑt+1|xt+1) [ log P̂ (xt+1|ẑt+1) ] + EQ(ẑt+1|xt+1) [ DKL ( Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt) )] −H (Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] , and note that the Entropy H(·) and Kullback-Leibler DKL(·‖·) terms are analytically tractable when Q is restricted to a suitably chosen variational family (i.e. in our experiments, Q(ẑt+1 | xt+1) and Q(zt | ẑt+1, xt, ut) are factorized Gaussians). The derivation is provided in Appendix C.1. Interestingly, the consistency loss R′′2 admits a similar treatment. We note that the consistency loss seeks to match the distribution of ẑt+1 | xt, ut with zt+1 | xt+1, which we represent below as R′′2 (P̂ ) = DKL ( P̂ (zt+1 | xt+1)‖P̂ (ẑt+1 | xt, ut) ) = −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 [ log P̂ (ẑt+1 | xt, ut) ] . Here, P̂ (ẑt+1 | xt, ut) is intractable due to the marginalization of zt. We employ the same procedure as in (6) to construct a tractable variational bound R′′2 (P̂ ) ≤ −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 EQ(zt|ẑt+1,xt,ut) [ log P̂ (zt, ẑt+1 | xt, ut) Q(zt | ẑt+1, xt, ut) ] . We now make the further simplifying assumption that Q(ẑt+1 | xt+1) = P̂ (ẑt+1 | xt+1). This allows us to rewrite the expression as R′′2 (P̂ ) ≤ −H(Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] + EQ(ẑt+1|xt+1) [ DKL(Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt)) ] = R′′2,Bound(P̂ , Q), (7) which is a subset of the terms in (6). See Appendix C.2 for a detailed derivation. 4.2 CURVATURE REGULARIZATION AND AMORTIZED GRADIENT In practice we use a variant of the curvature loss where Taylor expansions and gradients are evaluated at z̄ = z + z and ū = u+ u, RLLC(P̂ ) = E ∼N (0,δI)[‖fZ(z̄, ū)− (∇zfZ(z̄, ū) z +∇ufZ(z̄, ū) u)− fZ(z, u)‖22]. (8) When nz is large, evaluation and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobians evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. 
This leads to a new amortized curvature loss R_LLC-Amor(P̂, A, B) = E_{ε∼N(0,δ²I)}[‖f_Z(z̄, ū) − (A(z̄, ū)ε_z + B(z̄, ū)ε_u) − f_Z(z, u)‖²₂], (9) where A and B are function approximators to be optimized. Intuitively, the amortized curvature loss seeks—for any given (z, u)—to find the best choice of linear approximation, induced by A(z, u) and B(z, u), such that the behavior of Fµ in the neighborhood of (z, u) is approximately linear. 5 RELATION TO PREVIOUS EMBED-TO-CONTROL APPROACHES In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC’s use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employ “locally-linear dynamics” of the form z′ = A(z̄, ū)z + B(z̄, ū)u + c(z̄, ū), where z̄ and ū are auxiliary random variables meant to be perturbations of z and u. When contrasted with (9), it is clear that neither A nor B in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements, prediction and curvature. This makes controlling the stability of training much more difficult. Not only does PCC explicitly separate these two components, but we are also the first to explicitly demonstrate, theoretically and empirically, that the curvature loss is important for iLQR. Furthermore, RCE does not incorporate PCC’s consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, sole reliance on minimizing the prediction loss results in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) and how it is used at test time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrepancy between how the model is trained and how it is used at test time for planning. Interestingly, E2C does include a regularization term that is akin to PCC’s consistency loss. However, as noted by the authors of RCE, E2C’s maximization of the pair-marginal log-likelihood of (xt, xt+1), as opposed to the conditional likelihood of xt+1 given xt, means that E2C does not properly minimize the prediction loss prescribed by the PCC framework. 6 EXPERIMENTS In this section, we compare the performance of PCC with two model-based control algorithm baselines, RCE7 (Banijamali et al., 2018) and E2C (Watter et al., 2015), and run a thorough ablation study on the various components of PCC. The experiments are based on the following continuous control benchmark domains (see Appendix D for more descriptions): (i) Planar System, (ii) Inverted Pendulum, (iii) Cartpole, (iv) 3-link manipulator, and (v) TORCS simulator8 (Wymann et al., 2000).
To generate our training and test sets, each consists of triples (xt, ut, xt+1), we: (1) sample an underlying state st and generate its corresponding observation xt, (2) sample an action ut, and (3) obtain the next state st+1 according to the state transition dynamics, add it a zero-mean Gaussian noise with variance σ2Ins , and generate corresponding observation xt+1.To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (st, ut) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models’ performance under various degree of noise. Each task has underlying start and goal states that are unobservable to the algorithms, instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely, c̄(zt, ut) = (zt − zgoal)>Q(zt − zgoal) + u>t Rut, and c̄(zT ) = (zT − zgoal)>Q(zT − zgoal), where zgoal is obtained by encoding the goal observation, and Q = κ · Inz , R = Inu9. Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region10. A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following 2-step pipeline. For each control task and algorithm, we (1) train 10 models 7For the RCE implementation, we directly optimize the ELBO loss in Equation (16) of the paper. We also tried the approach reported in the paper on increasing the weights of the two middle terms and then annealing them to 1. However, in practice this method is sensitive to annealing schedule and has convergence issues. 8See a control demo on the TORCS simulator at https://youtu.be/GBrgALRZ2fw 9According to the definition of latent cost c̄(z, u) = D ◦ c(z, u), its quadratic approximation is given by c̄(z, u) ≈ [ z − zgoal u ]> [∇z ∇u ] D◦c|z=zgoal,u=0 + 1 2 [ z − zgoal u ]> [∇2zz ∇2zu ∇2uz ∇2uu ] D◦c|z=zgoal,u=0 [ z − zgoal u ] . Yet for simplicity, we choose the same latent cost as in RCE and E2C with fixed, tunable matrices Q and R. 10Another possible metric is the average distance to goal, which has a similar behavior. independently, and (2) solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of 10× 10 = 100 control tasks). We report statistics averaged over all the tasks (in addition, we report the best performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines where the best model needs to be cherry picked, and training variance was not reported. Results Table 1 shows how PCC outperforms the baseline algorithms in the noiseless dynamics case by comparing means and standard deviations of the means on the different control tasks (for the case of added noise to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different than that of the best model, which justifies our rationale behind using the reproducible evaluation pipeline and avoid cherry-picking when reporting. 
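The evaluation protocol above can be summarized by a short sketch (function names are placeholders): train several models per algorithm and task, run several control episodes per model, and report statistics over all model/task pairs rather than a single cherry-picked model.

```python
import numpy as np

def evaluate_algorithm(train_model, run_control_task, n_models=10, n_tasks=10):
    """train_model(seed) -> model; run_control_task(model, task) -> array of booleans,
    one per time step, indicating whether the underlying state is in the goal region."""
    scores = np.zeros((n_models, n_tasks))
    for seed in range(n_models):
        model = train_model(seed)                      # independent training runs
        for task in range(n_tasks):
            in_goal = run_control_task(model, task)
            scores[seed, task] = np.mean(in_goal)      # % of steps spent in the goal region
    per_model = scores.mean(axis=1)
    # statistics over all n_models x n_tasks control tasks, plus the best single model
    return scores.mean(), scores.std(), per_model.max()
```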
Figure 2 depicts 2 instances (randomly chosen from the 10 trained models) of the learned latent space representations on the noiseless dynamics of Planar and Inverted Pendulum tasks for PCC, RCE, and E2C models (additional representations can be found in Appendix E.2). Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC has a more interpretable representation of both Planar and Inverted Pendulum Systems than other baselines for both the noiseless dynamics case and the noisy case. Finally, in terms of computation, PCC demonstrates faster training with 64% improvement over RCE, and 2% improvement over E2C.11 Ablation Analysis On top of comparing the performance of PCC to the baselines, in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when consistency loss is omitted, the control performance degrades. This corroborates with the theoretical results in Section 3.2, which indicates the relationship of the consistency loss and the estimation error between the next-latent dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between control objective function and the model training loss is widened, due to the accumulation of state estimation error. The control performance also decreases when one removes the curvature loss. This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This also confirms with the analysis in Section 3.3 about sub-optimality performance and curvature of latent dynamics. Finally, we observe that the performance of models trained without amortized curvature loss are slightly better than with their amortized counterpart, however, since the amortized curvature loss does not require computing gradient of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-up of 6%, 9%, and 15% for Planar System, Inverted Pendulum, and Cartpole, respectively). 7 CONCLUSION In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between latent transition and 11Comparison jobs were deployed on the Planar system using Nvidia TITAN Xp GPU. the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, the low-curvature dynamics is desirable. All three elements of our PCC models are critical to the stability of model training and the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will exert different requirement for the learned dynamics. A future direction is to identify and investigate the additional bias for learning an effective embedding and latent dynamics for other type of model-based control and planning methods. 
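The ablations described above amount to zeroing individual weights in the PCC loss (and toggling the amortized curvature variant); a sketch with placeholder weight values, since the actual coefficients are tuned hyper-parameters:

```python
# Placeholder weights; the paper leaves (lam_p, lam_c, lam_cur) as hyper-parameters.
ABLATIONS = {
    "PCC (full)":      dict(lam_p=1.0, lam_c=1.0, lam_cur=1.0),
    "w/o consistency": dict(lam_p=1.0, lam_c=0.0, lam_cur=1.0),
    "w/o curvature":   dict(lam_p=1.0, lam_c=1.0, lam_cur=0.0),
}
# A separate flag selects the curvature variant: the exact loss in (8) or the amortized
# loss in (9), which avoids differentiating through f_Z inside the loss.
USE_AMORTIZED_CURVATURE = True

def pcc_loss(prediction_nll, consistency_kl, curvature, lam_p, lam_c, lam_cur):
    return lam_p * prediction_nll + lam_c * consistency_kl + lam_cur * curvature
```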
A TECHNICAL PROOFS OF SECTION 3 A.1 PROOF OF LEMMA 1 Following analogous derivations of Lemma 11 in Petrik et al. (2016) (for the case of finite-horizon MDPs), for the case of finite-horizon MDPs, one has the following chain of inequalities for any given control sequence {ut}T−1t=0 and initial observation x0: |L(U, P̂ , x0)− L(U,P, x0)| = ∣∣∣∣∣E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) | P̂ , x0 ] − E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) |P, x0 ]∣∣∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] , where DTV is the total variation distance of two distributions. The first inequality is based on the result of the above lemma, the second inequality is based on Pinsker’s inequality (Ordentlich & Weinberger, 2005), and the third inequality is based on Jensen’s inequality (Boyd & Vandenberghe, 2004) of √ (·) function. Now consider the expected cumulative KL cost: E [ 1 T ∑T−1 t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] with respect to some arbitrary control action sequence {ut}T−1t=0 . Notice that this arbitrary action sequence can always be expressed in form of deterministic policy ut = π′(xt, t) with some nonstationary state-action mapping π′. Therefore, this KL cost can be written as: E [ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, π, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut))dπ′(ut|xt, t) | P, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut)) · dπ′(ut|xt, t) dU(ut) · dU(ut) | P, x0 ] ≤U · Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] , (10) where the expectation is taken over the state-action occupation measure 1T ∑T−1 t=0 P(xt = x, ut = u|x0, U) of the finite-horizon problem that is induced by data-sampling policy U . The last inequality is due to change of measures in policy, and the last inequality is due to the facts that (i) π is a deterministic policy, (ii) dU(ut) is a sampling policy with lebesgue measure 1/U over all control actions, (iii) the following bounds for importance sampling factor holds: ∣∣∣dπ′(ut|xt,t)dU(ut) ∣∣∣ ≤ U . To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P̂ and control sequence U : |L(U, P̂ , x0)− L(U,P, x0)| ≤ √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] . (11) For the second part of the proof, consider the solution of (SOC3), namely (U∗3 , P̂ ∗ 3 ). Using the optimality condition of this problem one obtains the following inequality: L(U∗3 , P̂ ∗ 3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≤L(U∗1 , P̂ ∗3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] . (12) Using the results in (11) and (12), one can then show the following chain of inequalities: L(U∗1 , P, c, x0) ≥L(U∗1 , P̂ ∗3 , c, x0)− √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] =L(U∗1 , P̂ ∗ 3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P̂ ∗3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P, c, x0)− 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] , (13) where U∗1 is the optimizer of (SOC1) and (U ∗ 3 , P̂ ∗ 3 ) is the optimizer of (SOC3). 
Therefore by letting λ3 = √ 2T 2 · cmaxU and R3(P̂ ) = Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] and by combining all of the above arguments, the proof of the above lemma is completed. A.2 PROOF OF LEMMA 2 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 , and any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P̂ ,x0] = ∫ x0:t−1∈X t t−1∏ k=1 dP̂ (xk|xk−1, uk−1)·∫ zt∈Z ∫ z′t−1∈Z dE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1) ∫ xt∈X dD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) . Now consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] = ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1∈Z dF (zt−1|z′t−2, ut−2)( c̄(zt−1, ut−1) + ∫ xt−1∈X dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) ) ≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ zt−2∈Z dE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) ) + cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) By continuing the above expansion, one can show that∣∣∣E [L(U,F, c, z0) | E, x0]− L(U, P̂ , c, x0)∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] , where the last inequality is based on Jensen’s inequality of √ (·) function. For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2): L(U∗3 , P̂ ∗ 3 , c, x0) ≥E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0]− √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] =E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥L(U∗2 , P̂ ∗2 , c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2 · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] ︸ ︷︷ ︸ R′′2 (P̂ ∗ 2 ) , (14) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A.3 PROOF OF COROLLARY 1 To start with, the total-variation distance DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) can be bounded by the following inequality using triangle inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV (∫ x′∈X dP (x′|x, u)E(·|x′)|| ∫ x′∈X dP̂ (x′|x, u)E(·|x′) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV ( P (·|x, u)||P̂ (·|x, u) ) where the second inequality follows from the convexity property of the DTV-norm (w.r.t. convex weights E(·|x′), ∀x′). Then by Pinsker’s inequality, one obtains the following inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) + √ 2KL ( P (·|x, u)||P̂ (·|x, u) ) . 
1. What is the main contribution of the paper regarding low-dimensional representations for control purposes? 2. What are the strengths of the proposed PCC-Loss function and variational PCC method? 3. What are the limitations of the paper, particularly in terms of practical applications and comparisons with other methods? 4. How does the reviewer assess the technical quality of the theoretical part and the gap between theory and implementation? 5. What are some minor comments and questions from the reviewer regarding direct control approaches, curvature principles, Markovian assumptions, and justification?
Review
Review
This paper considers learning low-dimensional representations from high-dimensional observations for control purposes. The authors extend the E2C framework by introducing the new PCC-Loss function. This new loss function aims to reflect the prediction in the observation space, the consistency between latent and observation dynamics, and the low curvature in the latent dynamics. The low-curvature term is used to bias the latent dynamics towards models that can be better approximated as locally linear models. The authors provide theory (error bounds) to justify their proposed PCC-Loss function. Then variational PCC is developed to make the algorithm tractable. The proposed method is evaluated on 5 different simulated tasks and compared with the original E2C method and the RCE method. The paper is well-written.
Pros:
- The idea in this paper is quite original. The three principles used to formulate the loss function provide some new insights.
- The authors have proposed a theory to justify the use of their loss function. The technical quality of this part seems solid.
- Simulations have been used to show that the proposed PCC method outperforms E2C and RCE.
- The paper is well written.
Cons:
- The tasks in this paper are not that complicated. It is unclear whether the proposed method outperforms other model-based RL methods such as SOLAR and DSAE for practical robotic applications. More comparisons are needed.
- It is also not clear why one wants SOC3 to be close to SOC1 in the first place. It seems the true optimization problem should be posed on the space of the original state s; SOC1 is just a surrogate problem for the original problem.
- There seems to be a gap between the proposed theory and the algorithm implementation. This makes the theory part less useful.
Overall, I think the idea in this paper is interesting. The authors have made a serious effort in coming up with principles for model-based RL control. But at this moment it is not that convincing that the proposed method will be the best model-based RL method for practical robotic applications. If the authors can address my comments, I will be willing to increase my score.
Minor Comments:
- It seems that for the tasks on which the authors have tested their method, it is not that difficult to directly estimate the state. Am I correct here? Can the authors comment on this? How does their approach compare with a more direct control approach that uses an estimate of the state s?
- I have never seen the curvature principle in any control papers. Is there a control reference on why this is a good principle? It seems that the linearization works well when the control inputs are around the reference points. Does the curvature really matter that much for iLQR to work?
- How can the Markovian assumption on x be justified? Just by observation, or is there a more principled way to test this assumption on the buffered images?
====================================================
Post-Rebuttal: After reading the authors' response, I am changing my score to weak accept. Lemma 4 is nice; I have not seen anything similar to it in the controls literature before. The authors have addressed most of my concerns. I still have a few comments for preparing the final version of this paper. 1. I still don't see why SOC1 is the "original problem." Yes, it is assumed that the true state cannot be directly observed. But if the observations are eventually Markov, then some estimated version of the states can be obtained, right? 
I think treating SOC1 as the original problem is one possible way of doing things, and clearly the authors have built a principled framework for doing things in this way. I hope the authors can clarify this and do not oversell the proposed approach. 2. I think it is still worth comparing SOLAR and PCC empirically. This will help readers choose between algorithms when they need to. 3. The comment on the verification of the Markov assumption is hand-waving. The authors said "A simple test would be to see if a control algorithm with the Markovian assumption works well with our representation or not." Does this mean that users will not be able to verify this assumption before using the proposed approach to obtain controllers? It would be helpful if the authors could explain this step for one specific example in detail.
ICLR
Title Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control Abstract Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lowerdimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms? In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control. 1 INTRODUCTION Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge. The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). 
There have been two main approaches to this problem: 1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). This can be later combined with RL for extra fine-tuning of the model and control. In this paper, we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect ∗Equal contribution. Correspondence to nirlevine@google.com the process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics. The second is consistency: given the ability to encode a observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help training with curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation, the importance of regularizing the model to have consistency and low-curvature. 2 PROBLEM FORMULATION We are interested in controlling the non-linear dynamical systems of the form st+1 = fS(st, ut) +w, over the horizon T . In this definition, st ∈ S ⊆ Rns and ut ∈ U ⊆ Rnu are the state and action of the system at time step t ∈ {0, . . . , T − 1}, w is the Gaussian system noise, and fS is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation xt ∈ X ⊆ Rnx of each state st (nx ns). This scenario has application in many real-world problems, such as visual-servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. 
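To make this setting concrete, the following is a small illustrative sketch (our own toy example, not taken from the paper): a pendulum whose two-dimensional state s = (θ, ω) is only ever exposed to the agent through a 256-dimensional image-like observation, so the observation dimension is much larger than the state dimension. The functions f_S and observe are hypothetical stand-ins for the unknown dynamics and the observation process.

```python
import numpy as np

def f_S(s, u, dt=0.05, g=9.8, noise_std=0.01):
    """One step of the (unknown to the agent) pendulum dynamics s' = f_S(s, u) + w."""
    theta, omega = s
    omega = omega + dt * (-g * np.sin(theta) + u)
    theta = theta + dt * omega
    return np.array([theta, omega]) + noise_std * np.random.randn(2)

def observe(s, res=16):
    """High-dimensional observation x of the low-dimensional state s:
    a res x res 'image' with a bright blob at the pendulum tip (256 pixels vs. 2 state dims)."""
    theta = s[0]
    tip = np.array([0.5 + 0.4 * np.sin(theta), 0.5 - 0.4 * np.cos(theta)])
    grid = np.stack(np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res)), -1)
    img = np.exp(-np.sum((grid - tip) ** 2, axis=-1) / 0.01)
    return img.ravel()

s = np.array([np.pi / 4, 0.0])   # underlying state (never visible to the agent)
x = observe(s)                    # the agent only ever sees x
s_next = f_S(s, u=0.2)
x_next = observe(s_next)
print(x.shape, x_next.shape)      # (256,) (256,)
```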
We further assume that the high-dimensional observations x have been selected such that for any arbitrary control sequence U = {ut}T−1t=0 , the observation sequence {xt}Tt=0 is generated by a stationary Markov process, i.e., xt+1 ∼ P (·|xt, ut), ∀t ∈ {0, . . . , T − 1}.1 A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes expected cumulative cost: min U L(U,P, c, x0) := E [ cT (xT ) + T−1∑ t=0 ct(xt, ut) | P, x0 ] , 2 (SOC1) where ct : X ×U → R≥0 is the immediate cost function at time t, cT ∈ R≥0 is the terminal cost, and x0 is the observation at the initial state s0. Note that all immediate costs are defined in the observation space X , and are bounded by cmax > 0 and Lipschitz with constant clip > 0. For example, in visualservoing, (SOC1) can be formulated as a goal tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation xgoal, and the objective is to compute a sequence of optimal open-loop actions U that minimizes the cumulative tracking error E[ ∑ t ‖xt − xgoal‖2 | P, x0]. Since the observations x are high dimensional and the dynamics in the observation space P (·|xt, ut) is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has been recently developed that is based on learning a low-dimensional latent (embedding) space Z ⊆ Rnz (nz nx) and latent state dynamics, and performing optimal control there. This class that we refer to as learning controllable embedding (LCE) throughout the paper, include recently developed algorithms, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). The main idea behind the LCE approach is to learn a triplet, (i) an encoderE : X → P(Z); (ii) a dynamics in the latent space F : Z ×U → P(Z); and (iii) a decoder D : Z → P(X ). These in turn can be thought of as defining a (stochastic) mapping P̂ : X ×U → P(X ) of the form P̂ = D ◦F ◦E. We then wish to solve the SOC in latent space Z: min U,P̂ E [ L(U,F, c, z0) | E, x0 ] + λ2 √ R2(P̂ ), (SOC2) such that the solution of (SOC2), U∗2 , has similar performance to that of (SOC1), U ∗ 1 , i.e., L(U∗1 , P, c, x0) ≈ L(U∗2 , P, c, x0). In (SOC2), z0 is the initial latent state sampled from the encoder E(·|x0); c̄ : Z × U → R≥0 is the latent cost function defined as c̄t(zt, ut) =∫ ct(xt, ut)dD(xt|zt); R2(P̂ ) is a regularizer over the mapping P̂ ; and λ2 is the corresponding 1A method to ensure this Markovian assumption is by buffering observations (Mnih et al., 2013) for a number of time steps. 2See Appendix B.3 for the extension to the closed-loop MDP problem. tion SOC2 under dynamics F , and (c)(red) in equation SOC3 under dynamics P̂ . regularization parameter. We will define R2 and λ2 more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder E. 3 PCC MODEL: A CONTROL PERSPECTIVE As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics P , as shown at the bottom row of Figure 1(a) in (blue). However, because of the difficulties in solving (SOC1), mainly due to the high dimension of observations x, LCE proposes to learn a mapping P̂ by solving (SOC2) that consists of a loss function, whose states evolve under dynamics F (after an initial transition by encoder E), as depicted in Figure 1(b), and a regularization term. 
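As a minimal sketch of how the objective of (SOC2) can be estimated in practice, the Monte Carlo rollout below samples z0 from the encoder and propagates it through the latent dynamics, accumulating the latent cost along the way. Here encoder, latent_dynamics, latent_cost, and terminal_cost are hypothetical placeholders for E, F, c̄, and the terminal latent cost, and the toy instantiation at the bottom exists only to make the sketch runnable.

```python
import numpy as np

def latent_soc_objective(x0, U, encoder, latent_dynamics, latent_cost,
                         terminal_cost, n_samples=32):
    """Monte Carlo estimate of E[ c_T(z_T) + sum_t cbar(z_t, u_t) | E, x0 ] in (SOC2).

    encoder(x)            -> sample z ~ E(.|x)
    latent_dynamics(z, u) -> sample z' ~ F(.|z, u)
    latent_cost(z, u)     -> cbar(z, u);  terminal_cost(z) -> terminal latent cost
    """
    total = 0.0
    for _ in range(n_samples):
        z = encoder(x0)                 # initial latent state z0 ~ E(.|x0)
        cost = 0.0
        for u in U:                     # open-loop action sequence of length T
            cost += latent_cost(z, u)
            z = latent_dynamics(z, u)
        cost += terminal_cost(z)
        total += cost
    return total / n_samples

# toy placeholders, just to exercise the estimator
nz, T = 3, 5
rng = np.random.default_rng(0)
U = [rng.normal(size=1) for _ in range(T)]
est = latent_soc_objective(
    x0=rng.normal(size=10),
    U=U,
    encoder=lambda x: x[:nz] + 0.1 * rng.normal(size=nz),
    latent_dynamics=lambda z, u: 0.9 * z + 0.1 * u + 0.01 * rng.normal(size=nz),
    latent_cost=lambda z, u: float(z @ z + u @ u),
    terminal_cost=lambda z: float(z @ z),
)
print(est)
```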
The role of the regularizer R2 is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a)(blue) and 1(b)(green). The goal of LCE is to learn P̂ of the particular form P̂ = D ◦ F ◦ E, described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer R2 to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning P̂ , designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space Z . In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space X under dynamics P , and the other one in the latent space Z (after an initial transition from X to Z) under dynamics F . Unlike P and F that only operate in a single space, X and Z , respectively, P̂ can govern the evolution of the system in both X and Z (see Figure 1(c)). Therefore, any recipe to learn P̂ , and as a result the latent space Z , should have at least two terms, to guarantee that the evolution paths resulted from P̂ in X and Z are consistent with those generated by P and F . We derive these two terms, that are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning P̂ in general SOC problems, in Section 3.3, we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) to solve SOC, and add the third term, curvature, to our recipe for learning P̂ . 3.1 PREDICTION OF THE NEXT OBSERVATION Figures 1(a)(blue) and 1(c)(red) show the transition in the observation space under P and P̂ , where xt is the current observation, and xt+1 and x̂t+1 are the next observations under these two dynamics, respectively. Instead of learning a P̂ with minimum mismatch with P in terms of some distribution norm, we propose to learn P̂ by solving the following SOC: min U,P̂ L(U, P̂ , c, x0) + λ3 √ R3(P̂ ), (SOC3) whose loss function is the same as the one in (SOC1), with the true dynamics replaced by P̂ . In Lemma 1 (see Appendix A.1, for proof), we show how to set the regularization term R3 in (SOC3), such that the control sequence resulted from solving (SOC3), U∗3 , has similar performance to the solution of (SOC1), U∗1 , i.e., L(U ∗ 1 , P, c, x0) ≈ L(U∗3 , P, c, x0). Lemma 1. Let U∗1 be a solution to (SOC1) and (U∗3 , P̂ ∗3 ) be a solution to (SOC3) with R3(P̂ ) = Ex,u [ DKL ( P (·|x, u)||P̂ (·|x, u) )] and λ3 = √ 2U · T 2cmax. (1) Then, we have L(U∗1 , P, c, x0) ≥ L(U∗3 , P, c, x0)− 2λ3 √ R3(P̂ ∗3 ). In Eq. 1, the expectation is over the state-action stationary distribution of the policy used to generate the training samples (uniformly random policy in this work), and U is the Lebesgue measure of U .3 3In the case when sampling policy is non-uniform and has no measure-zero set, 1/U is its minimum measure. 3.2 CONSISTENCY IN PREDICTION OF THE NEXT LATENT STATE In Section 3.1, we provided a recipe for learning P̂ (in form of D ◦ F ◦ E) by introducing an intermediate (SOC3) that evolves in the observation space X according to dynamics P̂ . In this section we first connect (SOC2) that operates in Z with (SOC3) that operates in X . 
For simplicity and without loss generality, assume the initial cost c0(x, u) is zero.4 Lemma 2 (see Appendix A.2, for proof) suggests how we shall set the regularizer in (SOC2), such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models. Lemma 2. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′2(P̂ ) = Ex,u [ DKL (( E ◦ P̂ ) (·|x, u)|| ( F ◦ E ) (·|x, u) )] and λ2 = √ 2U · T 2cmax. (2) Then, we have L(U∗3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0)− 2λ2 √ R′2(P̂ ∗ 2 ) . Similar to Lemma 1, in Eq. 2, the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, ( E ◦ P̂ ) (z′|x, u) = ∫ x′ E(z′|x′)dP̂ (x′|x, u) and ( F ◦E ) (z′|x, u) = ∫ z F (z′|z, u)dE(z|x) are the probability over the next latent state z′, given the current observation x and action u, in (SOC2) and (SOC3) (see the paths xt → zt → z̃t+1 and xt → zt → z̃t+1 → x̂t+1 → ẑt+1 in Figures 1(b)(green) and 1(c)(red)). Therefore R′2(P̂ ) can be interpreted as the measure of discrepancy between these models, which we term as consistency loss. Although Lemma 2 provides a recipe to learn P̂ by solving (SOC2) with the regularizer (2), unfortunately this regularizer cannot be computed from the data – that is of the form (xt, ut, xt+1) – because the first term in the DKL requires marginalizing over current and next latent states (zt and z̃t+1 in Figure 1(c)). To address this issue, we propose to use the (computable) regularizer R′′2 (P̂ ) = Ex,u,x′ [ DKL ( E(·|x′)|| ( F ◦ E ) (·|x, u) )] , (3) in which the expectation is over (x, u, x′) sampled from the training data. Corollary 1 (see Appendix A.3, for proof) bounds the performance loss resulted from using R′′2 (P̂ ) instead of R ′ 2(P̂ ), and shows that it could be still a reasonable choice. Corollary 1. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′′2 (P̂ ) and and λ2 defined by (3) and (2). Then, we have L(U ∗ 3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0) − 2λ2 √ 2R′′2 (P̂ ∗ 2 ) + 2R3(P̂ ∗ 2 ) . Lemma 1 suggests a regularizer R3 to connect the solutions of (SOC1) and (SOC3). Similarly, Corollary 1 shows that regularizer R′′2 in (3) establishes a connection between the solutions of (SOC3) and (SOC2). Putting these results together, we achieve our goal in Lemma 3 (see Appendix A.4, for proof) to design a regularizer for (SOC2), such that its solution performs similarly to that of (SOC1). Lemma 3. Let U∗1 be a solution to (SOC1) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R2(P̂ ) = 3R3(P̂ ) + 2R ′′ 2 (P̂ ) and λ2 = 2 √ U · T 2cmax, (4) where R3(P̂ ) and R′′2 (P̂ ) are defined by (1) and (3). Then, we have L(U∗1 , P, c, x0) ≥ L(U∗2 , P, c, x0)− 2λ2 √ R2(P̂ ∗2 ) . 3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION In Sections 3.1 and 3.2, we derived a loss function to learn the latent space Z . This loss function, that was motivated by the general SOC perspective, consists of two terms to enforce the latent space to not only predict the next observations accurately, but to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR), for solving (SOC2), and show how this choice adds a third term, that corresponds to curvature, to the regularizer of (SOC2), and as a result, to the loss function of our PCC model. 
The main idea in LLC algorithms is to iteratively compute an action sequence to improve the current trajectory, by linearizing the dynamics around this trajectory, and use this action sequence to generate 4With non-zero initial cost, similar results can be derived by having an additional consistency term on x0. the next trajectory (see Appendix B for more details about LLC and iLQR). This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics P̂ and assume that it is not only of the form P̂ = D ◦ F ◦ E, but F , the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function fZ(z, u). Assume F (z, u) = fZ(z, u) + w, where w is a Gaussian noise. Consider the following SOC problem: min U,P̂ E [L(U,F, c, z0) | E, x0] + λLLC √ R2(P̂ ) +RLLC(P̂ ) , (SOC-LLC) where R2 is defined by (4); U is optimized by a LLC algorithm, such as iLQR; RLLC(P̂ ) is given by, RLLC(P̂ ) = Ex,u [ E [ fZ(z + z, u+ u)− fZ(z, u)− (∇zfZ(z, u) · z +∇ufZ(z, u) · u)‖22 ] | E ] , (5) where = ( z, u)> ∼ N (0, δ2I), δ > 0 is a tunable parameter that characterizes the “diameter" of latent state-action space in which the latent dynamics model has low curvature. λLLC = 2 √ 2T 2cmax √ U max ( clip(1 + √ 2 log(2T/η)) √ X/2, 1 ) , where 1/X is the minimum non-zero measure of the sample distribution w.r.t. X , and 1− η ∈ [0, 1) is a probability threshold. Lemma 4 (see Appendix A.5, for proof and discussions on how δ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1, and thus, (SOC-LLC) is a reasonable optimization problem to learn P̂ , and also the latent space Z . Lemma 4. Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action trajectory {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t)}T−1t=0 is the optimal trajectory of (SOC2). Then with proba- bility 1− η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0)− 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) . In practice, instead of solving (SOC-LLC) jointly for U and P̂ , we treat (SOC-LLC) as a bi-level optimization problem, first, solve the inner optimization problem for P̂ , i.e., P̂ ∗ ∈ arg min P̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ), (PCC-LOSS) where R′3(P̂ ) = −Ex,u,x′ [log P̂ (x′|x, u)] is the negative log-likelihood,5 and then, solve the outer optimization problem, minU L(U, F̂ ∗, c̄, z0), where P̂ ∗ = D̂∗◦F̂ ∗◦Ê∗, to obtain the optimal control sequence U∗. Solving (SOC-LLC) this way is an approximation, in general, but is justified, when the regularization parameter λLLC is large. Note that we leave the regularization parameters (λp, λc, λcur) as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning P̂ ∗ in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over fZ , through the regularizers R′3, R ′′ 2 , and RLLC, respectively, we refer to it as the prediction-consistency-curvature (PCC) loss. 4 INSTANTIATING THE PCC MODEL IN PRACTICE The PCC-Model objective in (PCC-LOSS) introduces the optimization problem minP̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ). 
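As an illustration of this objective, the sketch below composes the three regularizers into a single weighted loss. It is deliberately simplified and is not the paper's implementation: the encoder, latent dynamics, and decoder are small deterministic networks (equivalently, unit-variance Gaussians), so the prediction term R′3 and the consistency term R′′2 reduce to squared errors, and the curvature term follows the Taylor-expansion penalty in (5) via a Jacobian-vector product. All network sizes and loss weights are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jvp

nx, nz, nu = 16, 3, 1
E = nn.Sequential(nn.Linear(nx, 32), nn.ReLU(), nn.Linear(32, nz))           # encoder mean
F = nn.Sequential(nn.Linear(nz + nu, 32), nn.ReLU(), nn.Linear(32, nz))      # latent dynamics mean
D = nn.Sequential(nn.Linear(nz, 32), nn.ReLU(), nn.Linear(32, nx))           # decoder mean

def f_Z(z, u):
    return F(torch.cat([z, u], dim=-1))

def pcc_loss(x, u, x_next, lam_p=1.0, lam_c=1.0, lam_cur=0.1, delta=0.1):
    z = E(x)
    z_next_pred = f_Z(z, u)                                   # F(E(x), u)
    # prediction (R'_3): encode, transition, decode should reproduce the next observation
    pred = ((D(z_next_pred) - x_next) ** 2).mean()
    # consistency (R''_2): the predicted next latent should match the encoding of x_next
    consist = ((z_next_pred - E(x_next)) ** 2).mean()
    # curvature (R_LLC): deviation of f_Z from its first-order Taylor expansion at (z, u)
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    f0, dir_deriv = jvp(f_Z, (z, u), (eps_z, eps_u), create_graph=True)
    curv = ((f_Z(z + eps_z, u + eps_u) - f0 - dir_deriv) ** 2).mean()
    return lam_p * pred + lam_c * consist + lam_cur * curv

x, u, x_next = torch.randn(8, nx), torch.randn(8, nu), torch.randn(8, nx)
loss = pcc_loss(x, u, x_next)
loss.backward()   # gradients w.r.t. all encoder / dynamics / decoder parameters
```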
To instantiate this model in practice, we describe P̂ = D ◦ F ◦ E as a latent variable model that factorizes as P̂ (xt+1, zt, ẑt+1 | xt, ut) = P̂ (zt | xt)P̂ (ẑt+1 | zt, ut)P̂ (xt+1 | ẑt+1). In this section, we propose a variational approximation to the intractable negative log-likelihood R′3 and batch-consistency R ′′ 2 losses, and an efficient approximation of the curvature loss RLLC. 4.1 VARIATIONAL PCC The negative log-likelihood 6 R′3 admits a variational bound via Jensen’s Inequality, R′3(P̂ ) = − log P̂ (xt+1 | xt, ut) = − logEQ(zt,ẑt+1|xt,ut,xt+1) [ P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] ≤ −EQ(zt,ẑt+1|xt,ut,xt+1) [ log P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] = R′3,NLE-Bound(P̂ , Q), (6) 5Since R3(P̂ ) is the sum of R′3(P̂ ) and the entropy of P , we replaced it with R′3(P̂ ) in (PCC-LOSS). 6For notation convenience, we drop the expectation over the empirical data that appears in various loss terms. which holds for any choice of recognition model Q. For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as Q(zt, ẑt+1|xt, xt+1, ut) = Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut). The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in (6) via Monte Carlo simulation. To reduce the variance of the estimator, we decompose R′3,NLE-Bound further into − EQ(ẑt+1|xt+1) [ log P̂ (xt+1|ẑt+1) ] + EQ(ẑt+1|xt+1) [ DKL ( Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt) )] −H (Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] , and note that the Entropy H(·) and Kullback-Leibler DKL(·‖·) terms are analytically tractable when Q is restricted to a suitably chosen variational family (i.e. in our experiments, Q(ẑt+1 | xt+1) and Q(zt | ẑt+1, xt, ut) are factorized Gaussians). The derivation is provided in Appendix C.1. Interestingly, the consistency loss R′′2 admits a similar treatment. We note that the consistency loss seeks to match the distribution of ẑt+1 | xt, ut with zt+1 | xt+1, which we represent below as R′′2 (P̂ ) = DKL ( P̂ (zt+1 | xt+1)‖P̂ (ẑt+1 | xt, ut) ) = −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 [ log P̂ (ẑt+1 | xt, ut) ] . Here, P̂ (ẑt+1 | xt, ut) is intractable due to the marginalization of zt. We employ the same procedure as in (6) to construct a tractable variational bound R′′2 (P̂ ) ≤ −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 EQ(zt|ẑt+1,xt,ut) [ log P̂ (zt, ẑt+1 | xt, ut) Q(zt | ẑt+1, xt, ut) ] . We now make the further simplifying assumption that Q(ẑt+1 | xt+1) = P̂ (ẑt+1 | xt+1). This allows us to rewrite the expression as R′′2 (P̂ ) ≤ −H(Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] + EQ(ẑt+1|xt+1) [ DKL(Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt)) ] = R′′2,Bound(P̂ , Q), (7) which is a subset of the terms in (6). See Appendix C.2 for a detailed derivation. 4.2 CURVATURE REGULARIZATION AND AMORTIZED GRADIENT In practice we use a variant of the curvature loss where Taylor expansions and gradients are evaluated at z̄ = z + z and ū = u+ u, RLLC(P̂ ) = E ∼N (0,δI)[‖fZ(z̄, ū)− (∇zfZ(z̄, ū) z +∇ufZ(z̄, ū) u)− fZ(z, u)‖22]. (8) When nz is large, evaluation and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobians evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. 
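A sketch of this amortization idea, anticipating the amortized loss defined next: A(z̄, ū) and B(z̄, ū) are predicted directly by small (hypothetical) networks and play the role of the local linearization coefficients, so no differentiation through the Jacobians of fZ is required; the penalty follows the same pattern as the Jacobian-based loss above.

```python
import torch
import torch.nn as nn

nz, nu, delta = 3, 1, 0.1
f_Z = nn.Sequential(nn.Linear(nz + nu, 32), nn.ReLU(), nn.Linear(32, nz))
# A(z,u) and B(z,u): amortized "Jacobians", predicted directly from the perturbed point
A_net = nn.Sequential(nn.Linear(nz + nu, 32), nn.ReLU(), nn.Linear(32, nz * nz))
B_net = nn.Sequential(nn.Linear(nz + nu, 32), nn.ReLU(), nn.Linear(32, nz * nu))

def r_llc_amortized(z, u):
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    z_bar, u_bar = z + eps_z, u + eps_u
    zu_bar = torch.cat([z_bar, u_bar], dim=-1)
    A = A_net(zu_bar).view(-1, nz, nz)           # learned coefficients of the local linearization
    B = B_net(zu_bar).view(-1, nz, nu)
    lin = (A @ eps_z.unsqueeze(-1) + B @ eps_u.unsqueeze(-1)).squeeze(-1)
    # gap between f_Z at the perturbed point, corrected by the linear term,
    # and f_Z at the unperturbed point
    return ((f_Z(zu_bar) - lin - f_Z(torch.cat([z, u], dim=-1))) ** 2).mean()

z, u = torch.randn(8, nz), torch.randn(8, nu)
print(r_llc_amortized(z, u))
```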
This leads to a new amortized curvature loss RLLC-Amor(P̂ , A,B) = E ∼N (0,δI)[‖fZ(z̄, ū)− (A(z̄, ū) z +B(z̄, ū) u − fZ(z, u))‖22]. (9) where A and B are function approximators to be optimized. Intuitively, the amortized curvature loss seeks—for any given (z, u)—to find the best choice of linear approximation induced by A(z, u) and B(z, u) such that the behavior of Fµ in the neighborhood of (z, u) is approximately linear. 5 RELATION TO PREVIOUS EMBED-TO-CONTROL APPROACHES In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC’s use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employed “locally-linear dynamics” of the form z ′ = A(z̄, ū)z +B(z̄, ū)u+ c(z̄, ū) where z̄ and ū are auxiliary random variables meant to be perturbations of z and u. When contrasted with (9), it is clear that neither A and B in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements prediction and curvature together. This makes controlling the stability of training much more difficult. Not only does PCC explicitly separate these two components, we are also the first to explicitly demonstrate theoretically and empirically that the curvature loss is important for iLQR. Furthermore, RCE does not incorporate PCC’s consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, the sole reliance on minimizing the prediction loss will result in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) versus how it is used at test-time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrapancy between how the model is trained versus how it is used at test-time for planning. Interestingly, E2C does include a regularization term that is akin to PCC’s consistency loss. However, as noted by the authors of RCE, E2C’s maximization of pair-marginal log-likelihoods of (xt, xt+1) as opposed to the conditional likelihood of xt+1 given xt means that E2C does not properly minimize the prediction loss prescribed by the PCC framework. 6 EXPERIMENTS In this section, we compare the performance of PCC with two model-based control algorithm baselines: RCE7 (Banijamali et al., 2018) and E2C (Watter et al., 2015), as well as running a thorough ablation study on various components of PCC. The experiments are based on the following continuous control benchmark domains (see Appendix D for more descriptions): (i) Planar System, (ii) Inverted Pendulum, (iii) Cartpole, (iv) 3-link manipulator, and (v) TORCS simulator8 (Wymann et al., 2000). 
To generate our training and test sets, each consists of triples (xt, ut, xt+1), we: (1) sample an underlying state st and generate its corresponding observation xt, (2) sample an action ut, and (3) obtain the next state st+1 according to the state transition dynamics, add it a zero-mean Gaussian noise with variance σ2Ins , and generate corresponding observation xt+1.To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (st, ut) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models’ performance under various degree of noise. Each task has underlying start and goal states that are unobservable to the algorithms, instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely, c̄(zt, ut) = (zt − zgoal)>Q(zt − zgoal) + u>t Rut, and c̄(zT ) = (zT − zgoal)>Q(zT − zgoal), where zgoal is obtained by encoding the goal observation, and Q = κ · Inz , R = Inu9. Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region10. A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following 2-step pipeline. For each control task and algorithm, we (1) train 10 models 7For the RCE implementation, we directly optimize the ELBO loss in Equation (16) of the paper. We also tried the approach reported in the paper on increasing the weights of the two middle terms and then annealing them to 1. However, in practice this method is sensitive to annealing schedule and has convergence issues. 8See a control demo on the TORCS simulator at https://youtu.be/GBrgALRZ2fw 9According to the definition of latent cost c̄(z, u) = D ◦ c(z, u), its quadratic approximation is given by c̄(z, u) ≈ [ z − zgoal u ]> [∇z ∇u ] D◦c|z=zgoal,u=0 + 1 2 [ z − zgoal u ]> [∇2zz ∇2zu ∇2uz ∇2uu ] D◦c|z=zgoal,u=0 [ z − zgoal u ] . Yet for simplicity, we choose the same latent cost as in RCE and E2C with fixed, tunable matrices Q and R. 10Another possible metric is the average distance to goal, which has a similar behavior. independently, and (2) solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of 10× 10 = 100 control tasks). We report statistics averaged over all the tasks (in addition, we report the best performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines where the best model needs to be cherry picked, and training variance was not reported. Results Table 1 shows how PCC outperforms the baseline algorithms in the noiseless dynamics case by comparing means and standard deviations of the means on the different control tasks (for the case of added noise to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different than that of the best model, which justifies our rationale behind using the reproducible evaluation pipeline and avoid cherry-picking when reporting. 
Figure 2 depicts 2 instances (randomly chosen from the 10 trained models) of the learned latent space representations on the noiseless dynamics of Planar and Inverted Pendulum tasks for PCC, RCE, and E2C models (additional representations can be found in Appendix E.2). Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC has a more interpretable representation of both Planar and Inverted Pendulum Systems than other baselines for both the noiseless dynamics case and the noisy case. Finally, in terms of computation, PCC demonstrates faster training with 64% improvement over RCE, and 2% improvement over E2C.11 Ablation Analysis On top of comparing the performance of PCC to the baselines, in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when consistency loss is omitted, the control performance degrades. This corroborates with the theoretical results in Section 3.2, which indicates the relationship of the consistency loss and the estimation error between the next-latent dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between control objective function and the model training loss is widened, due to the accumulation of state estimation error. The control performance also decreases when one removes the curvature loss. This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This also confirms with the analysis in Section 3.3 about sub-optimality performance and curvature of latent dynamics. Finally, we observe that the performance of models trained without amortized curvature loss are slightly better than with their amortized counterpart, however, since the amortized curvature loss does not require computing gradient of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-up of 6%, 9%, and 15% for Planar System, Inverted Pendulum, and Cartpole, respectively). 7 CONCLUSION In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between latent transition and 11Comparison jobs were deployed on the Planar system using Nvidia TITAN Xp GPU. the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, the low-curvature dynamics is desirable. All three elements of our PCC models are critical to the stability of model training and the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will exert different requirement for the learned dynamics. A future direction is to identify and investigate the additional bias for learning an effective embedding and latent dynamics for other type of model-based control and planning methods. 
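The latent-space controller used in the experiments is summarized in Appendix B.1; as a bridge to that appendix, the following is a minimal sketch of the receding-horizon loop it describes. Here encode, ilqr_solve, and env.step are hypothetical stand-ins for the learned encoder, the iLQR solver, and the environment; the toy instantiation at the bottom only exercises the control flow and is not an actual iLQR solver.

```python
import numpy as np

def control_episode(env, encode, ilqr_solve, x_start, x_goal,
                    horizon=20, plan_length=5, nu=1):
    """Receding-horizon control in the latent space (cf. Appendix B.1).

    encode(x)                     -> latent state z
    ilqr_solve(z, z_goal, U_init) -> improved action sequence of length plan_length
    env.step(u)                   -> next observation after applying action u
    """
    z_goal = encode(x_goal)
    x = x_start
    U = [np.zeros(nu) for _ in range(plan_length)]        # initial plan
    for _ in range(horizon):
        z = encode(x)                                      # re-encode the latest observation
        U = ilqr_solve(z, z_goal, U)                       # re-plan over a fixed window
        x = env.step(U[0])                                 # apply only the first planned action
        U = U[1:] + [np.zeros(nu)]                         # warm-start the next planning step
    return x

# toy instantiation, only to demonstrate the closed-loop structure
class ToyEnv:
    def __init__(self):
        self.x = np.zeros(2)
    def step(self, u):
        self.x = 0.9 * self.x + 0.1 * u                    # simple stable linear "plant"
        return self.x.copy()

identity_encoder = lambda x: x
nudge_solver = lambda z, z_goal, U: [0.5 * (z_goal - z)[:1] for _ in U]  # stand-in for iLQR
x_final = control_episode(ToyEnv(), identity_encoder, nudge_solver,
                          x_start=np.zeros(2), x_goal=np.ones(2))
print(x_final)
```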
A TECHNICAL PROOFS OF SECTION 3 A.1 PROOF OF LEMMA 1 Following analogous derivations of Lemma 11 in Petrik et al. (2016) (for the case of finite-horizon MDPs), for the case of finite-horizon MDPs, one has the following chain of inequalities for any given control sequence {ut}T−1t=0 and initial observation x0: |L(U, P̂ , x0)− L(U,P, x0)| = ∣∣∣∣∣E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) | P̂ , x0 ] − E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) |P, x0 ]∣∣∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] , where DTV is the total variation distance of two distributions. The first inequality is based on the result of the above lemma, the second inequality is based on Pinsker’s inequality (Ordentlich & Weinberger, 2005), and the third inequality is based on Jensen’s inequality (Boyd & Vandenberghe, 2004) of √ (·) function. Now consider the expected cumulative KL cost: E [ 1 T ∑T−1 t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] with respect to some arbitrary control action sequence {ut}T−1t=0 . Notice that this arbitrary action sequence can always be expressed in form of deterministic policy ut = π′(xt, t) with some nonstationary state-action mapping π′. Therefore, this KL cost can be written as: E [ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, π, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut))dπ′(ut|xt, t) | P, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut)) · dπ′(ut|xt, t) dU(ut) · dU(ut) | P, x0 ] ≤U · Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] , (10) where the expectation is taken over the state-action occupation measure 1T ∑T−1 t=0 P(xt = x, ut = u|x0, U) of the finite-horizon problem that is induced by data-sampling policy U . The last inequality is due to change of measures in policy, and the last inequality is due to the facts that (i) π is a deterministic policy, (ii) dU(ut) is a sampling policy with lebesgue measure 1/U over all control actions, (iii) the following bounds for importance sampling factor holds: ∣∣∣dπ′(ut|xt,t)dU(ut) ∣∣∣ ≤ U . To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P̂ and control sequence U : |L(U, P̂ , x0)− L(U,P, x0)| ≤ √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] . (11) For the second part of the proof, consider the solution of (SOC3), namely (U∗3 , P̂ ∗ 3 ). Using the optimality condition of this problem one obtains the following inequality: L(U∗3 , P̂ ∗ 3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≤L(U∗1 , P̂ ∗3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] . (12) Using the results in (11) and (12), one can then show the following chain of inequalities: L(U∗1 , P, c, x0) ≥L(U∗1 , P̂ ∗3 , c, x0)− √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] =L(U∗1 , P̂ ∗ 3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P̂ ∗3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P, c, x0)− 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] , (13) where U∗1 is the optimizer of (SOC1) and (U ∗ 3 , P̂ ∗ 3 ) is the optimizer of (SOC3). 
Therefore by letting λ3 = √ 2T 2 · cmaxU and R3(P̂ ) = Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] and by combining all of the above arguments, the proof of the above lemma is completed. A.2 PROOF OF LEMMA 2 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 , and any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P̂ ,x0] = ∫ x0:t−1∈X t t−1∏ k=1 dP̂ (xk|xk−1, uk−1)·∫ zt∈Z ∫ z′t−1∈Z dE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1) ∫ xt∈X dD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) . Now consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] = ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1∈Z dF (zt−1|z′t−2, ut−2)( c̄(zt−1, ut−1) + ∫ xt−1∈X dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) ) ≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ zt−2∈Z dE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) ) + cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) By continuing the above expansion, one can show that∣∣∣E [L(U,F, c, z0) | E, x0]− L(U, P̂ , c, x0)∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] , where the last inequality is based on Jensen’s inequality of √ (·) function. For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2): L(U∗3 , P̂ ∗ 3 , c, x0) ≥E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0]− √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] =E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥L(U∗2 , P̂ ∗2 , c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2 · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] ︸ ︷︷ ︸ R′′2 (P̂ ∗ 2 ) , (14) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A.3 PROOF OF COROLLARY 1 To start with, the total-variation distance DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) can be bounded by the following inequality using triangle inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV (∫ x′∈X dP (x′|x, u)E(·|x′)|| ∫ x′∈X dP̂ (x′|x, u)E(·|x′) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV ( P (·|x, u)||P̂ (·|x, u) ) where the second inequality follows from the convexity property of the DTV-norm (w.r.t. convex weights E(·|x′), ∀x′). Then by Pinsker’s inequality, one obtains the following inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) + √ 2KL ( P (·|x, u)||P̂ (·|x, u) ) . 
(15) We now analyze the batch consistency regularizer: R′′2 (P̂ ) = Ex,u,x′ [KL(E(·|x′)||(F ◦ E)(·|x, u))] and connect it with the inequality in (15). Using Jensen’s inequality of convex function x log x, for any observation-action pair (x, u) sampled from Uτ , one can show that∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) ≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′)) . (16) Therefore, for any observation-control pair (x, u) the following inequality holds: KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) = ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) − ∫ x′∈X dP (x′|x, u) log (g(x′|x, u)) ≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′))− ∫ x′∈X dP (x′|x, u) log (g(x′|x, u)) =KL(E(·|x′)||(F ◦ E)(·|x, u)) (17) By taking expectation over (x, u) one can show that Ex,u [ KL( ∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u)) ] is the lower bound of the batch consistency regularizer. Therefore, the above arguments imply that DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2 √ R′′2 (P̂ ) +R3(P̂ ). (18) The inequality is based on the property that √ a+ √ b ≤ √ 2 √ a+ b. Equipped with the above additional results, the rest of the proof on the performance bound follows directly from the results from Lemma 2, in which here we further upper-bound DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) , when P̂ = P̂ ∗2 . A.4 PROOF OF LEMMA 3 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 and for any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P, x0] = cmax · ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−1, ut−1)||P̂ (·|xt−1, ut−1)) + ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1) ∫ zt∈Z ∫ z′t−1∈Z dE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1) ∫ xt∈X dD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) . Now consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P, x0] = ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1 dF (zt−1|z′t−2, ut−2)·( c̄(zt−1, ut−1) + ∫ xt−1 dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) ) + cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j)) ≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ zt−2∈Z dE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) ) + cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j)) + cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) . Continuing the above expansion, one can show that |E [L(U,F, c, z0) | E, x0]− L(U,P, x0)| ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) +DTV( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + √ KL( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 3KL(P (·|xt, ut)||P̂ (·|xt, ut)) + 2KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ] , where the last inequality is based on the fact that √ a+ √ b ≤ √ 2 √ a+ b and is based on Jensen’s inequality of √ (·) function. 
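For completeness, the elementary inequality √a + √b ≤ √2 · √(a + b) for a, b ≥ 0 used in the last step follows from (√a − √b)² ≥ 0: this gives 2√(ab) ≤ a + b, hence (√a + √b)² = a + b + 2√(ab) ≤ 2(a + b), and taking square roots yields the claim.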
For the second part of the proof, following similar arguments from Lemma 2, one can show the following inequality for the solution of (SOC3) and (SOC2): L(U∗1 , P, c, x0) ≥E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0]− √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) =E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) − 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) − 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) ≥L(U∗2 , P, c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2 · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ), (19) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A.5 PROOF OF LEMMA 4 A Recap of the Result: Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action pair {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t}T−1t=0 is the optimal trajectory of problem (SOC2). Then with probability 1 − η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0) − 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) . Discussions of the effect of δ on LLC Performance: The result of this lemma shows that when the nominal state and actions are δ-close to the optimal trajectory of (SOC2), i.e., at each time step (zt,ut) is a sample from the Gaussian distribution centered at (z∗2,t, u ∗ 2,t) with standard deviation δ, then one can obtain a performance bound of LLC algorithm that is in terms of the regularization loss RLLC. To quantify the above condition, one can use Mahalanobis distance (De Maesschalck et al., 2000) to measure the distance of (zt,ut) to distribution N ((z∗2,t, u∗2,t), δ2I), i.e., we want to check for the condition: ‖(zt,ut)− (z∗2,t, u∗2,t)‖ δ ≤ ′, ∀t, for any arbitrary error tolerance ′ > 0. While we cannot verify the condition without knowing the optimal trajectory {(z∗2,t, u∗2,t)}T−1t=0 , the above condition still offers some insights in choosing the parameter δ based on the trade-off of designing nominal trajectory {(zt,ut)}T−1t=0 and optimizing RLLC. When δ is large, the low-curvature regularization imposed by the RLLC regularizer will cover a large portion of the state-action space. In the extreme case when δ →∞, RLLC can be viewed as a regularizer that enforces global linearity. Here the trade-off is that the loss RLLC is generally higher, which in turn degrades the performance bound of the LLC control algorithm in Lemma 4. On the other hand, when δ is small the low-curvature regularization in RLLC only covers a smaller region of the latent state-action space, and thus the loss associated with this term is generally lower (which provides a tighter performance bound in Lemma 4). However the performance result will only hold when (zt,ut) happens to be close to (z∗2,t, u ∗ 2,t) at each time-step t ∈ {0, . . . , T − 1}. Proof: For simplicity, we will focus on analyzing the noiseless case when the dynamics is deterministic (i.e., Σw = 0). Extending the following analysis for the case of non-deterministic dynamics should be straight-forward. First, consider any arbitrary latent state-action pair (z, u), such that the corresponding nominal state-action pair (z,u) is constructed by z = z− δz, u = u− δu, where (δz, δu) is sampled from the Gaussian distribution N (0, δ2I). 
(The random vectors are denoted as (δz′, δu′)) By the two-tailed Bernstein’s inequality (Murphy, 2012), for any arbitrarily given η ∈ (0, 1] one has the following inequality with probability 1− η: |fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| ≤ √ 2 log(2/η) √ V(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)] + ∣∣E(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)]∣∣ ≤(1 + √ 2 log(2/η)) ( E(δz′,δu′)∼N (0,δ2I) [ ‖fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)‖2 ]︸ ︷︷ ︸ RLLC(P̂ |z,u) )1/2 . The second inequality is due to the basic fact that variance is less than second-order moment of a random variable. On the other hand, at each time step t ∈ {0, . . . , T −1} by the Lipschitz property of the immediate cost, the value function Vt(z) = minUt:T−1 E [ cT (zT ) + ∑T−1 τ=t cτ (zτ , uτ ) | zt = z ] is also Lipchitz with constant (T − t+ 1)clip. Using the Lipschitz property of Vt+1, for any (z, u) and (δz, δu), such that (z,u) = (z − δz, u− δu), one has the following property: |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z, u))| ≤(T − t)clip · |fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| , (20) Therefore, at any arbitrary state-action pair (z̃, ũ), for z = z − δz, and u = ũ− δu with Gaussian sample (δz, δu) ∼ N (0, δ2I), the following inequality on the value function holds w.p. 1− η: Vt+1(fZ(z̃, ũ)) ≥ Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ), which further implies ct(z̃, ũ) + Vt+1(fZ(z̃, ũ)) ≥ct(z̃, ũ) + Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ), Now let ũ∗ be the optimal control w.r.t. Bellman operator Tt[Vt+1](z̃) at any latent state z̃. Based on the assumption of this lemma, at each state z̃ the nominal latent state-action pair (z,u) is generated by perturbing (z̃, ũ∗) with Gaussian sample (δz, δu) ∼ N (0, δ2I) that is in form of z = z̃ − δz, u = ũ− δu. Then by the above arguments the following chain of inequalities holds w.p. 1− η: Tt[Vt+1](z̃) := min ũ ct(z̃, ũ) + Vt+1(fZ(z̃, ũ)) =ct(z̃, ũ ∗) + Vt+1(fZ(z̃, ũ ∗)) ≥ct(z̃, ũ∗) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z̃, ũ∗))| ≥ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u) ≥min δu ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u) (21) Recall the LLC loss function is given by RLLC(P̂ ) = Ex,u [ E [ RLLC(P̂ |z, u) | z ] | E ] . Also consider the Bellman operator w.r.t. latent SOC: Tt[V ](z) = minu ct(z, u) + V (fZ(z, u)), and the Bellman operator w.r.t. LLC: Tt,LLC[V ](z) = minδu ct(z, δu+ u) + V (fZ(z,u) +A(z,u)δz + B(z,u)δu). Utilizing these definitions, the inequality in (21) can be further expressed as Tt[Vt+1](z̃) ≥Tt,LLC[Vt+1](z̃)− (T − t)clipcmax(1 + √ 2 log(2/η)) √ UX √ RLLC(P̂ ), (22) This inequality is due to the fact that all latent states are generated by the encoding observations, i.e., z ∼ E(·|x), and thus by following analogous arguments as in the proof of Lemma 1, one has max z,u RLLC(P̂ |z, u) ≤ UXEx,u [ E [ RLLC(P̂ |z, u) | z ] | E ] = UXRLLC(P̂ ). Therefore, based on the dynamic programming result that bounds the difference of value function w.r.t. different Bellman operators in finite-horizon problems (for example see Theorem 1.3 in Bertsekas (1995)), the above inequality implies the following bound in the value function, w.p. 
1− η: min U,P̂ L(U,F, c, z0) ≥L(U∗LLC, P̂ ∗LLC, c, z0)− T−1∑ t=1 (T − t) · clipcmax · T · (1 + √ 2 log(2T/η)) · √ UX · √ RLLC(P̂ ∗LLC) ≥L(U∗LLC, P̂ ∗LLC, c, z0)− T 2 · clipcmax · (1 + √ 2 log(2T/η)) · √ UX · √ RLLC(P̂ ∗LLC). (23) Notice that here we replace η in the result in (22) with η/T . In order to prove (23), we utilize (22) for each t ∈ {0, . . . , T − 1}, and this replacement is the result of applying the Union Probability bound (Murphy, 2012) (to ensure (23) holds with probability 1− η). Therefore the proof is completed by combining the above result with that in Lemma 3. B THE LATENT SPACE ILQR ALGORITHM B.1 PLANNING IN THE LATENT SPACE (HIGH-LEVEL DESCRIPTION) We follow the same control scheme as in Banijamali et al. (2018). Namely, we use the iLQR (Li & Todorov, 2004) solver to plan in the latent space. Given a start observation xstart and a goal observation xgoal, corresponding to underlying states {sstart, sgoal}, we encode the observations to retrieve zstart and zgoal. Then, the procedure goes as follows: we initialize a random trajectory (sequence of actions), feed it to the iLQR solver and apply the first action from the trajectory the solver outputs. We observe the next observation returned from the system (closed-loop control), and feed the updated trajectory to the iLQR solver. This procedure continues until the it reaches the end of the problem horizon. We use a receding window approach, where at every planning step the solver only optimizes for a fixed length of actions sequence, independent of the problem horizon. B.2 DETAILS ABOUT ILQR IN THE LATENT SPACE Consider the latent state SOC problem min U E [ cT (zT ) + T−1∑ t=0 ct(zt, ut) | z0 ] . At each time instance t ∈ {0, . . . , T} the value function of this problem is given by VT (z) = cT (z), Vt(z) = min Ut:T−1
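The receding-horizon scheme described in B.1 can be summarized in a short sketch. This is a minimal illustration under assumed interfaces, not the implementation used in the paper: encoder, ilqr_solve, and env are hypothetical stand-ins for the learned encoder E, the iLQR solver, and the closed-loop system.

import numpy as np

def latent_receding_horizon_control(env, encoder, ilqr_solve, x_start, x_goal,
                                    horizon, plan_length, action_dim):
    """Closed-loop control in the latent space with a receding window.

    encoder(x)                -> latent code z (hypothetical learned E)
    ilqr_solve(z0, z_goal, U) -> improved action sequence (hypothetical iLQR wrapper)
    env.step(u)               -> next observation returned by the real system
    """
    z_goal = encoder(x_goal)                      # encode the goal observation
    x = x_start
    U = np.random.randn(plan_length, action_dim)  # random initial action sequence
    trajectory = [x]
    for t in range(horizon):
        z = encoder(x)                            # encode the current observation
        U = ilqr_solve(z, z_goal, U)              # re-plan over the fixed-length window
        x = env.step(U[0])                        # apply only the first planned action
        trajectory.append(x)
        U = np.vstack([U[1:], np.zeros((1, action_dim))])  # shift the window forward
    return trajectory

Only the first action of each plan is executed; the remaining actions warm-start the next call to the solver, which is what makes this a receding-window scheme whose per-step cost is independent of the problem horizon.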
1. What is the focus of the paper regarding policy learning for dynamic control problems? 2. What are the strengths of the proposed regularization strategy, particularly in its design principles? 3. Do you have any concerns or questions regarding the approach, such as its notation, proof details, or practical implementation? 4. How do the three design principles for regularization support the learning process, and are there any potential issues with their application? 5. What are the challenges in balancing the three parameters in practice, and how can they be addressed?
Review
Review This work proposes a regularization strategy for learning an optimal policy for a dynamic control problem in a latent low-dimensional domain. The work is based on the LCE approach, but with an in-depth analysis of how to choose/design the regularization for the \hat{P} operator, which consists of an encoder, a decoder, and dynamics in the latent space. In particular, the authors argued that three principles (prediction, consistency, and curvature) should be taken into consideration when designing the regularizer of the learning cost function, so that the learned latent domain can serve better for the purpose of optimizing the long-term cost in the ambient domain. The paper is well written and pleasant to read. One possible shortcoming is that the notation is a bit dizzying; it is almost impossible to follow the notation when first reading this paper. The proofs are very lengthy and thus the reviewer did not check them in detail. The reviewer has several questions: 1) Of course SOC2 makes sense. But what if one models the whole problem as an HMM and performs control in the hidden domain of the HMM (where the hidden states can take values in a much smaller alphabet than the observable states): will there be any fundamental difference? Of course learning an HMM is challenging, but approachable. Any comments? 2) The three design principles make sense, but may need more elaboration. For example, it is a bit unclear why f_Z should have low curvature: does it mean that you wish the control problem in the latent domain to be more like a linear dynamical system, so that the LLC algorithm works better? The argument is a bit unclear, since "locally linear" is not a rigorous term; any smooth function is "locally linear". Here, how to measure the difficulty of the latent control problem may need more discussion. Minor: by the way, (5) may contain some typos. 3) In practice, how does one balance the three parameters lambda_p, lambda_c, lambda_cur?
ICLR
Title Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control Abstract Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lowerdimensional latent representation space, estimate the latent dynamics model, then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms? In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should comprise: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control. 1 INTRODUCTION Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge. The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). 
There have been two main approaches to this problem: 1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). This can be later combined with RL for extra fine-tuning of the model and control. In this paper, we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect ∗Equal contribution. Correspondence to nirlevine@google.com the process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics. The second is consistency: given the ability to encode a observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help training with curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation, the importance of regularizing the model to have consistency and low-curvature. 2 PROBLEM FORMULATION We are interested in controlling the non-linear dynamical systems of the form st+1 = fS(st, ut) +w, over the horizon T . In this definition, st ∈ S ⊆ Rns and ut ∈ U ⊆ Rnu are the state and action of the system at time step t ∈ {0, . . . , T − 1}, w is the Gaussian system noise, and fS is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation xt ∈ X ⊆ Rnx of each state st (nx ns). This scenario has application in many real-world problems, such as visual-servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. 
We further assume that the high-dimensional observations x have been selected such that for any arbitrary control sequence U = {ut}T−1t=0 , the observation sequence {xt}Tt=0 is generated by a stationary Markov process, i.e., xt+1 ∼ P (·|xt, ut), ∀t ∈ {0, . . . , T − 1}.1 A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes expected cumulative cost: min U L(U,P, c, x0) := E [ cT (xT ) + T−1∑ t=0 ct(xt, ut) | P, x0 ] , 2 (SOC1) where ct : X ×U → R≥0 is the immediate cost function at time t, cT ∈ R≥0 is the terminal cost, and x0 is the observation at the initial state s0. Note that all immediate costs are defined in the observation space X , and are bounded by cmax > 0 and Lipschitz with constant clip > 0. For example, in visualservoing, (SOC1) can be formulated as a goal tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation xgoal, and the objective is to compute a sequence of optimal open-loop actions U that minimizes the cumulative tracking error E[ ∑ t ‖xt − xgoal‖2 | P, x0]. Since the observations x are high dimensional and the dynamics in the observation space P (·|xt, ut) is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has been recently developed that is based on learning a low-dimensional latent (embedding) space Z ⊆ Rnz (nz nx) and latent state dynamics, and performing optimal control there. This class that we refer to as learning controllable embedding (LCE) throughout the paper, include recently developed algorithms, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). The main idea behind the LCE approach is to learn a triplet, (i) an encoderE : X → P(Z); (ii) a dynamics in the latent space F : Z ×U → P(Z); and (iii) a decoder D : Z → P(X ). These in turn can be thought of as defining a (stochastic) mapping P̂ : X ×U → P(X ) of the form P̂ = D ◦F ◦E. We then wish to solve the SOC in latent space Z: min U,P̂ E [ L(U,F, c, z0) | E, x0 ] + λ2 √ R2(P̂ ), (SOC2) such that the solution of (SOC2), U∗2 , has similar performance to that of (SOC1), U ∗ 1 , i.e., L(U∗1 , P, c, x0) ≈ L(U∗2 , P, c, x0). In (SOC2), z0 is the initial latent state sampled from the encoder E(·|x0); c̄ : Z × U → R≥0 is the latent cost function defined as c̄t(zt, ut) =∫ ct(xt, ut)dD(xt|zt); R2(P̂ ) is a regularizer over the mapping P̂ ; and λ2 is the corresponding 1A method to ensure this Markovian assumption is by buffering observations (Mnih et al., 2013) for a number of time steps. 2See Appendix B.3 for the extension to the closed-loop MDP problem. tion SOC2 under dynamics F , and (c)(red) in equation SOC3 under dynamics P̂ . regularization parameter. We will define R2 and λ2 more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder E. 3 PCC MODEL: A CONTROL PERSPECTIVE As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics P , as shown at the bottom row of Figure 1(a) in (blue). However, because of the difficulties in solving (SOC1), mainly due to the high dimension of observations x, LCE proposes to learn a mapping P̂ by solving (SOC2) that consists of a loss function, whose states evolve under dynamics F (after an initial transition by encoder E), as depicted in Figure 1(b), and a regularization term. 
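As a point of reference for the objectives above, the cumulative cost L(U, P, c, x0) in (SOC1) can be estimated by rolling out a candidate action sequence in the (black-box) system and averaging over the process noise. The sketch below uses the goal-tracking cost from the visual-servoing example; env is a hypothetical simulator exposing the observation dynamics P, and all names are illustrative rather than part of the paper's code.

import numpy as np

def estimate_soc1_cost(env, U, x_goal, num_rollouts=32):
    """Monte Carlo estimate of L(U, P, c, x0) for the tracking cost
    c_t(x_t, u_t) = ||x_t - x_goal||^2, with the terminal cost treated the same way."""
    costs = []
    for _ in range(num_rollouts):
        x = env.reset()                        # returns the initial observation x0
        total = 0.0
        for u in U:                            # open-loop action sequence {u_t}
            total += np.sum((x - x_goal) ** 2)
            x = env.step(u)                    # x_{t+1} ~ P(. | x_t, u_t)
        total += np.sum((x - x_goal) ** 2)     # terminal cost c_T(x_T)
        costs.append(total)
    return float(np.mean(costs))

The latent objective in (SOC2) is estimated in the same way, except that the rollout starts from z0 ~ E(.|x0), evolves under the latent dynamics F, and evaluates the cost through the decoder D.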
The role of the regularizer R2 is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a)(blue) and 1(b)(green). The goal of LCE is to learn P̂ of the particular form P̂ = D ◦ F ◦ E, described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer R2 to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning P̂ , designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space Z . In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space X under dynamics P , and the other one in the latent space Z (after an initial transition from X to Z) under dynamics F . Unlike P and F that only operate in a single space, X and Z , respectively, P̂ can govern the evolution of the system in both X and Z (see Figure 1(c)). Therefore, any recipe to learn P̂ , and as a result the latent space Z , should have at least two terms, to guarantee that the evolution paths resulted from P̂ in X and Z are consistent with those generated by P and F . We derive these two terms, that are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning P̂ in general SOC problems, in Section 3.3, we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) to solve SOC, and add the third term, curvature, to our recipe for learning P̂ . 3.1 PREDICTION OF THE NEXT OBSERVATION Figures 1(a)(blue) and 1(c)(red) show the transition in the observation space under P and P̂ , where xt is the current observation, and xt+1 and x̂t+1 are the next observations under these two dynamics, respectively. Instead of learning a P̂ with minimum mismatch with P in terms of some distribution norm, we propose to learn P̂ by solving the following SOC: min U,P̂ L(U, P̂ , c, x0) + λ3 √ R3(P̂ ), (SOC3) whose loss function is the same as the one in (SOC1), with the true dynamics replaced by P̂ . In Lemma 1 (see Appendix A.1, for proof), we show how to set the regularization term R3 in (SOC3), such that the control sequence resulted from solving (SOC3), U∗3 , has similar performance to the solution of (SOC1), U∗1 , i.e., L(U ∗ 1 , P, c, x0) ≈ L(U∗3 , P, c, x0). Lemma 1. Let U∗1 be a solution to (SOC1) and (U∗3 , P̂ ∗3 ) be a solution to (SOC3) with R3(P̂ ) = Ex,u [ DKL ( P (·|x, u)||P̂ (·|x, u) )] and λ3 = √ 2U · T 2cmax. (1) Then, we have L(U∗1 , P, c, x0) ≥ L(U∗3 , P, c, x0)− 2λ3 √ R3(P̂ ∗3 ). In Eq. 1, the expectation is over the state-action stationary distribution of the policy used to generate the training samples (uniformly random policy in this work), and U is the Lebesgue measure of U .3 3In the case when sampling policy is non-uniform and has no measure-zero set, 1/U is its minimum measure. 3.2 CONSISTENCY IN PREDICTION OF THE NEXT LATENT STATE In Section 3.1, we provided a recipe for learning P̂ (in form of D ◦ F ◦ E) by introducing an intermediate (SOC3) that evolves in the observation space X according to dynamics P̂ . In this section we first connect (SOC2) that operates in Z with (SOC3) that operates in X . 
For simplicity and without loss generality, assume the initial cost c0(x, u) is zero.4 Lemma 2 (see Appendix A.2, for proof) suggests how we shall set the regularizer in (SOC2), such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models. Lemma 2. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′2(P̂ ) = Ex,u [ DKL (( E ◦ P̂ ) (·|x, u)|| ( F ◦ E ) (·|x, u) )] and λ2 = √ 2U · T 2cmax. (2) Then, we have L(U∗3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0)− 2λ2 √ R′2(P̂ ∗ 2 ) . Similar to Lemma 1, in Eq. 2, the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, ( E ◦ P̂ ) (z′|x, u) = ∫ x′ E(z′|x′)dP̂ (x′|x, u) and ( F ◦E ) (z′|x, u) = ∫ z F (z′|z, u)dE(z|x) are the probability over the next latent state z′, given the current observation x and action u, in (SOC2) and (SOC3) (see the paths xt → zt → z̃t+1 and xt → zt → z̃t+1 → x̂t+1 → ẑt+1 in Figures 1(b)(green) and 1(c)(red)). Therefore R′2(P̂ ) can be interpreted as the measure of discrepancy between these models, which we term as consistency loss. Although Lemma 2 provides a recipe to learn P̂ by solving (SOC2) with the regularizer (2), unfortunately this regularizer cannot be computed from the data – that is of the form (xt, ut, xt+1) – because the first term in the DKL requires marginalizing over current and next latent states (zt and z̃t+1 in Figure 1(c)). To address this issue, we propose to use the (computable) regularizer R′′2 (P̂ ) = Ex,u,x′ [ DKL ( E(·|x′)|| ( F ◦ E ) (·|x, u) )] , (3) in which the expectation is over (x, u, x′) sampled from the training data. Corollary 1 (see Appendix A.3, for proof) bounds the performance loss resulted from using R′′2 (P̂ ) instead of R ′ 2(P̂ ), and shows that it could be still a reasonable choice. Corollary 1. Let (U∗3 , P̂ ∗3 ) be a solution to (SOC3) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R′′2 (P̂ ) and and λ2 defined by (3) and (2). Then, we have L(U ∗ 3 , P̂ ∗ 3 , c, x0) ≥ L(U∗2 , P̂ ∗2 , c, x0) − 2λ2 √ 2R′′2 (P̂ ∗ 2 ) + 2R3(P̂ ∗ 2 ) . Lemma 1 suggests a regularizer R3 to connect the solutions of (SOC1) and (SOC3). Similarly, Corollary 1 shows that regularizer R′′2 in (3) establishes a connection between the solutions of (SOC3) and (SOC2). Putting these results together, we achieve our goal in Lemma 3 (see Appendix A.4, for proof) to design a regularizer for (SOC2), such that its solution performs similarly to that of (SOC1). Lemma 3. Let U∗1 be a solution to (SOC1) and (U∗2 , P̂ ∗2 ) be a solution to (SOC2) with R2(P̂ ) = 3R3(P̂ ) + 2R ′′ 2 (P̂ ) and λ2 = 2 √ U · T 2cmax, (4) where R3(P̂ ) and R′′2 (P̂ ) are defined by (1) and (3). Then, we have L(U∗1 , P, c, x0) ≥ L(U∗2 , P, c, x0)− 2λ2 √ R2(P̂ ∗2 ) . 3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION In Sections 3.1 and 3.2, we derived a loss function to learn the latent space Z . This loss function, that was motivated by the general SOC perspective, consists of two terms to enforce the latent space to not only predict the next observations accurately, but to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR), for solving (SOC2), and show how this choice adds a third term, that corresponds to curvature, to the regularizer of (SOC2), and as a result, to the loss function of our PCC model. 
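Before turning to the curvature term, it is worth making the computable consistency regularizer R''_2 in (3) concrete. With a Gaussian encoder and Gaussian latent dynamics, the marginal (F o E)(.|x, u) can be approximated by conditioning on a single reparameterized sample z ~ E(.|x), after which the KL divergence is available in closed form. The sketch below is illustrative only; encoder and dynamics are assumed interfaces returning the mean and standard deviation of factorized Gaussians.

import torch
from torch.distributions import Normal, kl_divergence

def consistency_loss(encoder, dynamics, x, u, x_next):
    """Single-sample estimate of R''_2 = E[ KL( E(.|x') || (F o E)(.|x, u) ) ].

    encoder(x)     -> (mu, std) of the Gaussian E(z | x)
    dynamics(z, u) -> (mu, std) of the Gaussian F(z' | z, u)
    """
    mu_z, std_z = encoder(x)
    z = mu_z + std_z * torch.randn_like(std_z)   # reparameterized sample z ~ E(.|x)
    mu_pred, std_pred = dynamics(z, u)           # F(. | z, u), standing in for (F o E)(.|x, u)
    mu_enc, std_enc = encoder(x_next)            # E(. | x')
    kl = kl_divergence(Normal(mu_enc, std_enc), Normal(mu_pred, std_pred))
    return kl.sum(dim=-1).mean()                 # sum over latent dimensions, average over batch

Section 4.1 handles this quantity more carefully through a variational bound, but the structure is the same: encode the next observation, push the current observation through the latent dynamics, and penalize the divergence between the two resulting distributions over the next latent state.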
The main idea in LLC algorithms is to iteratively compute an action sequence to improve the current trajectory, by linearizing the dynamics around this trajectory, and use this action sequence to generate 4With non-zero initial cost, similar results can be derived by having an additional consistency term on x0. the next trajectory (see Appendix B for more details about LLC and iLQR). This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics P̂ and assume that it is not only of the form P̂ = D ◦ F ◦ E, but F , the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function fZ(z, u). Assume F (z, u) = fZ(z, u) + w, where w is a Gaussian noise. Consider the following SOC problem: min U,P̂ E [L(U,F, c, z0) | E, x0] + λLLC √ R2(P̂ ) +RLLC(P̂ ) , (SOC-LLC) where R2 is defined by (4); U is optimized by a LLC algorithm, such as iLQR; RLLC(P̂ ) is given by, RLLC(P̂ ) = Ex,u [ E [ fZ(z + z, u+ u)− fZ(z, u)− (∇zfZ(z, u) · z +∇ufZ(z, u) · u)‖22 ] | E ] , (5) where = ( z, u)> ∼ N (0, δ2I), δ > 0 is a tunable parameter that characterizes the “diameter" of latent state-action space in which the latent dynamics model has low curvature. λLLC = 2 √ 2T 2cmax √ U max ( clip(1 + √ 2 log(2T/η)) √ X/2, 1 ) , where 1/X is the minimum non-zero measure of the sample distribution w.r.t. X , and 1− η ∈ [0, 1) is a probability threshold. Lemma 4 (see Appendix A.5, for proof and discussions on how δ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1, and thus, (SOC-LLC) is a reasonable optimization problem to learn P̂ , and also the latent space Z . Lemma 4. Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action trajectory {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t)}T−1t=0 is the optimal trajectory of (SOC2). Then with proba- bility 1− η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0)− 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) . In practice, instead of solving (SOC-LLC) jointly for U and P̂ , we treat (SOC-LLC) as a bi-level optimization problem, first, solve the inner optimization problem for P̂ , i.e., P̂ ∗ ∈ arg min P̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ), (PCC-LOSS) where R′3(P̂ ) = −Ex,u,x′ [log P̂ (x′|x, u)] is the negative log-likelihood,5 and then, solve the outer optimization problem, minU L(U, F̂ ∗, c̄, z0), where P̂ ∗ = D̂∗◦F̂ ∗◦Ê∗, to obtain the optimal control sequence U∗. Solving (SOC-LLC) this way is an approximation, in general, but is justified, when the regularization parameter λLLC is large. Note that we leave the regularization parameters (λp, λc, λcur) as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning P̂ ∗ in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over fZ , through the regularizers R′3, R ′′ 2 , and RLLC, respectively, we refer to it as the prediction-consistency-curvature (PCC) loss. 4 INSTANTIATING THE PCC MODEL IN PRACTICE The PCC-Model objective in (PCC-LOSS) introduces the optimization problem minP̂ λpR ′ 3(P̂ ) + λcR ′′ 2 (P̂ ) + λcurRLLC(P̂ ). 
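Before describing the practical instantiation, the curvature penalty in (5) can be made concrete. It measures the first-order Taylor remainder of the latent mean dynamics f_Z around sampled points, and the required Jacobian-vector products can be obtained with automatic differentiation. The following is only an illustrative sketch under these assumptions, not the implementation used in the paper.

import torch
from torch.autograd.functional import jvp

def curvature_loss(f_z, z, u, delta=0.1):
    """Sketch of R_LLC in (5):
    || f_Z(z + eps_z, u + eps_u) - f_Z(z, u) - (J_z eps_z + J_u eps_u) ||^2,
    with (eps_z, eps_u) ~ N(0, delta^2 I), averaged over the batch.
    f_z is assumed to be a differentiable map from (z, u) to the next latent mean."""
    eps_z = delta * torch.randn_like(z)
    eps_u = delta * torch.randn_like(u)
    # Directional derivative of f_Z at (z, u) along the sampled perturbation.
    _, taylor = jvp(f_z, (z, u), (eps_z, eps_u), create_graph=True)
    residual = f_z(z + eps_z, u + eps_u) - f_z(z, u) - taylor
    return residual.pow(2).sum(dim=-1).mean()

Section 4.2 evaluates the expansion at the perturbed point instead and amortizes the Jacobians with learned matrices A and B, which avoids differentiating through the jvp call; the full training objective is then the weighted sum lambda_p R'_3 + lambda_c R''_2 + lambda_cur R_LLC of (PCC-LOSS).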
To instantiate this model in practice, we describe P̂ = D ◦ F ◦ E as a latent variable model that factorizes as P̂ (xt+1, zt, ẑt+1 | xt, ut) = P̂ (zt | xt)P̂ (ẑt+1 | zt, ut)P̂ (xt+1 | ẑt+1). In this section, we propose a variational approximation to the intractable negative log-likelihood R′3 and batch-consistency R ′′ 2 losses, and an efficient approximation of the curvature loss RLLC. 4.1 VARIATIONAL PCC The negative log-likelihood 6 R′3 admits a variational bound via Jensen’s Inequality, R′3(P̂ ) = − log P̂ (xt+1 | xt, ut) = − logEQ(zt,ẑt+1|xt,ut,xt+1) [ P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] ≤ −EQ(zt,ẑt+1|xt,ut,xt+1) [ log P̂ (xt+1, zt, ẑt+1 | xt, ut) Q(zt, ẑt+1 | xt, ut, xt+1) ] = R′3,NLE-Bound(P̂ , Q), (6) 5Since R3(P̂ ) is the sum of R′3(P̂ ) and the entropy of P , we replaced it with R′3(P̂ ) in (PCC-LOSS). 6For notation convenience, we drop the expectation over the empirical data that appears in various loss terms. which holds for any choice of recognition model Q. For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as Q(zt, ẑt+1|xt, xt+1, ut) = Q(ẑt+1|xt+1)Q(zt|ẑt+1, xt, ut). The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in (6) via Monte Carlo simulation. To reduce the variance of the estimator, we decompose R′3,NLE-Bound further into − EQ(ẑt+1|xt+1) [ log P̂ (xt+1|ẑt+1) ] + EQ(ẑt+1|xt+1) [ DKL ( Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt) )] −H (Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] , and note that the Entropy H(·) and Kullback-Leibler DKL(·‖·) terms are analytically tractable when Q is restricted to a suitably chosen variational family (i.e. in our experiments, Q(ẑt+1 | xt+1) and Q(zt | ẑt+1, xt, ut) are factorized Gaussians). The derivation is provided in Appendix C.1. Interestingly, the consistency loss R′′2 admits a similar treatment. We note that the consistency loss seeks to match the distribution of ẑt+1 | xt, ut with zt+1 | xt+1, which we represent below as R′′2 (P̂ ) = DKL ( P̂ (zt+1 | xt+1)‖P̂ (ẑt+1 | xt, ut) ) = −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 [ log P̂ (ẑt+1 | xt, ut) ] . Here, P̂ (ẑt+1 | xt, ut) is intractable due to the marginalization of zt. We employ the same procedure as in (6) to construct a tractable variational bound R′′2 (P̂ ) ≤ −H(P̂ (zt+1 | xt+1))− EP̂ (zt+1|xt+1) ẑt+1=zt+1 EQ(zt|ẑt+1,xt,ut) [ log P̂ (zt, ẑt+1 | xt, ut) Q(zt | ẑt+1, xt, ut) ] . We now make the further simplifying assumption that Q(ẑt+1 | xt+1) = P̂ (ẑt+1 | xt+1). This allows us to rewrite the expression as R′′2 (P̂ ) ≤ −H(Q(ẑt+1 | xt+1))− E Q(ẑt+1|xt+1) Q(zt|ẑt+1,xt,ut) [ log P̂ (ẑt+1 | zt, ut) ] + EQ(ẑt+1|xt+1) [ DKL(Q(zt | ẑt+1, xt, ut)‖P̂ (zt | xt)) ] = R′′2,Bound(P̂ , Q), (7) which is a subset of the terms in (6). See Appendix C.2 for a detailed derivation. 4.2 CURVATURE REGULARIZATION AND AMORTIZED GRADIENT In practice we use a variant of the curvature loss where Taylor expansions and gradients are evaluated at z̄ = z + z and ū = u+ u, RLLC(P̂ ) = E ∼N (0,δI)[‖fZ(z̄, ū)− (∇zfZ(z̄, ū) z +∇ufZ(z̄, ū) u)− fZ(z, u)‖22]. (8) When nz is large, evaluation and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobians evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. 
This leads to a new amortized curvature loss RLLC-Amor(P̂ , A,B) = E ∼N (0,δI)[‖fZ(z̄, ū)− (A(z̄, ū) z +B(z̄, ū) u − fZ(z, u))‖22]. (9) where A and B are function approximators to be optimized. Intuitively, the amortized curvature loss seeks—for any given (z, u)—to find the best choice of linear approximation induced by A(z, u) and B(z, u) such that the behavior of Fµ in the neighborhood of (z, u) is approximately linear. 5 RELATION TO PREVIOUS EMBED-TO-CONTROL APPROACHES In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC’s use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employed “locally-linear dynamics” of the form z ′ = A(z̄, ū)z +B(z̄, ū)u+ c(z̄, ū) where z̄ and ū are auxiliary random variables meant to be perturbations of z and u. When contrasted with (9), it is clear that neither A and B in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements prediction and curvature together. This makes controlling the stability of training much more difficult. Not only does PCC explicitly separate these two components, we are also the first to explicitly demonstrate theoretically and empirically that the curvature loss is important for iLQR. Furthermore, RCE does not incorporate PCC’s consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, the sole reliance on minimizing the prediction loss will result in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) versus how it is used at test-time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrapancy between how the model is trained versus how it is used at test-time for planning. Interestingly, E2C does include a regularization term that is akin to PCC’s consistency loss. However, as noted by the authors of RCE, E2C’s maximization of pair-marginal log-likelihoods of (xt, xt+1) as opposed to the conditional likelihood of xt+1 given xt means that E2C does not properly minimize the prediction loss prescribed by the PCC framework. 6 EXPERIMENTS In this section, we compare the performance of PCC with two model-based control algorithm baselines: RCE7 (Banijamali et al., 2018) and E2C (Watter et al., 2015), as well as running a thorough ablation study on various components of PCC. The experiments are based on the following continuous control benchmark domains (see Appendix D for more descriptions): (i) Planar System, (ii) Inverted Pendulum, (iii) Cartpole, (iv) 3-link manipulator, and (v) TORCS simulator8 (Wymann et al., 2000). 
To generate our training and test sets, each consists of triples (xt, ut, xt+1), we: (1) sample an underlying state st and generate its corresponding observation xt, (2) sample an action ut, and (3) obtain the next state st+1 according to the state transition dynamics, add it a zero-mean Gaussian noise with variance σ2Ins , and generate corresponding observation xt+1.To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair (st, ut) uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic (σ = 0) and stochastic scenarios. In the stochastic case, we add noise to the system with different values of σ and evaluate the models’ performance under various degree of noise. Each task has underlying start and goal states that are unobservable to the algorithms, instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely, c̄(zt, ut) = (zt − zgoal)>Q(zt − zgoal) + u>t Rut, and c̄(zT ) = (zT − zgoal)>Q(zT − zgoal), where zgoal is obtained by encoding the goal observation, and Q = κ · Inz , R = Inu9. Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region10. A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following 2-step pipeline. For each control task and algorithm, we (1) train 10 models 7For the RCE implementation, we directly optimize the ELBO loss in Equation (16) of the paper. We also tried the approach reported in the paper on increasing the weights of the two middle terms and then annealing them to 1. However, in practice this method is sensitive to annealing schedule and has convergence issues. 8See a control demo on the TORCS simulator at https://youtu.be/GBrgALRZ2fw 9According to the definition of latent cost c̄(z, u) = D ◦ c(z, u), its quadratic approximation is given by c̄(z, u) ≈ [ z − zgoal u ]> [∇z ∇u ] D◦c|z=zgoal,u=0 + 1 2 [ z − zgoal u ]> [∇2zz ∇2zu ∇2uz ∇2uu ] D◦c|z=zgoal,u=0 [ z − zgoal u ] . Yet for simplicity, we choose the same latent cost as in RCE and E2C with fixed, tunable matrices Q and R. 10Another possible metric is the average distance to goal, which has a similar behavior. independently, and (2) solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of 10× 10 = 100 control tasks). We report statistics averaged over all the tasks (in addition, we report the best performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines where the best model needs to be cherry picked, and training variance was not reported. Results Table 1 shows how PCC outperforms the baseline algorithms in the noiseless dynamics case by comparing means and standard deviations of the means on the different control tasks (for the case of added noise to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different than that of the best model, which justifies our rationale behind using the reproducible evaluation pipeline and avoid cherry-picking when reporting. 
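For completeness, the triple-generation protocol described at the beginning of this section can be written down as a short sketch; sample_state, sample_action, true_dynamics, and render are hypothetical stand-ins for the underlying simulator and are not part of the released pipeline.

import numpy as np

def generate_triples(sample_state, sample_action, true_dynamics, render,
                     num_samples, sigma=0.0):
    """Builds a dataset of triples (x_t, u_t, x_{t+1}): states and actions are sampled
    uniformly, the next state follows the true dynamics plus zero-mean Gaussian noise
    with standard deviation sigma, and observations are rendered from the states."""
    data = []
    for _ in range(num_samples):
        s = sample_state()                               # uniform over the state space
        u = sample_action()                              # uniform over the action space
        s_next = true_dynamics(s, u) + sigma * np.random.randn(*np.shape(s))
        data.append((render(s), u, render(s_next)))      # (x_t, u_t, x_{t+1})
    return data

Setting sigma = 0 recovers the deterministic scenario, and increasing sigma reproduces the noisy scenarios used to probe robustness.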
Figure 2 depicts 2 instances (randomly chosen from the 10 trained models) of the learned latent space representations on the noiseless dynamics of Planar and Inverted Pendulum tasks for PCC, RCE, and E2C models (additional representations can be found in Appendix E.2). Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC has a more interpretable representation of both Planar and Inverted Pendulum Systems than other baselines for both the noiseless dynamics case and the noisy case. Finally, in terms of computation, PCC demonstrates faster training with 64% improvement over RCE, and 2% improvement over E2C.11 Ablation Analysis On top of comparing the performance of PCC to the baselines, in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when consistency loss is omitted, the control performance degrades. This corroborates with the theoretical results in Section 3.2, which indicates the relationship of the consistency loss and the estimation error between the next-latent dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between control objective function and the model training loss is widened, due to the accumulation of state estimation error. The control performance also decreases when one removes the curvature loss. This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This also confirms with the analysis in Section 3.3 about sub-optimality performance and curvature of latent dynamics. Finally, we observe that the performance of models trained without amortized curvature loss are slightly better than with their amortized counterpart, however, since the amortized curvature loss does not require computing gradient of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-up of 6%, 9%, and 15% for Planar System, Inverted Pendulum, and Cartpole, respectively). 7 CONCLUSION In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and consistency between latent transition and 11Comparison jobs were deployed on the Planar system using Nvidia TITAN Xp GPU. the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, the low-curvature dynamics is desirable. All three elements of our PCC models are critical to the stability of model training and the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will exert different requirement for the learned dynamics. A future direction is to identify and investigate the additional bias for learning an effective embedding and latent dynamics for other type of model-based control and planning methods. 
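As a compact summary of the practical objective developed in Section 4, the sketch below assembles one PCC training step under the factorized-Gaussian assumptions used above. All module names, interfaces, and tensor shapes are illustrative assumptions, observations are treated as flat vectors, and the curvature term uses the amortized form (9); this is a sketch of the recipe, not the authors' code.

import torch
from torch.distributions import kl_divergence

def pcc_training_step(modules, batch, lam_p, lam_c, lam_cur, delta, optimizer):
    """One PCC update (illustrative). Assumed interfaces, returning Normal distributions:
      modules.encoder(x)             -> Q(z | x), also used as P_hat(z | x)
      modules.backward_q(z_next,x,u) -> Q(z | z_next, x, u)   (recognition model)
      modules.dynamics(z, u)         -> P_hat(z' | z, u)      (latent transition F)
      modules.decoder(z)             -> P_hat(x | z)
      modules.jacobian_nets(z, u)    -> (A, B) with shapes [batch, nz, nz], [batch, nz, nu]
    """
    x, u, x_next = batch
    q_znext = modules.encoder(x_next)                  # Q(z_hat_{t+1} | x_{t+1})
    z_next = q_znext.rsample()
    q_z = modules.backward_q(z_next, x, u)             # backward-facing recognition model
    z = q_z.rsample()

    recon = -modules.decoder(z_next).log_prob(x_next).sum(-1)     # reconstruction term of (6)
    kl_back = kl_divergence(q_z, modules.encoder(x)).sum(-1)      # KL(Q(z|.) || P_hat(z|x))
    entropy = q_znext.entropy().sum(-1)                           # H(Q(z_hat_{t+1}|x_{t+1}))
    trans_ll = modules.dynamics(z, u).log_prob(z_next).sum(-1)    # log P_hat(z_hat_{t+1}|z,u)

    pred_loss = (recon + kl_back - entropy - trans_ll).mean()     # bound on R'_3, eq. (6)
    consist_loss = (kl_back - entropy - trans_ll).mean()          # bound on R''_2, eq. (7)

    # Amortized curvature term, eq. (9).
    eps_z, eps_u = delta * torch.randn_like(z), delta * torch.randn_like(u)
    z_bar, u_bar = z + eps_z, u + eps_u
    A, B = modules.jacobian_nets(z_bar, u_bar)
    f_lin = modules.dynamics(z, u).mean \
            + (A @ eps_z.unsqueeze(-1)).squeeze(-1) \
            + (B @ eps_u.unsqueeze(-1)).squeeze(-1)
    curv_loss = (modules.dynamics(z_bar, u_bar).mean - f_lin).pow(2).sum(-1).mean()

    loss = lam_p * pred_loss + lam_c * consist_loss + lam_cur * curv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

As stated at the end of Section 3.3, the three weights lambda_p, lambda_c, lambda_cur are treated as hyper-parameters rather than set to the constants appearing in the lemmas.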
A TECHNICAL PROOFS OF SECTION 3 A.1 PROOF OF LEMMA 1 Following analogous derivations of Lemma 11 in Petrik et al. (2016) (for the case of finite-horizon MDPs), for the case of finite-horizon MDPs, one has the following chain of inequalities for any given control sequence {ut}T−1t=0 and initial observation x0: |L(U, P̂ , x0)− L(U,P, x0)| = ∣∣∣∣∣E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) | P̂ , x0 ] − E [ cT (xT )+ T−1∑ t=0 ct(xt, ut) |P, x0 ]∣∣∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] , where DTV is the total variation distance of two distributions. The first inequality is based on the result of the above lemma, the second inequality is based on Pinsker’s inequality (Ordentlich & Weinberger, 2005), and the third inequality is based on Jensen’s inequality (Boyd & Vandenberghe, 2004) of √ (·) function. Now consider the expected cumulative KL cost: E [ 1 T ∑T−1 t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, x0 ] with respect to some arbitrary control action sequence {ut}T−1t=0 . Notice that this arbitrary action sequence can always be expressed in form of deterministic policy ut = π′(xt, t) with some nonstationary state-action mapping π′. Therefore, this KL cost can be written as: E [ 1 T T−1∑ t=0 KL(P (·|xt, ut)||P̂ (·|xt, ut)) | P, π, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut))dπ′(ut|xt, t) | P, x0 ] =E [ 1 T T−1∑ t=0 ∫ ut∈U KL(P (·|xt, ut)||P̂ (·|xt, ut)) · dπ′(ut|xt, t) dU(ut) · dU(ut) | P, x0 ] ≤U · Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] , (10) where the expectation is taken over the state-action occupation measure 1T ∑T−1 t=0 P(xt = x, ut = u|x0, U) of the finite-horizon problem that is induced by data-sampling policy U . The last inequality is due to change of measures in policy, and the last inequality is due to the facts that (i) π is a deterministic policy, (ii) dU(ut) is a sampling policy with lebesgue measure 1/U over all control actions, (iii) the following bounds for importance sampling factor holds: ∣∣∣dπ′(ut|xt,t)dU(ut) ∣∣∣ ≤ U . To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model P̂ and control sequence U : |L(U, P̂ , x0)− L(U,P, x0)| ≤ √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] . (11) For the second part of the proof, consider the solution of (SOC3), namely (U∗3 , P̂ ∗ 3 ). Using the optimality condition of this problem one obtains the following inequality: L(U∗3 , P̂ ∗ 3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≤L(U∗1 , P̂ ∗3 , x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] . (12) Using the results in (11) and (12), one can then show the following chain of inequalities: L(U∗1 , P, c, x0) ≥L(U∗1 , P̂ ∗3 , c, x0)− √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] =L(U∗1 , P̂ ∗ 3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P̂ ∗3 , c, x0) + √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] ≥L(U∗3 , P, c, x0)− 2 √ 2T 2 · cmaxU · √ Ex,u [ KL(P (·|x, u)||P̂ ∗3 (·|x, u)) ] , (13) where U∗1 is the optimizer of (SOC1) and (U ∗ 3 , P̂ ∗ 3 ) is the optimizer of (SOC3). 
Therefore by letting λ3 = √ 2T 2 · cmaxU and R3(P̂ ) = Ex,u [ KL(P (·|x, u)||P̂ (·|x, u)) ] and by combining all of the above arguments, the proof of the above lemma is completed. A.2 PROOF OF LEMMA 2 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 , and any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P̂ ,x0] = ∫ x0:t−1∈X t t−1∏ k=1 dP̂ (xk|xk−1, uk−1)·∫ zt∈Z ∫ z′t−1∈Z dE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1) ∫ xt∈X dD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) . Now consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] = ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1∈Z dF (zt−1|z′t−2, ut−2)( c̄(zt−1, ut−1) + ∫ xt−1∈X dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) ) ≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP̂ (xk|xk−1, uk−1) · ∫ zt−2∈Z dE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) ) + cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) By continuing the above expansion, one can show that∣∣∣E [L(U,F, c, z0) | E, x0]− L(U, P̂ , c, x0)∣∣∣ ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 KL((E ◦ P̂ )(·|xt, ut)||(F ◦ E)(·|xt, ut)) | P, x0 ] , where the last inequality is based on Jensen’s inequality of √ (·) function. For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2): L(U∗3 , P̂ ∗ 3 , c, x0) ≥E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0]− √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] =E [L(U∗3 , F ∗3 , c, z0) | E∗3 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] − 2 √ 2T 2 · cmaxU · √ Ex,u [ KL((E∗3 ◦ P̂ ∗3 )(·|x, u)||(F ∗3 ◦ E∗3 )(·|x, u)) ] ≥L(U∗2 , P̂ ∗2 , c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2 · √ Ex,u [ KL((E∗2 ◦ P̂ ∗2 )(·|x, u)||(F ∗2 ◦ E∗2 )(·|x, u)) ] ︸ ︷︷ ︸ R′′2 (P̂ ∗ 2 ) , (14) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A.3 PROOF OF COROLLARY 1 To start with, the total-variation distance DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) can be bounded by the following inequality using triangle inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV (∫ x′∈X dP (x′|x, u)E(·|x′)|| ∫ x′∈X dP̂ (x′|x, u)E(·|x′) ) ≤DTV (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) +DTV ( P (·|x, u)||P̂ (·|x, u) ) where the second inequality follows from the convexity property of the DTV-norm (w.r.t. convex weights E(·|x′), ∀x′). Then by Pinsker’s inequality, one obtains the following inequality: DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) + √ 2KL ( P (·|x, u)||P̂ (·|x, u) ) . 
(15) We now analyze the batch consistency regularizer: R′′2 (P̂ ) = Ex,u,x′ [KL(E(·|x′)||(F ◦ E)(·|x, u))] and connect it with the inequality in (15). Using Jensen’s inequality of convex function x log x, for any observation-action pair (x, u) sampled from Uτ , one can show that∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) ≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′)) . (16) Therefore, for any observation-control pair (x, u) the following inequality holds: KL (∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) = ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (∫ x′∈X dP (x′|x, u)E(z′|x′) ) − ∫ x′∈X dP (x′|x, u) log (g(x′|x, u)) ≤ ∫ x′∈X dP (x′|x, u) ∫ z′∈Z dE(z′|x′) log (E(z′|x′))− ∫ x′∈X dP (x′|x, u) log (g(x′|x, u)) =KL(E(·|x′)||(F ◦ E)(·|x, u)) (17) By taking expectation over (x, u) one can show that Ex,u [ KL( ∫ x′∈X dP (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u)) ] is the lower bound of the batch consistency regularizer. Therefore, the above arguments imply that DTV (∫ x′∈X dP̂ (x′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) ≤ √ 2 √ R′′2 (P̂ ) +R3(P̂ ). (18) The inequality is based on the property that √ a+ √ b ≤ √ 2 √ a+ b. Equipped with the above additional results, the rest of the proof on the performance bound follows directly from the results from Lemma 2, in which here we further upper-bound DTV (∫ x′∈X dP̂ (x ′|x, u)E(·|x′)||(F ◦ E)(·|x, u) ) , when P̂ = P̂ ∗2 . A.4 PROOF OF LEMMA 3 For the first part of the proof, at any time-step t ≥ 1, for any arbitrary control action sequence {ut}T−1t=0 and for any model P̂ , consider the following decomposition of the expected cost : E[c(xt, ut) | P, x0] = cmax · ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−1, ut−1)||P̂ (·|xt−1, ut−1)) + ∫ x0:t−1∈X t t−1∏ k=1 dP (xk|xk−1, uk−1) ∫ zt∈Z ∫ z′t−1∈Z dE(z′t−1|xt−1)F (zt|z′t−1, ut−1)︸ ︷︷ ︸ dG(zt|xt−1,ut−1) ∫ xt∈X dD(xt|zt)c(xt, ut)︸ ︷︷ ︸ c̄(zt,ut) . Now consider the following cost function: E[c(xt−1, ut−1) + c(xt, ut) | P̂ , x0] for t > 2. Using the above arguments, one can express this cost as E[c(xt−1, ut−1) + c(xt, ut) | P, x0] = ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ z′t−2∈Z dE(z′t−2|xt−2) · ∫ zt−1 dF (zt−1|z′t−2, ut−2)·( c̄(zt−1, ut−1) + ∫ xt−1 dD(xt−1|zt−1) ∫ z′t−1,zt∈Z dE(z′t−1|xt−1)dF (zt|z′t−1, ut−1)c̄(zt, ut) ) + cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j)) ≤ ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) · ∫ zt−2∈Z dE(zt−2|xt−2)·∫ zt−1 dF (zt−1|zt−2, ut−2) ( c̄(zt−1, ut−1) + ∫ zt∈Z dF (zt|zt−1, ut−1)c̄(zt, ut) ) + cmax · 2∑ j=1 j · ∫ x0:t−j t−j∏ k=1 dP (xk|xk−1, uk−1)DTV(P (·|xt−j , ut−j)||P̂ (·|xt−j , ut−j)) + cmax · ∫ x0:t−2∈X t−1 t−2∏ k=1 dP (xk|xk−1, uk−1) ·DTV (∫ x′∈X dP̂ (x′|xt−2, ut−2)E(·|x′)|| ∫ z∈Z dE(z|xt−2)F (·|z, ut−2) ) . Continuing the above expansion, one can show that |E [L(U,F, c, z0) | E, x0]− L(U,P, x0)| ≤T 2 · cmax E [ 1 T T−1∑ t=0 DTV(P (·|xt, ut)||P̂ (·|xt, ut)) +DTV( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + √ KL( ∫ x′∈X dP̂ (x′|xt, ut)E(·|x′)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤ √ 2T 2 · cmax E [ 1 T T−1∑ t=0 √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + √ KL(P (·|xt, ut)||P̂ (·|xt, ut)) + KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ] ≤2T 2 · cmax √√√√E[ 1 T T−1∑ t=0 3KL(P (·|xt, ut)||P̂ (·|xt, ut)) + 2KL(E(·|xt+1)||(F ◦ E)(·|xt, ut)) | P, x0 ] , where the last inequality is based on the fact that √ a+ √ b ≤ √ 2 √ a+ b and is based on Jensen’s inequality of √ (·) function. 
For the second part of the proof, following similar arguments from Lemma 2, one can show the following inequality for the solution of (SOC3) and (SOC2): L(U∗1 , P, c, x0) ≥E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0]− √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) =E [L(U∗1 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) − 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) ≥E [L(U∗2 , F ∗2 , c, z0) | E∗2 , x0] + √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) − 2 √ 2T 2 · cmaxU · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ) ≥L(U∗2 , P, c, x0)− 2 √ 2T 2 · cmaxU︸ ︷︷ ︸ λ2 · √ 2R′′2 (P̂ ∗ 2 ) + 3R3(P̂ ∗ 2 ), (19) where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof. A.5 PROOF OF LEMMA 4 A Recap of the Result: Let (U∗LLC, P̂ ∗LLC) be a LLC solution to (SOC-LLC) and U∗1 be a solution to (SOC1). Suppose the nominal latent state-action pair {(zt,ut)}T−1t=0 satisfies the condition: (zt,ut) ∼ N ((z∗2,t, u∗2,t), δ2I), where {(z∗2,t, u∗2,t}T−1t=0 is the optimal trajectory of problem (SOC2). Then with probability 1 − η, we have L(U∗1 , P, c, x0) ≥ L(U∗LLC, P, c, x0) − 2λLLC √ R2(P̂ ∗LLC) +RLLC(P̂ ∗ LLC) . Discussions of the effect of δ on LLC Performance: The result of this lemma shows that when the nominal state and actions are δ-close to the optimal trajectory of (SOC2), i.e., at each time step (zt,ut) is a sample from the Gaussian distribution centered at (z∗2,t, u ∗ 2,t) with standard deviation δ, then one can obtain a performance bound of LLC algorithm that is in terms of the regularization loss RLLC. To quantify the above condition, one can use Mahalanobis distance (De Maesschalck et al., 2000) to measure the distance of (zt,ut) to distribution N ((z∗2,t, u∗2,t), δ2I), i.e., we want to check for the condition: ‖(zt,ut)− (z∗2,t, u∗2,t)‖ δ ≤ ′, ∀t, for any arbitrary error tolerance ′ > 0. While we cannot verify the condition without knowing the optimal trajectory {(z∗2,t, u∗2,t)}T−1t=0 , the above condition still offers some insights in choosing the parameter δ based on the trade-off of designing nominal trajectory {(zt,ut)}T−1t=0 and optimizing RLLC. When δ is large, the low-curvature regularization imposed by the RLLC regularizer will cover a large portion of the state-action space. In the extreme case when δ →∞, RLLC can be viewed as a regularizer that enforces global linearity. Here the trade-off is that the loss RLLC is generally higher, which in turn degrades the performance bound of the LLC control algorithm in Lemma 4. On the other hand, when δ is small the low-curvature regularization in RLLC only covers a smaller region of the latent state-action space, and thus the loss associated with this term is generally lower (which provides a tighter performance bound in Lemma 4). However the performance result will only hold when (zt,ut) happens to be close to (z∗2,t, u ∗ 2,t) at each time-step t ∈ {0, . . . , T − 1}. Proof: For simplicity, we will focus on analyzing the noiseless case when the dynamics is deterministic (i.e., Σw = 0). Extending the following analysis for the case of non-deterministic dynamics should be straight-forward. First, consider any arbitrary latent state-action pair (z, u), such that the corresponding nominal state-action pair (z,u) is constructed by z = z− δz, u = u− δu, where (δz, δu) is sampled from the Gaussian distribution N (0, δ2I). 
(The random vectors are denoted as (δz′, δu′)) By the two-tailed Bernstein’s inequality (Murphy, 2012), for any arbitrarily given η ∈ (0, 1] one has the following inequality with probability 1− η: |fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| ≤ √ 2 log(2/η) √ V(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)] + ∣∣E(δz′,δu′)∼N (0,δ2I)[fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)]∣∣ ≤(1 + √ 2 log(2/η)) ( E(δz′,δu′)∼N (0,δ2I) [ ‖fZ(z,u) +A(z,u)δz′ +B(z,u)δu′ − fZ(z, u)‖2 ]︸ ︷︷ ︸ RLLC(P̂ |z,u) )1/2 . The second inequality is due to the basic fact that variance is less than second-order moment of a random variable. On the other hand, at each time step t ∈ {0, . . . , T −1} by the Lipschitz property of the immediate cost, the value function Vt(z) = minUt:T−1 E [ cT (zT ) + ∑T−1 τ=t cτ (zτ , uτ ) | zt = z ] is also Lipchitz with constant (T − t+ 1)clip. Using the Lipschitz property of Vt+1, for any (z, u) and (δz, δu), such that (z,u) = (z − δz, u− δu), one has the following property: |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z, u))| ≤(T − t)clip · |fZ(z,u) +A(z,u)δz +B(z,u)δu− fZ(z, u)| , (20) Therefore, at any arbitrary state-action pair (z̃, ũ), for z = z − δz, and u = ũ− δu with Gaussian sample (δz, δu) ∼ N (0, δ2I), the following inequality on the value function holds w.p. 1− η: Vt+1(fZ(z̃, ũ)) ≥ Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ), which further implies ct(z̃, ũ) + Vt+1(fZ(z̃, ũ)) ≥ct(z̃, ũ) + Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− (T − t)clip(1 + √ 2 log(2/η)) · √ RLLC(P̂ |z̃, ũ), Now let ũ∗ be the optimal control w.r.t. Bellman operator Tt[Vt+1](z̃) at any latent state z̃. Based on the assumption of this lemma, at each state z̃ the nominal latent state-action pair (z,u) is generated by perturbing (z̃, ũ∗) with Gaussian sample (δz, δu) ∼ N (0, δ2I) that is in form of z = z̃ − δz, u = ũ− δu. Then by the above arguments the following chain of inequalities holds w.p. 1− η: Tt[Vt+1](z̃) := min ũ ct(z̃, ũ) + Vt+1(fZ(z̃, ũ)) =ct(z̃, ũ ∗) + Vt+1(fZ(z̃, ũ ∗)) ≥ct(z̃, ũ∗) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − |Vt+1(z ′ +A(z,u)δz +B(z,u)δu)− Vt+1(fZ(z̃, ũ∗))| ≥ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u) ≥min δu ct(z̃,u + δu) + Vt+1(fZ(z,u) +A(z,u)δz +B(z,u)δu) − (T − t)clip(1 + √ 2 log(2/η)) √ max z,u RLLC(P̂ |z, u) (21) Recall the LLC loss function is given by RLLC(P̂ ) = Ex,u [ E [ RLLC(P̂ |z, u) | z ] | E ] . Also consider the Bellman operator w.r.t. latent SOC: Tt[V ](z) = minu ct(z, u) + V (fZ(z, u)), and the Bellman operator w.r.t. LLC: Tt,LLC[V ](z) = minδu ct(z, δu+ u) + V (fZ(z,u) +A(z,u)δz + B(z,u)δu). Utilizing these definitions, the inequality in (21) can be further expressed as Tt[Vt+1](z̃) ≥Tt,LLC[Vt+1](z̃)− (T − t)clipcmax(1 + √ 2 log(2/η)) √ UX √ RLLC(P̂ ), (22) This inequality is due to the fact that all latent states are generated by the encoding observations, i.e., z ∼ E(·|x), and thus by following analogous arguments as in the proof of Lemma 1, one has max z,u RLLC(P̂ |z, u) ≤ UXEx,u [ E [ RLLC(P̂ |z, u) | z ] | E ] = UXRLLC(P̂ ). Therefore, based on the dynamic programming result that bounds the difference of value function w.r.t. different Bellman operators in finite-horizon problems (for example see Theorem 1.3 in Bertsekas (1995)), the above inequality implies the following bound in the value function, w.p. 
1 − η:

$$
\begin{aligned}
\min_{U,\hat{P}} L(U, F, c, z_0) &\geq L(U^*_{\mathrm{LLC}}, \hat{P}^*_{\mathrm{LLC}}, c, z_0) - \sum_{t=1}^{T-1} (T-t)\cdot c_{\mathrm{lip}} c_{\max}\cdot T\cdot \left(1 + \sqrt{2\log(2T/\eta)}\right)\cdot\sqrt{U_X}\cdot\sqrt{R_{\mathrm{LLC}}(\hat{P}^*_{\mathrm{LLC}})} \\
&\geq L(U^*_{\mathrm{LLC}}, \hat{P}^*_{\mathrm{LLC}}, c, z_0) - T^2\cdot c_{\mathrm{lip}} c_{\max}\cdot\left(1 + \sqrt{2\log(2T/\eta)}\right)\cdot\sqrt{U_X}\cdot\sqrt{R_{\mathrm{LLC}}(\hat{P}^*_{\mathrm{LLC}})}.
\end{aligned}
\tag{23}
$$

Notice that here we replace η in the result of (22) with η/T. In order to prove (23), we apply (22) at each t ∈ {0, . . . , T − 1}, and this replacement is the result of applying the union bound (Murphy, 2012), to ensure that (23) holds with probability 1 − η. The proof is therefore completed by combining the above result with that of Lemma 3.

B THE LATENT SPACE ILQR ALGORITHM

B.1 PLANNING IN THE LATENT SPACE (HIGH-LEVEL DESCRIPTION)

We follow the same control scheme as in Banijamali et al. (2018). Namely, we use the iLQR solver (Li & Todorov, 2004) to plan in the latent space. Given a start observation x_start and a goal observation x_goal, corresponding to underlying states {s_start, s_goal}, we encode the observations to retrieve z_start and z_goal. The procedure then goes as follows: we initialize a random trajectory (sequence of actions), feed it to the iLQR solver, and apply the first action of the trajectory the solver outputs. We observe the next observation returned from the system (closed-loop control) and feed the updated trajectory to the iLQR solver. This procedure continues until it reaches the end of the problem horizon. We use a receding-window approach, where at every planning step the solver only optimizes a fixed-length sequence of actions, independent of the problem horizon; a minimal sketch of this loop is given below.

B.2 DETAILS ABOUT ILQR IN THE LATENT SPACE

Consider the latent state SOC problem

$$
\min_{U}\; \mathbb{E}\left[c_T(z_T) + \sum_{t=0}^{T-1} c_t(z_t, u_t) \,\Big|\, z_0\right].
$$

At each time instance t ∈ {0, . . . , T} the value function of this problem is given by

$$
V_T(z) = c_T(z), \qquad V_t(z) = \min_{U_{t:T-1}} \mathbb{E}\left[c_T(z_T) + \sum_{\tau=t}^{T-1} c_\tau(z_\tau, u_\tau) \,\Big|\, z_t = z\right].
$$
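The closed-loop, receding-window procedure of Section B.1 can be summarized in a short planning loop. This is only a minimal sketch, not the authors' implementation; `encoder`, `ilqr_solve`, `env.step`, and the window length are hypothetical stand-ins for the learned encoder, the iLQR solver, the controlled system, and the fixed planning window:

```python
import numpy as np

def latent_ilqr_control(env, encoder, ilqr_solve, x_start, x_goal,
                        problem_horizon, plan_horizon, action_dim):
    """Closed-loop receding-horizon control in the latent space (sketch)."""
    z_goal = encoder(x_goal)
    x = x_start
    actions = 0.1 * np.random.randn(plan_horizon, action_dim)  # random initial trajectory
    for t in range(problem_horizon):
        z = encoder(x)                              # encode the current observation
        actions = ilqr_solve(z, z_goal, actions)    # re-optimize the fixed-length window
        x = env.step(actions[0])                    # apply only the first action (closed loop)
        # warm-start the next solve by shifting the window forward by one step
        actions = np.concatenate([actions[1:], actions[-1:]], axis=0)
    return x
```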
1. What is the focus of the paper in terms of the problem it addresses?
2. What are the desired properties of the latent representation for LLC algorithms according to the authors?
3. How does the proposed learning framework satisfy these desired properties?
4. Are there any concerns or suggestions regarding the clarity and readability of the equations in section 4.2?
5. How do the methodology and insights presented in the paper compare to prior works in the field?
6. Can you explain the significance of the improvements shown in the experiments compared to competing methods?
7. How do the ablation studies validate the different components of the final loss?
8. In what ways might this paper inspire future research in model-based control and planning?
Review
Review This paper considers, from a high level, the problem of learning a latent representation of high-dimensional observations with underlying dynamics for control. The authors specifically describe some desiderata for latent representations for LLC algorithms. The authors rigorously construct a learning framework that can satisfy the desiderata and then show how this can be tractably instantiated. The paper overall is clear; however, there are many equations in 4.2 with heavy subscripting, making it sometimes difficult to read. The authors could attempt to better highlight the more critical parts of their propositions (e.g. eq. 8/9). The methodology and insights appear novel and well motivated; however, I am not familiar with much of the prior work. The experiments show substantial improvement compared to competing methods. The authors also motivate well why these improvements over the existing methods should occur and provide ablations to validate all the components of the final loss. Overall the paper appears very solid and may motivate insights and research in more complex model-based control and planning.
ICLR
Title Hadamard Product for Low-rank Bilinear Pooling Abstract Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property. 1 INTRODUCTION Bilinear models (Tenenbaum & Freeman, 2000) provide richer representations than linear models. To exploit this advantage, fully-connected layers in neural networks can be replaced with bilinear pooling. The outer product of two vectors (or Kronecker product for matrices) is involved in bilinear pooling; as a result, all pairwise interactions among given features are considered. Recently, a successful application of this technique is used for fine-grained visual recognition (Lin et al., 2015). However, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain a model structure and computational resources. For example, an outer product of two feature vectors, both of which have 1K-dimensionality, produces a million-dimensional feature vector. Therefore, for classification problems, the choice of the number of target classes is severely constrained, because the number of parameters for a standard linear classifier is determined by multiplication of the size of the high-dimensional feature vector and the number of target classes. Compact bilinear pooling (Gao et al., 2016) reduces the quadratic expansion of dimensionality by two orders of magnitude, retaining the performance of the full bilinear pooling. This approximation uses a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013), which utilizes the useful property that Ψ(x ⊗ y, h, s) = Ψ(x, h, s) ∗ Ψ(y, h, s), which means the projection of the outer product of two vectors is the convolution of the two projected vectors. Here, Ψ is the proposed projection function, and h and s are parameters randomly sampled by the algorithm. Nevertheless, compact bilinear pooling has two shortcomings. One comes from the sampling approach. Compact bilinear pooling relies on a favorable property, E[〈Ψ(x, h, s), Ψ(y, h, s)〉] = 〈x, y〉, which provides a basis to use projected features instead of original features.
Yet, calculating the exact expectation is computationally intractable, so the random parameters h and s are fixed during training and evaluation. This practical choice leads to the second shortcoming: the projected dimension of compact bilinear pooling should be large enough to minimize the bias from the fixed parameters. Practical choices are 10K and 16K for 512 and 4096-dimensional inputs, respectively (Gao et al., 2016; Fukui et al., 2016). Though these compacted dimensions are reduced by two orders of magnitude compared with full bilinear pooling, such high-dimensional features could be a bottleneck for computationally complex models. We propose low-rank bilinear pooling using the Hadamard product (element-wise multiplication), which is commonly available in scientific computing frameworks as one of the basic tensor operations. The proposed method factors a three-dimensional weight tensor for bilinear pooling into three two-dimensional weight matrices, which enforces the rank of the weight tensor to be low. As a result, the two input feature vectors are linearly projected by two of the weight matrices, combined by Hadamard product, and then followed by a linear projection using the third weight matrix. For example, the projected vector z is represented by W_z^T (W_x^T x ◦ W_y^T y), where ◦ denotes Hadamard product. We also explore adding non-linearity to the low-rank bilinear pooling using non-linear activation functions, and shortcut connections inspired by deep residual learning (He et al., 2016). Then, we show that it becomes a simple baseline model (Antol et al., 2015) or one learning block of Multimodal Residual Networks (Kim et al., 2016b) viewed as a low-rank bilinear model, an interpretation that has not been made before. Our contributions are as follows: First, we propose low-rank bilinear pooling to approximate full bilinear pooling as a substitute for compact bilinear pooling. Second, Multimodal Low-rank Bilinear Attention Networks (MLB), which have an efficient attention mechanism using low-rank bilinear pooling, are proposed for visual question-answering tasks. MLB achieves a new state-of-the-art performance and has a better parsimonious property. Finally, ablation studies to explore alternative choices, e.g. network depth, non-linear functions, and shortcut connections, are conducted. 2 LOW-RANK BILINEAR MODEL Bilinear models use a quadratic expansion of a linear transformation, considering every pair of features:

$$
f_i = \sum_{j=1}^{N}\sum_{k=1}^{M} w_{ijk}\, x_j y_k + b_i = x^T W_i y + b_i \tag{1}
$$

where x and y are input vectors, W_i ∈ R^{N×M} is a weight matrix for the output f_i, and b_i is a bias for the output f_i. Notice that the number of parameters is L × (N × M + 1) including a bias vector b, where L is the number of output features. Pirsiavash et al. (2009) suggest a low-rank bilinear method to reduce the rank of the weight matrix W_i to have a smaller number of parameters, for regularization. They rewrite the weight matrix as W_i = U_i V_i^T, where U_i ∈ R^{N×d} and V_i ∈ R^{M×d}, which restricts the rank of W_i to be at most d ≤ min(N, M). Based on this idea, f_i can be rewritten as follows:

$$
f_i = x^T W_i y + b_i = x^T U_i V_i^T y + b_i = \mathbb{1}^T (U_i^T x \circ V_i^T y) + b_i \tag{2}
$$

where 1 ∈ R^d denotes a column vector of ones, and ◦ denotes Hadamard product. Still, we need two third-order tensors, U and V, for a feature vector f whose elements are {f_i}. To reduce the order of the weight tensors by one, we replace 1 with P ∈ R^{d×c} and b_i with b ∈ R^c, and redefine U ∈ R^{N×d} and V ∈ R^{M×d}, to get a projected feature vector f ∈ R^c.
Then, we get:

$$
f = P^T (U^T x \circ V^T y) + b \tag{3}
$$

where d and c are hyperparameters that decide the dimension of the joint embeddings and the output dimension of the low-rank bilinear model, respectively. 3 LOW-RANK BILINEAR POOLING The low-rank bilinear model in Equation 3 can be implemented using two linear mappings without biases for embedding the two input vectors, a Hadamard product to learn joint representations in a multiplicative way, and a linear mapping with a bias to project the joint representations into an output vector of a given output dimension. We use this structure as a pooling method for deep neural networks. Now, we discuss possible variations of low-rank bilinear pooling based on this model, inspired by studies of neural networks. 3.1 FULL MODEL In Equation 3, the linear projections U and V can have their own bias vectors. As a result, linear models for each input vector, x and y, are integrated in an additive form, called the full model in linear regression in statistics:

$$
f = P^T \big( (U^T x + b_x) \circ (V^T y + b_y) \big) + b = P^T \big( U^T x \circ V^T y + U'^T x + V'^T y \big) + b'. \tag{4}
$$

Here, U'^T = diag(b_y) · U^T, V'^T = diag(b_x) · V^T, and b' = b + P^T (b_x ◦ b_y). 3.2 NONLINEAR ACTIVATION Applying non-linear activation functions may help to increase the representative capacity of the model. The first candidate is to apply non-linear activation functions right after the linear mappings of the input vectors:

$$
f = P^T \big( \sigma(U^T x) \circ \sigma(V^T y) \big) + b \tag{5}
$$

where σ denotes an arbitrary non-linear activation function that maps any real value into a finite interval, e.g. sigmoid or tanh. If the two inputs come from different modalities, the statistics of the two inputs may be quite different from each other, which may result in interference, since the gradient with respect to each input directly depends on the other input in the Hadamard product. Additionally applying an activation function after the Hadamard product is not appropriate, since activation functions would then appear twice in calculating gradients. However, applying the activation function only after the Hadamard product is an alternative choice (we explore this option in Section 5):

$$
f = P^T \sigma\big( U^T x \circ V^T y \big) + b. \tag{6}
$$

Note that the use of an activation function in low-rank bilinear pooling can be found in an implementation of the simple baseline for the VQA dataset (Antol et al., 2015), without an interpretation in terms of low-rank bilinear pooling. Notably, Wu et al. (2016c) studied the learning behavior of multiplicative integration in RNNs with discussion and empirical evidence. 3.3 SHORTCUT CONNECTION When we apply the two previous techniques, the full model and non-linear activation, the linear models of the two inputs are nested inside the non-linear activation functions. To avoid this unfortunate situation, we add shortcut connections as explored in residual learning (He et al., 2016):

$$
f = P^T \big( \sigma(U^T x) \circ \sigma(V^T y) \big) + h_x(x) + h_y(y) + b \tag{7}
$$

where h_x and h_y are shortcut mappings. For linear projection, the shortcut mappings are linear mappings. Notice that this formulation is a generalized form of the one-block layered MRN (Kim et al., 2016b). However, the shortcut connections are not used in our proposed model, as explained in Section 6. 4 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS In this section, we apply low-rank bilinear pooling to propose an efficient attention mechanism for visual question-answering tasks, based on the interpretation of the previous section. We assume that the inputs are a question embedding vector q and a set of visual feature vectors F over an S × S lattice space.
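The low-rank bilinear pooling of Equations 3 and 5 above reduces to two linear projections, an element-wise product, and a final linear projection. A minimal numpy sketch with toy dimensions and illustrative names (this is not the authors' released code):

```python
import numpy as np

def low_rank_bilinear_pool(x, y, U, V, P, b, activation=np.tanh):
    """Equation 5: f = P^T (sigma(U^T x) o sigma(V^T y)) + b.

    With the identity activation this reduces to the linear model of Equation 3;
    in that case each output coordinate f_i equals the bilinear form
    x^T U diag(P[:, i]) V^T y, whose rank is at most d.
    """
    joint = activation(U.T @ x) * activation(V.T @ y)  # Hadamard product in the d-dim joint space
    return P.T @ joint + b

# toy usage
rng = np.random.default_rng(0)
N, M, d, c = 16, 12, 8, 4
x, y = rng.normal(size=N), rng.normal(size=M)
f = low_rank_bilinear_pool(x, y, rng.normal(size=(N, d)), rng.normal(size=(M, d)),
                           rng.normal(size=(d, c)), np.zeros(c))
```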
4.1 LOW-RANK BILINEAR POOLING IN ATTENTION MECHANISM The attention mechanism uses an attention probability distribution α over the S × S lattice space. Here, using low-rank bilinear pooling, α is defined as

$$
\alpha = \mathrm{softmax}\Big( P_\alpha^T \big( \sigma(U_q^T q \cdot \mathbb{1}^T) \circ \sigma(V_F^T F^T) \big) \Big) \tag{8}
$$

where α ∈ R^{G×S²}, P_α ∈ R^{d×G}, σ is the hyperbolic tangent function, U_q ∈ R^{N×d}, q ∈ R^N, 1 ∈ R^{S²}, V_F ∈ R^{M×d}, and F ∈ R^{S²×M}. If G > 1, multiple glimpses are explicitly expressed as in Fukui et al. (2016), conceptually similar to Jaderberg et al. (2015). The softmax function is applied to each row vector of α. The bias terms are omitted for simplicity. 4.2 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS The attended visual feature v̂ is a linear combination of F_i with coefficients α_{g,i}. Each attention probability distribution α_g is for a glimpse g. For G > 1, v̂ is the concatenation of the resulting vectors v̂_g:

$$
\hat{v} = \big\Vert_{g=1}^{G} \sum_{s=1}^{S^2} \alpha_{g,s} F_s \tag{9}
$$

where ∥ denotes concatenation of vectors. The posterior probability distribution is the output of a softmax function, whose input is the result of another low-rank bilinear pooling of q and v̂:

$$
p(a \,|\, q, F; \Theta) = \mathrm{softmax}\Big( P_o^T \big( \sigma(W_q^T q) \circ \sigma(V_{\hat{v}}^T \hat{v}) \big) \Big) \tag{10}
$$

$$
\hat{a} = \arg\max_{a \in \Omega}\; p(a \,|\, q, F; \Theta) \tag{11}
$$

where â denotes a predicted answer, Ω is the set of candidate answers, and Θ is an aggregation of all model parameters. 5 EXPERIMENTS In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls all factors except one to assess its effect on accuracy. Based on MRN (Kim et al., 2016b), we start our assessments with an initial option of G = 1 and the shortcut connections of MRN, called Multimodal Attention Residual Networks (MARN). Notice that we use one embedding for each visual feature for better performance, based on our preliminary experiment (not shown). We attribute this choice to the attention mechanism for visual features, which provides more capacity to learn visual features. We use the same hyper-parameters as MRN (Kim et al., 2016b) unless explicitly mentioned otherwise. The VQA dataset (Antol et al., 2015) is used as the primary dataset, and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used. Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split. For comprehensive reviews of VQA tasks, please refer to Wu et al. (2016a) and Kafle & Kanan (2016a). The details about preprocessing, question and vision embedding, and hyperparameters used in our experiments are described in Appendix A. The source code for the experiments is available in a Github repository (https://github.com/jnhwkim/MulLowBiVQA). Number of Learning Blocks Kim et al. (2016b) argue that the three-block layered MRN shows the best performance among one to four-block layered models, taking advantage of residual learning. However, we speculate that the introduction of an attention mechanism makes deep networks hard to optimize. Therefore, we explore the number of learning blocks of MARN, which has an attention mechanism using low-rank bilinear pooling. Number of Glimpses Fukui et al. (2016) show that an attention mechanism with two glimpses was an optimal choice. In a similar way, we assess one, two, and four-glimpse models. Non-Linearity We assess three options for applying non-linearity to low-rank bilinear pooling: vanilla (none), before the Hadamard product as in Equation 5, and after the Hadamard product as in Equation 6.
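Equations 8-11 above chain two low-rank bilinear poolings: one produces the attention map over the S × S lattice and one scores the answers. A minimal single-glimpse (G = 1) sketch; all names are illustrative and bias terms are omitted as in the text:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlb_attention(q, F, U_q, V_F, P_alpha, W_q, V_v, P_o):
    """Sketch of Equations 8-11 for one glimpse.

    q: (N,) question embedding; F: (S*S, M) visual features on the lattice.
    """
    # Equation 8: attention weights over the S x S lattice
    joint = np.tanh(U_q.T @ q)[:, None] * np.tanh(V_F.T @ F.T)   # (d, S*S)
    alpha = softmax(P_alpha.T @ joint)                           # (1, S*S) for G = 1
    # Equation 9: attended visual feature as a weighted combination of the rows of F
    v_hat = (alpha @ F).ravel()                                  # (M,)
    # Equations 10-11: answer distribution from a second low-rank bilinear pooling
    logits = P_o.T @ (np.tanh(W_q.T @ q) * np.tanh(V_v.T @ v_hat))
    return alpha.ravel(), softmax(logits)
```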
Answer Sampling VQA (Antol et al., 2015) dataset has ten answers from unique persons for each question, while Visual Genome (Krishna et al., 2016) dataset has a single answer for each question. Since difficult or ambiguous questions may have divided answers, the probabilistic sampling from the distribution of answers can be utilized to optimize for the multiple answers. An instance 2 can be found in Fukui et al. (2016). We simplify the procedure as follows: p(a1) = { |a1|/Σi|ai|, if |a1| ≥ 3 0, otherwise (12) p(a0) = 1− p(a1) (13) where |ai| denotes the number of unique answer ai in a set of multiple answers, a0 denotes a mode, which is the most frequent answer, and a1 denotes the secondly most frequent answer. We define the divided answers as having at least three answers which are the secondly frequent one, for the evaluation metric of VQA (Antol et al., 2015), accuracy(ak) = min (|ak|/3, 1) . (14) 2https://github.com/akirafukui/vqa-mcb/blob/5fea8/train/multi_att_2_ glove/vqa_data_provider_layer.py#L130 The rate of the divided answers is approximately 16.40%, and only 0.23% of questions have more than two divided answers in VQA dataset. We assume that it eases the difficulty of convergence without severe degradation of performance. Shortcut Connection The contribution of shortcut connections for residual learning is explored based on the observation of the competitive performance of single-block layered model. Since the usefulness of shortcut connections is linked to the network depth (He et al., 2016). Data Augmentation The data augmentation with Visual Genome (Krishna et al., 2016) question answer annotations is explored. Visual Genome (Krishna et al., 2016) originally provides 1.7 Million visual question answer annotations. After aligning to VQA, the valid number of question-answering pairs for training is 837,298, which is for distinct 99,280 images. 6 RESULTS The six experiments are conducted sequentially. Each experiment determines experimental variables one by one. Refer to Table 1, which has six sectors divided by mid-rules. 6.1 SIX EXPERIMENT RESULTS Number of Learning Blocks Though, MRN (Kim et al., 2016b) has the three-block layered architecture, MARN shows the best performance with two-block layered models (63.92%). For the multiple glimpse models in the next experiment, we choose one-block layered model for its simplicity to extend, and competitive performance (63.79%). Number of Glimpses Compared with the results of Fukui et al. (2016), four-glimpse MARN (64.61%) is better than other comparative models. However, for a parsimonious choice, two-glimpse MARN (64.53%) is chosen for later experiments. We speculate that multiple glimpses are one of key factors for the competitive performance of MCB (Fukui et al., 2016), based on a large margin in accuracy, compared with one-glimpse MARN (63.79%). Non-Linearity The results confirm that activation functions are useful to improve performances. Surprisingly, there is no empirical difference between two options, before-Hadamard product and after-Hadamard product. This result may build a bridge to relate with studies on multiplicative integration with recurrent neural networks (Wu et al., 2016c). Answer Sampling Sampled answers (64.80%) result better performance than mode answers (64.53%). It confirms that the distribution of answers from annotators can be used to improve the performance. However, the number of multiple answers is usually limited due to the cost of data collection. 
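The answer-sampling rule of Equations 12-13 above only moves probability mass to the second most frequent answer when it occurs at least three times among the annotations of a question. A minimal sketch (illustrative only):

```python
from collections import Counter

def answer_sampling_distribution(answers):
    """Equations 12-13: return (a0, p(a0), a1, p(a1)) for one question's annotator answers."""
    counts = Counter(answers).most_common()
    a0 = counts[0][0]                                   # mode (most frequent answer)
    a1 = counts[1][0] if len(counts) > 1 else None      # second most frequent answer
    n_a1 = counts[1][1] if len(counts) > 1 else 0
    p_a1 = n_a1 / len(answers) if n_a1 >= 3 else 0.0    # |a1| / sum_i |a_i| when |a1| >= 3
    return a0, 1.0 - p_a1, a1, p_a1

# e.g. a divided question: 6 annotators say "red", 4 say "orange" -> p(orange) = 0.4
print(answer_sampling_distribution(["red"] * 6 + ["orange"] * 4))
```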
Shortcut Connection Though, MRN (Kim et al., 2016b) effectively uses shortcut connections to improve model performance, one-block layered MARN shows better performance without the shortcut connection. In other words, the residual learning is not used in our proposed model, MLB. It seems that there is a trade-off between introducing attention mechanism and residual learning. We leave a careful study on this trade-off for future work. Data Augmentation Data augmentation using Visual Genome (Krishna et al., 2016) question answer annotations significantly improves the performance by 0.76% in accuracy for VQA test-dev split. Especially, the accuracy of others (ETC)-type answers is notably improved from the data augmentation. 6.2 COMPARISON WITH STATE-OF-THE-ART The comparison with other single models on VQA test-standard is shown in Table 2. The overall accuracy of our model is approximately 1.9% above the next best model (Noh & Han, 2016) on the Open-Ended task of VQA. The major improvements are from yes-or-no (Y/N) and others (ETC)type answers. In Table 3, we also report the accuracy of our ensemble model to compare with other ensemble models on VQA test-standard, which won 1st to 5th places in VQA Challenge 20163. We beat the previous state-of-the-art with a margin of 0.42%. 7 RELATED WORKS MRN (Kim et al., 2016b) proposes multimodal residual learning with Hadamard product of low-rank bilinear pooling. However, their utilization of low-rank bilinear pooling is limited to joint residual mapping function for multimodal residual learning. Higher-order Boltzmann Machines (Memisevic & Hinton, 2007; 2010) use Hadamard product to capture the interactions of input, output, and hidden representations for energy function. Wu et al. (2016c) propose the recurrent neural networks using Hadamard product to integrate multiplicative interactions among hidden representations in the model. For details of these related works, please refer to Appendix D. 3http://visualqa.org/challenge.html Yet, compact bilinear pooling or multimodal compact bilinear pooling (Gao et al., 2016; Fukui et al., 2016) is worth to discuss and carefully compare with our method. 7.1 COMPACT BILINEAR POOLING Compact bilinear pooling (Gao et al., 2016) approximates full bilinear pooling using a samplingbased computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013): Ψ(x⊗ y, h, s) = Ψ(x, h, s) ∗Ψ(y, h, s) (15) = FFT−1(FFT(Ψ(x, h, s) ◦ FFT(Ψ(y, h, s)) (16) where ⊗ denotes outer product, ∗ denotes convolution, Ψ(v, h, s)i := ∑ j:hj=i sj · vj , FFT denotes Fast Fourier Transform, d denotes an output dimension, x, y, h, s ∈ Rn, x and y are inputs, and h and s are random variables. hi is sampled from {1, ..., d}, and si is sampled from {−1, 1}, then, both random variables are fixed for further usage. Even if the dimensions of x and y are different from each other, it can be used for multimodal learning (Fukui et al., 2016). Similarly to Equation 1, compact bilinear pooling can be described as follows: fi = x TWiy (17) whereWijk = sijkwijk if sijk is sampled from {−1, 1},wijk is sampled from {Pi1,Pi2, . . . ,Pid}, and the compact bilinear pooling is followed by a fully connected layer P ∈ R|Ω|×d. Then, this method can be formulated as a hashing trick (Weinberger et al., 2009; Chen et al., 2015) to share randomly chosen bilinear weights using d parameters for a output value, in a way that a single parameter is shared by NM/d bilinear terms in expectation, with the variance of NM(d − 1)/d2 (See Appendix B). 
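For contrast with the method above, the tensor-sketch computation of Equations 15-16 can be sketched in a few lines: project each input with a count sketch under fixed random hash parameters, then convolve the two projections via FFT. This is only an illustrative sketch, not the implementation used in the cited papers:

```python
import numpy as np

def count_sketch(v, h, s, d):
    """Psi(v, h, s): sum the signed entries of v into d hashed bins."""
    out = np.zeros(d)
    np.add.at(out, h, s * v)
    return out

def compact_bilinear(x, y, hx, sx, hy, sy, d):
    """Equations 15-16: approximate the outer product x (x) y by the circular
    convolution of the two count-sketch projections, computed with FFTs."""
    px, py = count_sketch(x, hx, sx, d), count_sketch(y, hy, sy, d)
    return np.real(np.fft.ifft(np.fft.fft(px) * np.fft.fft(py)))

# Example with fixed random hash parameters (they stay fixed after initialization)
rng = np.random.default_rng(0)
nx, ny, d = 32, 32, 128
x, y = rng.normal(size=nx), rng.normal(size=ny)
hx, hy = rng.integers(0, d, nx), rng.integers(0, d, ny)
sx, sy = rng.choice([-1, 1], nx), rng.choice([-1, 1], ny)
phi = compact_bilinear(x, y, hx, sx, hy, sy, d)
```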
In comparison with our method, their method approximates a three-dimensional weight tensor in bilinear pooling with a two-dimensional matrix P, which is larger than the concatenation of three two-dimensional matrices for low-rank bilinear pooling. The ratio of the number of parameters for a single output to the total number of parameters for |Ω| outputs is d/d|Ω| = 1/|Ω| (Fukui et al., 2016), vs. d(N +M + 1)/d(N +M + |Ω|) = (N +M + 1)/(N +M + |Ω|) ≈ 2/3 (ours), since our method uses a three-way factorization. Hence, more parameters are allocated to each bilinear approximation than compact bilinear pooling does, effectively managing overall parameters guided by back-propagation algorithm. MCB (Fukui et al., 2016), which uses compact bilinear pooling for multimodal tasks, needs to set the dimension of output d to 16K, to reduce the bias induced by the fixed random variables h and s. As a result, the majority of model parameters (16K × 3K = 48M) are concentrated on the last fully connected layer, which makes a fan-out structure. So, the total number of parameters of MCB is highly sensitive to the number of classes, which is approximately 69.2M for MCB+att, and 70.5M for MCB+att+GloVe. Yet, the total number of parameters of our proposed model (MLB) is 51.9M, which is more robust to the number of classes having d = 1.2K, which has a similar role in model architecture. 8 CONCLUSIONS We suggest a low-rank bilinear pooling method to replace compact bilinear pooling, which has a fan-out structure, and needs complex computations. Low-rank bilinear pooling has a flexible structure using linear mapping and Hadamard product, and a better parsimonious property, compared with compact bilinear pooling. We achieve new state-of-the-art results on the VQA dataset using a similar architecture of Fukui et al. (2016), replacing compact bilinear pooling with low-rank bilinear pooling. We believe our method could be applicable to other bilinear learning tasks. ACKNOWLEDGMENTS The authors would like to thank Patrick Emaase for helpful comments and editing. Also, we are thankful to anonymous reviewers who provided comments to improve this paper. This work was supported by NAVER LABS Corp. & NAVER Corp. and partly by the Korea government (IITP-R0126-16-1072-SW.StarLab, KEIT10044009-HRI.MESSI, KEIT-10060086-RISF, ADD-UD130070ID-BMRR). The part of computing resources used in this study was generously shared by Standigm Inc. Appendix A EXPERIMENT DETAILS A.1 PREPROCESSING We follow the preprocessing procedure of Kim et al. (2016b). Here, we remark some details of it, and changes. A.1.1 QUESTION EMBEDDING The 90.45% of questions for the 2K-most frequent answers are used. The vocabulary size of questions is 15,031. GRU (Cho et al., 2014) is used for question embedding. Based on earlier studies (Noh et al., 2016; Kim et al., 2016b), a word embedding matrix and a GRU are initialized with Skip-thought Vector pre-trained model (Kiros et al., 2015). As a result, question vectors have 2,400 dimensions. For efficient computation of variable-length questions, Kim et al. (2016a) is used for the GRU. Moreover, for regularization, Bayesian Dropout (Gal, 2015) which is implemented in Léonard et al. (2015) is applied while training. A.2 VISION EMBEDDING ResNet-152 networks (He et al., 2016) are used for feature extraction. The dimensionality of an input image is 3× 448× 448. The outputs of the last convolution layer is used, which have 2, 048× 14× 14 dimensions. 
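A rough back-of-the-envelope version of the classifier-side parameter comparison discussed in Section 7.1 can be written out explicitly. The dimensions below (question dimension N = 2400 and visual dimension M = 2048 from Appendix A, and roughly 3000 answer classes as in the 16K × 3K = 48M figure quoted in the text) are assumptions for illustration only and do not reproduce the full 69.2M vs. 51.9M totals, which include the question encoder and other components:

```python
# MCB-style fan-out: a 16K-dimensional pooled feature followed by a fully connected
# layer over ~3K answer classes (the 48M figure quoted in the text).
d_mcb, num_answers = 16_000, 3_000
mcb_fc_params = d_mcb * num_answers                 # 48,000,000

# MLB-style three-way factorization with joint dimension d = 1.2K:
# U (N x d), V (M x d) and P (d x |Omega|), i.e. d(N + M + |Omega|) as in the text.
N, M, d_mlb = 2_400, 2_048, 1_200
mlb_pool_params = d_mlb * (N + M + num_answers)     # roughly 8.9M for this part of the model

print(mcb_fc_params, mlb_pool_params)
```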
A.3 HYPERPARAMETERS The hyperparameters used in MLB of Table 2 are described in Table 4. The batch size is 100, and the number of iterations is fixed to 250K. For data augmented models, a simplified early stopping is used, starting from 250K to 350K-iteration for every 25K iterations (250K, 275K, 300K, 325K, and 350K; at most five points) to avoid exhaustive submissions to VQA test-dev evaluation server. RMSProp (Tieleman & Hinton, 2012) is used for optimization. Though, the size of joint embedding size d is borrowed from Kim et al. (2016b), a grid search on d confirms this choice in our model as shown in Table 5. A.4 MODEL SCHEMA Figure 1 shows a schematic diagram of MLB, where ◦ denotes Hadamard product, and Σ denotes a linear combination of visual feature vectors using coefficients, which is the output of softmax function. If G > 1, the softmax function is applied to each row vectors of an output matrix (Equation 8), and we concatenate the resulting vectors of the G linear combinations (Equation 9). A.5 ENSEMBLE OF SEVEN MODELS The test-dev results for individual models consisting of our ensemble model is presented in Table 6. MODEL GLIMPSE ALL Y/N NUM ETC LinearLinear B UNDERSTANDING OF MULTIMODAL COMPACT BILINEAR POOLING In this section, the algorithm of multimodal compact bilinear pooling (MCB) (Gao et al., 2016; Fukui et al., 2016) is described as a kind of hashing tick (Chen et al., 2015). x ∈ Rnx and y ∈ Rny are the given inputs, Φ(x,y) ∈ Rd is the output. Random variables hx ∈ Nnx and hy ∈ Nny are uniformly sampled from {1, . . . , d}, and sx ∈ Znx and sy ∈ Zny are uniformly sampled from {−1, 1}. Then, Count Sketch projection function Ψ (Charikar et al., 2002) projects x and y to intermediate representations Ψ(x,hx, sx) ∈ Rd and Ψ(y,hy, sy) ∈ Rd, which is defined as: Ψ(v,h, s)i := ∑ j:hj=i sj · vj (18) Notice that both h and s remain as constants after initialization (Fukui et al., 2016). The probability of hxj = i and hyj = i for the given j is 1/d2. Hence, the expected number of bilinear terms in Ψ(x,hx, sx)iΨ(y,hy, sy)i is (nxny)/d2. Since, the output Φ(x,y) is a result of circular convolution of Ψ(x,hx, sx) and Ψ(y,hy, sy), the expected number of bilinear terms in Φ(x,y)i is (nxny)/d. Likewise, the probability of that a bilinear term is allocated in Φ(x,y)i is 1/d. The probability distribution of the number of bilinear terms in Φ(x,y)i follows a multinomial distribution, whose mean is (nxny)/d and variance is (nxny)(d− 1)/d2. Linear projection after the multimodal compact bilinear pooling provides weights on the bilinear terms, in a way that a shared weight is assigned to Φ(x,y)i, which has (nxny)/d bilinear terms in expectation, though each bilinear term can have a different sign induced by both sx and sy . HashedNets (Chen et al., 2015) propose a method to compress neural networks using a low-cost hashing function (Weinberger et al., 2009), which is the same function of Ψ(v,h, s). They randomly group a portion of connections in neural networks to share a single weight. We speculate that multimodal compact bilinear pooling uses the hashing tick to reduce the number of full bilinear weights with the rate of d/(nxny). However, this approximation is limited to two-way interaction, compared with three-way factorization in our method. 
C REPLACEMENT OF LOW-RANK BILINEAR POOLING For the explicit comparison with compact bilinear pooling, we explicitly substitute compact bilinear pooling for low-rank bilinear pooling to control everything else, which means that the rest of the model architecture is exactly the same. According to Fukui et al. (2016), we use MCB followed by Signed Square Root, L2-Normalization, Dropout (p=0.1), and linear projection from 16,000-dimension to the target dimension. Also, Dropout (p=0.3) for a question embedding vector. Note that an overall architecture for multimodal learning of both is the same. Experimental details are referenced from the implementation 4 of Fukui et al. (2016). For test-dev split, our version of MCB gets 61.48% for overall accuracy (yes/no: 82.48%, number: 37.06%, and other: 49.07%) vs. 65.08% (ours, MLB in Table 1). Additionally, if the nonlinearity in getting attention distributions is increased as the original MCB does using ReLU, we get 62.11% for overall accuracy (yes/no: 82.55%, number: 37.18%, and other: 50.30%), which is still the below of our performance 5. We do not see it as a decisive evidence of the better performance of MLB, but as a reference (the comparison of test-dev results may be also unfair.), since an optimal architecture and hyperparameters may be required for each method. 4https://github.com/akirafukui/vqa-mcb 5Our version of MCB definition can be found in https://github.com/jnhwkim/MulLowBiVQA/ blob/master/netdef/MCB.lua D RELATED WORKS D.1 MULTIMODAL RESIDUAL NETWORKS MRN (Kim et al., 2016b) is an implicit attentional model using multimodal residual learning with Hadamard product which does not have any explicit attention mechanism. F (k)(q,v) = σ(W(k)q q) ◦ σ(W (k) 2 σ(W (k) 1 v)) (19) HL(q,v) = Wq′q + L∑ l=1 WF(l)F (l)(Hl−1,v) (20) where W∗ are parameter matrices, L is the number of learning blocks, H0 = q, Wq′ = ΠLl=1W (l) q′ , and WF(l) = Π L m=l+1W (m) q′ . Notice that these equations can be generalized by Equation 7. However, an explicit attention mechanism allows the use of lower-level visual features than fully-connected layers, and, more importantly, spatially selective learning. Recent state-of-the-art methods use a variant of an explicit attention mechanism in their models (Lu et al., 2016; Noh & Han, 2016; Fukui et al., 2016). Note that shortcut connections of MRN are not used in the proposed Multimodal Low-rank Bilinear (MLB) model. Since, it does not have any performance gain due to not stacking multiple layers in MLB. We leave the study of residual learning for MLB for future work, which may leverage the excellency of bilinear models as suggested in Wu et al. (2016a). D.2 HIGHER-ORDER BOLTZMANN MACHINES A similar model can be found in a study of Higher-Order Boltzmann Machines (Memisevic & Hinton, 2007; 2010). They suggest a factoring method for the three-way energy function to capture correlations among input, output, and hidden representations. −E(y,h;x) = ∑ f (∑ i xiw x if )(∑ j yjw y jf )(∑ k hkw h kf ) + ∑ k whkhk + ∑ j wyj yj = ( xTWx ◦ yTWy ◦ hTWh ) 1+ hTwh + yTwy (21) Setting aside of bias terms, the I × J ×K parameter tensor of unfactored Higher-Order Boltzmann Machines is replaced with three matrices, Wx ∈ RI×F , Wy ∈ RJ×F , and Wh ∈ RK×F . 
D.3 MULTIPLICATIVE INTEGRATION WITH RECURRENT NEURAL NETWORKS Most of recurrent neural networks, including vanilla RNNs, Long Short Term Memory networks (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (Cho et al., 2014), share a common expression as follows: φ(Wx + Uh + b) (22) where φ is a non-linear function, W ∈ Rd×n, x ∈ Rn, U ∈ Rd×m, h ∈ Rm, and b ∈ Rd is a bias vector. Note that, usually, x is an input state vector and h is an hidden state vector in recurrent neural networks. Wu et al. (2016c) propose a new design to replace the additive expression with a multiplicative expression using Hadamard product as φ(Wx ◦Uh + b). (23) Moreover, a general formulation of this multiplicative integration can be described as φ(α ◦Wx ◦Uh + Wx ◦ β1 + Uh ◦ β2 + b) (24) which is reminiscent of full model in Section 3.1.
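The contrast between additive and multiplicative integration in Equations 22-23 amounts to a one-line change. A minimal sketch with illustrative names:

```python
import numpy as np

def rnn_step_additive(x, h, W, U, b):
    """Equation 22: phi(Wx + Uh + b) with phi = tanh."""
    return np.tanh(W @ x + U @ h + b)

def rnn_step_multiplicative(x, h, W, U, b):
    """Equation 23: phi(Wx o Uh + b) -- Hadamard product instead of addition."""
    return np.tanh((W @ x) * (U @ h) + b)
```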
1. What are the main contributions of the paper regarding low-rank bilinear pooling?
2. What are the strengths of the paper, particularly in providing new insights into element-wise multiplication?
3. What are the weaknesses of the paper, especially regarding the comparison with compact bilinear pooling and statistical significance?
4. How does the reduction in parameters help experimentally?
5. What are the differences between MRN, MARN, and MLB?
6. Are there any concerns or suggestions regarding the presentation of the paper, such as the caption for Table 1?
Review
Review
Summary: The paper presents low-rank bilinear pooling that uses the Hadamard product (commonly known as element-wise multiplication). The paper implements low-rank bilinear pooling on an existing model (Kim et al., 2016b) and builds a model for Visual Question Answering (VQA) that outperforms the current state-of-art by 0.42%. The paper presents various ablation studies of the new VQA model they built.
Strengths:
1. The paper presents new insights into the element-wise multiplication operation, which has been previously used in the VQA literature (such as Antol et al., ICCV 2015) without insights on why it should work.
2. The paper presents a new model for the task of VQA that beats the current state-of-art by 0.42%. However, I have concerns about the statistical significance of the performance (see weaknesses below).
3. The various design choices made in model development have been experimentally verified.
Weaknesses/Suggestions:
1. When the authors explicitly (keeping the rest of the model architecture the same) compared low-rank bilinear pooling with compact bilinear pooling, they found that low-rank bilinear pooling performs worse. Hence, it could not be experimentally verified that low-rank bilinear pooling is better in performance than compact bilinear pooling (at least for the task of VQA).
2. The authors argue that low-rank bilinear pooling uses 25% fewer parameters than compact bilinear pooling. So, could the authors please explain how the reduction in the number of parameters helps experimentally? Does the training time of the model reduce significantly? Can we train the model with less data?
3. One of the contributions of the paper is that the proposed model outperforms the current state-of-art on VQA by 0.42%. However, I am skeptical that the performance of the proposed model is statistically significantly better than the current state-of-art.
4. I would like the authors to explicitly mention the differences between MRN, MARN and MLB. It is not very clear from reading the paper.
5. In the caption for Table 1, fix the following: "have not" -> "have no"
Review Summary: I like the insights about low-rank bilinear pooling using the Hadamard product (element-wise multiplication) presented in the paper. However, it could not be justified that low-rank bilinear pooling leads to better performance than compact bilinear pooling. It does lead to a reduction in the number of parameters, but it is not clear how much that helps experimentally. So, to be more convinced, I would like the authors to provide experimental justification of why low-rank bilinear pooling is better than other forms of pooling.
ICLR
Title Hadamard Product for Low-rank Bilinear Pooling Abstract Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property. N/A Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property. 1 INTRODUCTION Bilinear models (Tenenbaum & Freeman, 2000) provide richer representations than linear models. To exploit this advantage, fully-connected layers in neural networks can be replaced with bilinear pooling. The outer product of two vectors (or Kroneker product for matrices) is involved in bilinear pooling, as a result of this, all pairwise interactions among given features are considered. Recently, a successful application of this technique is used for fine-grained visual recognition (Lin et al., 2015). However, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain a model structure and computational resources. For example, an outer product of two feature vectors, both of which have 1K-dimensionality, produces a million-dimensional feature vector. Therefore, for classification problems, the choice of the number of target classes is severely constrained, because the number of parameters for a standard linear classifier is determined by multiplication of the size of the high-dimensional feature vector and the number of target classes. Compact bilinear pooling (Gao et al., 2016) reduces the quadratic expansion of dimensionality by two orders of magnitude, retaining the performance of the full bilinear pooling. This approximation uses sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013), which utilizes an useful property that Ψ(x⊗ y, h, s) = Ψ(x, h, s) ∗Ψ(y, h, s), which means the projection of outer product of two vectors is the convolution of two projected vectors. Here, Ψ is the proposed projection function, and, h and s are randomly sampled parameters by the algorithm. Nevertheless, compact bilinear pooling embraces two shortcomings. One comes from the sampling approach. Compact bilinear pooling relies on a favorable property, E[〈Ψ(x, h, s),Ψ(y, h, s)〉] = 〈x, y〉, which provides a basis to use projected features instead of original features. 
Yet, calculating the exact expectation is computationally intractable, so, the random parameters, h and s are fixed during training and evaluation. This practical choice leads to the second. The projected dimension of compact bilinear pooling should be large enough to minimize the bias from the fixed parameters. Practical choices are 10K and 16K for 512 and 4096-dimensional inputs, respectively (Gao et al., 2016; Fukui et al., 2016). Though, these compacted dimensions are reduced ones by two orders of magnitude compared with full bilinear pooling, such high-dimensional features could be a bottleneck for computationally complex models. We propose low-rank bilinear pooling using Hadamard product (element-wise multiplication), which is commonly used in various scientific computing frameworks as one of tensor operations. The proposed method factors a three-dimensional weight tensor for bilinear pooling into three twodimensional weight matrices, which enforces the rank of the weight tensor to be low-rank. As a result, two input feature vectors linearly projected by two weight matrices, respectively, are computed by Hadamard product, then, followed by a linear projection using the third weight matrix. For example, the projected vector z is represented by WTz (W T xx ◦WTyy), where ◦ denotes Hadamard product. We also explore to add non-linearity using non-linear activation functions into the low-rank bilinear pooling, and shortcut connections inspired by deep residual learning (He et al., 2016). Then, we show that it becomes a simple baseline model (Antol et al., 2015) or one-learning block of Multimodal Residual Networks (Kim et al., 2016b) as a low-rank bilinear model, yet, this interpretation has not be done. Our contributions are as follows: First, we propose low-rank bilinear pooling to approximate full bilinear pooling to substitute compact bilinear pooling. Second, Multimodal Low-rank Bilinear Attention Networks (MLB) having an efficient attention mechanism using low-rank bilinear pooling is proposed for visual question-answering tasks. MLB achieves a new state-of-the-art performance, and has a better parsimonious property. Finally, ablation studies to explore alternative choices, e.g. network depth, non-linear functions, and shortcut connections, are conducted. 2 LOW-RANK BILINEAR MODEL Bilinear models use a quadratic expansion of linear transformation considering every pair of features. fi = N∑ j=1 M∑ k=1 wijkxjyk + bi = x TWiy + bi (1) where x and y are input vectors, Wi ∈ RN×M is a weight matrix for the output fi, and bi is a bias for the output fi. Notice that the number of parameters is L× (N ×M + 1) including a bias vector b, where L is the number of output features. Pirsiavash et al. (2009) suggest a low-rank bilinear method to reduce the rank of the weight matrix Wi to have less number of parameters for regularization. They rewrite the weight matrix as Wi = UiV T i where Ui ∈ RN×d and Vi ∈ RM×d, which imposes a restriction on the rank of Wi to be at most d ≤ min(N,M). Based on this idea, fi can be rewritten as follows: fi = x TWiy + bi = x TUiV T i y + bi = 1 T (UTi x ◦VTi y) + bi (2) where 1 ∈ Rd denotes a column vector of ones, and ◦ denotes Hadamard product. Still, we need two third-order tensors, U and V, for a feature vector f , whose elements are {fi}. To reduce the order of the weight tensors by one, we replace 1 with P ∈ Rd×c and bi with b ∈ Rc, then, redefine as U ∈ RN×d and V ∈ RM×d to get a projected feature vector f ∈ Rc. 
Then, we get: f = PT (UTx ◦VTy) + b (3) where d and c are hyperparameters to decide the dimension of joint embeddings and the output dimension of low-rank bilinear models, respectively. 3 LOW-RANK BILINEAR POOLING A low-rank bilinear model in Equation 3 can be implemented using two linear mappings without biases for embedding two input vectors, Hadamard product to learn joint representations in a multiplicative way, and a linear mapping with a bias to project the joint representations into an output vector for a given output dimension. Then, we use this structure as a pooling method for deep neural networks. Now, we discuss possible variations of low-rank bilinear pooling based on this model inspired by studies of neural networks. 3.1 FULL MODEL In Equation 3, linear projections, U and V , can have their own bias vectors. As a result, linear models for each input vectors, x and y, are integrated in an additive form, called as full model for linear regression in statistics: f = PT ( (UTx + bx) ◦ (VTy + by) ) + b = PT (UTx ◦VTy + U′Tx + V′Ty) + b′. (4) Here, U′T = diag(by) ·UT , V′T = diag(bx) ·VT , and b′ = b + PT (bx ◦ by). 3.2 NONLINEAR ACTIVATION Applying non-linear activation functions may help to increase representative capacity of model. The first candidate is to apply non-linear activation functions right after linear mappings for input vectors. f = PT ( σ(UTx) ◦ σ(VTy) ) + b (5) where σ denotes an arbitrary non-linear activation function, which maps any real values into a finite interval, e.g. sigmoid or tanh. If two inputs come from different modalities, statistics of two inputs may be quite different from each other, which may result an interference. Since the gradient with respect to each input is directly dependent on the other input in Hadamard product of two inputs. Additional applying an activation function after the Hadamard product is not appropriate, since activation functions doubly appear in calculating gradients. However, applying the activation function only after the Hadamard product would be alternative choice (We explore this option in Section 5) as follows: f = PTσ ( UTx ◦VTy ) + b. (6) Note that using the activation function in low-rank bilinear pooling can be found in an implementation of simple baseline for the VQA dataset (Antol et al., 2015) without an interpretation of low-rank bilinear pooling. However, notably, Wu et al. (2016c) studied learning behavior of multiplicative integration in RNNs with discussions and empirical evidences. 3.3 SHORTCUT CONNECTION When we apply two previous techniques, full model and non-linear activation, linear models of two inputs are nested by the non-linear activation functions. To avoid this unfortunate situation, we add shortcut connections as explored in residual learning (He et al., 2016). f = PT ( σ(UTx) ◦ σ(VTy) ) + hx(x) + hy(y) + b (7) where hx and hy are shortcut mappings. For linear projection, the shortcut mappings are linear mappings. Notice that this formulation is a generalized form of the one-block layered MRN (Kim et al., 2016b). Though, the shortcut connections are not used in our proposed model, as explained in Section 6. 4 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS In this section, we apply low-rank bilinear pooling to propose an efficient attention mechanism for visual question-answering tasks, based on the interpretation of previous section. We assumed that inputs are a question embedding vector q and a set of visual feature vectors F over S × S lattice space. 
4.1 LOW-RANK BILINEAR POOLING IN ATTENTION MECHANISM Attention mechanism uses an attention probability distribution α over S × S lattice space. Here, using low-rank bilinear pooling, α is defined as α = softmax ( PTα ( σ(UTqq · 1T ) ◦ σ(VTFFT ) )) (8) where α ∈ RG×S2 , Pα ∈ Rd×G, σ is a hyperbolic tangent function, Uq ∈ RN×d, q ∈ RN , 1 ∈ RS2 , VF ∈ RM×d, and F ∈ RS 2×M . If G > 1, multiple glimpses are explicitly expressed as in Fukui et al. (2016), conceptually similar to Jaderberg et al. (2015). And, the softmax function applies to each row vector of α. The bias terms are omitted for simplicity. 4.2 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS Attended visual feature v̂ is a linear combination of Fi with coefficients αg,i. Each attention probability distribution αg is for a glimpse g. For G > 1, v̂ is the concatenation of resulting vectors v̂g as v̂ = Gn g=1 S2∑ s=1 αg,sFs (9) where f denotes concatenation of vectors. The posterior probability distribution is an output of a softmax function, whose input is the result of another low-rank bilinear pooling of q and v̂ as p(a|q,F; Θ) = softmax ( PTo ( σ(WTqq) ◦ σ(VTv̂ v̂) )) (10) â = arg max a∈Ω p(a|q,F; Θ) (11) where â denotes a predicted answer, Ω is a set of candidate answers and Θ is an aggregation of entire model parameters. 5 EXPERIMENTS In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls other factors except one factor to assess the effect on accuracies. Based on MRN (Kim et al., 2016b), we start our assessments with an initial option of G = 1 and shortcut connections of MRN, called as Multimodal Attention Residual Networks (MARN). Notice that we use one embeddings for each visual feature for better performance, based on our preliminary experiment (not shown). We attribute this choice to the attention mechanism for visual features, which provides more capacity to learn visual features. We use the same hyper-parameters of MRN (Kim et al., 2016b), without any explicit mention of this. The VQA dataset (Antol et al., 2015) is used as a primary dataset, and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used. Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split. For the comprehensive reviews of VQA tasks, please refer to Wu et al. (2016a) and Kafle & Kanan (2016a). The details about preprocessing, question and vision embedding, and hyperparameters used in our experiments are described in Appendix A. The source code for the experiments is available in Github repository1. Number of Learning Blocks Kim et al. (2016b) argue that three-block layered MRN shows the best performance among one to four-block layered models, taking advantage of residual learning. However, we speculate that an introduction of attention mechanism makes deep networks hard to optimize. Therefore, we explore the number of learning blocks of MARN, which have an attention mechanism using low-rank bilinear pooling. Number of Glimpses Fukui et al. (2016) show that the attention mechanism of two glimpses was an optimal choice. In a similar way, we assess one, two, and four-glimpse models. 1https://github.com/jnhwkim/MulLowBiVQA Non-Linearity We assess three options applying non-linearity on low-rank bilinear pooling, vanilla, before Hadamard product as in Equation 5, and after Hadamard product as in Equation 6. 
Answer Sampling VQA (Antol et al., 2015) dataset has ten answers from unique persons for each question, while Visual Genome (Krishna et al., 2016) dataset has a single answer for each question. Since difficult or ambiguous questions may have divided answers, the probabilistic sampling from the distribution of answers can be utilized to optimize for the multiple answers. An instance 2 can be found in Fukui et al. (2016). We simplify the procedure as follows: p(a1) = { |a1|/Σi|ai|, if |a1| ≥ 3 0, otherwise (12) p(a0) = 1− p(a1) (13) where |ai| denotes the number of unique answer ai in a set of multiple answers, a0 denotes a mode, which is the most frequent answer, and a1 denotes the secondly most frequent answer. We define the divided answers as having at least three answers which are the secondly frequent one, for the evaluation metric of VQA (Antol et al., 2015), accuracy(ak) = min (|ak|/3, 1) . (14) 2https://github.com/akirafukui/vqa-mcb/blob/5fea8/train/multi_att_2_ glove/vqa_data_provider_layer.py#L130 The rate of the divided answers is approximately 16.40%, and only 0.23% of questions have more than two divided answers in VQA dataset. We assume that it eases the difficulty of convergence without severe degradation of performance. Shortcut Connection The contribution of shortcut connections for residual learning is explored based on the observation of the competitive performance of single-block layered model. Since the usefulness of shortcut connections is linked to the network depth (He et al., 2016). Data Augmentation The data augmentation with Visual Genome (Krishna et al., 2016) question answer annotations is explored. Visual Genome (Krishna et al., 2016) originally provides 1.7 Million visual question answer annotations. After aligning to VQA, the valid number of question-answering pairs for training is 837,298, which is for distinct 99,280 images. 6 RESULTS The six experiments are conducted sequentially. Each experiment determines experimental variables one by one. Refer to Table 1, which has six sectors divided by mid-rules. 6.1 SIX EXPERIMENT RESULTS Number of Learning Blocks Though, MRN (Kim et al., 2016b) has the three-block layered architecture, MARN shows the best performance with two-block layered models (63.92%). For the multiple glimpse models in the next experiment, we choose one-block layered model for its simplicity to extend, and competitive performance (63.79%). Number of Glimpses Compared with the results of Fukui et al. (2016), four-glimpse MARN (64.61%) is better than other comparative models. However, for a parsimonious choice, two-glimpse MARN (64.53%) is chosen for later experiments. We speculate that multiple glimpses are one of key factors for the competitive performance of MCB (Fukui et al., 2016), based on a large margin in accuracy, compared with one-glimpse MARN (63.79%). Non-Linearity The results confirm that activation functions are useful to improve performances. Surprisingly, there is no empirical difference between two options, before-Hadamard product and after-Hadamard product. This result may build a bridge to relate with studies on multiplicative integration with recurrent neural networks (Wu et al., 2016c). Answer Sampling Sampled answers (64.80%) result better performance than mode answers (64.53%). It confirms that the distribution of answers from annotators can be used to improve the performance. However, the number of multiple answers is usually limited due to the cost of data collection. 
Shortcut Connection Though, MRN (Kim et al., 2016b) effectively uses shortcut connections to improve model performance, one-block layered MARN shows better performance without the shortcut connection. In other words, the residual learning is not used in our proposed model, MLB. It seems that there is a trade-off between introducing attention mechanism and residual learning. We leave a careful study on this trade-off for future work. Data Augmentation Data augmentation using Visual Genome (Krishna et al., 2016) question answer annotations significantly improves the performance by 0.76% in accuracy for VQA test-dev split. Especially, the accuracy of others (ETC)-type answers is notably improved from the data augmentation. 6.2 COMPARISON WITH STATE-OF-THE-ART The comparison with other single models on VQA test-standard is shown in Table 2. The overall accuracy of our model is approximately 1.9% above the next best model (Noh & Han, 2016) on the Open-Ended task of VQA. The major improvements are from yes-or-no (Y/N) and others (ETC)type answers. In Table 3, we also report the accuracy of our ensemble model to compare with other ensemble models on VQA test-standard, which won 1st to 5th places in VQA Challenge 20163. We beat the previous state-of-the-art with a margin of 0.42%. 7 RELATED WORKS MRN (Kim et al., 2016b) proposes multimodal residual learning with Hadamard product of low-rank bilinear pooling. However, their utilization of low-rank bilinear pooling is limited to joint residual mapping function for multimodal residual learning. Higher-order Boltzmann Machines (Memisevic & Hinton, 2007; 2010) use Hadamard product to capture the interactions of input, output, and hidden representations for energy function. Wu et al. (2016c) propose the recurrent neural networks using Hadamard product to integrate multiplicative interactions among hidden representations in the model. For details of these related works, please refer to Appendix D. 3http://visualqa.org/challenge.html Yet, compact bilinear pooling or multimodal compact bilinear pooling (Gao et al., 2016; Fukui et al., 2016) is worth to discuss and carefully compare with our method. 7.1 COMPACT BILINEAR POOLING Compact bilinear pooling (Gao et al., 2016) approximates full bilinear pooling using a samplingbased computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013): Ψ(x⊗ y, h, s) = Ψ(x, h, s) ∗Ψ(y, h, s) (15) = FFT−1(FFT(Ψ(x, h, s) ◦ FFT(Ψ(y, h, s)) (16) where ⊗ denotes outer product, ∗ denotes convolution, Ψ(v, h, s)i := ∑ j:hj=i sj · vj , FFT denotes Fast Fourier Transform, d denotes an output dimension, x, y, h, s ∈ Rn, x and y are inputs, and h and s are random variables. hi is sampled from {1, ..., d}, and si is sampled from {−1, 1}, then, both random variables are fixed for further usage. Even if the dimensions of x and y are different from each other, it can be used for multimodal learning (Fukui et al., 2016). Similarly to Equation 1, compact bilinear pooling can be described as follows: fi = x TWiy (17) whereWijk = sijkwijk if sijk is sampled from {−1, 1},wijk is sampled from {Pi1,Pi2, . . . ,Pid}, and the compact bilinear pooling is followed by a fully connected layer P ∈ R|Ω|×d. Then, this method can be formulated as a hashing trick (Weinberger et al., 2009; Chen et al., 2015) to share randomly chosen bilinear weights using d parameters for a output value, in a way that a single parameter is shared by NM/d bilinear terms in expectation, with the variance of NM(d − 1)/d2 (See Appendix B). 
In comparison with our method, their method approximates the three-dimensional weight tensor in bilinear pooling with a two-dimensional matrix P, which is larger than the concatenation of the three two-dimensional matrices used in low-rank bilinear pooling. The ratio of the number of parameters for a single output to the total number of parameters for |Ω| outputs is d / (d|Ω|) = 1/|Ω| (Fukui et al., 2016), vs. d(N + M + 1) / (d(N + M + |Ω|)) = (N + M + 1) / (N + M + |Ω|) ≈ 2/3 (ours), since our method uses a three-way factorization. Hence, more parameters are allocated to each bilinear approximation than in compact bilinear pooling, and the overall number of parameters is managed effectively under the guidance of the back-propagation algorithm.

MCB (Fukui et al., 2016), which uses compact bilinear pooling for multimodal tasks, needs to set the output dimension d to 16K to reduce the bias induced by the fixed random variables h and s. As a result, the majority of the model parameters (16K × 3K = 48M) are concentrated in the last fully connected layer, which creates a fan-out structure. Consequently, the total number of parameters of MCB is highly sensitive to the number of classes: approximately 69.2M for MCB+att and 70.5M for MCB+att+GloVe. In contrast, the total number of parameters of our proposed model (MLB) is 51.9M, which is more robust to the number of classes, with d = 1.2K playing a similar role in the model architecture.

8 CONCLUSIONS We suggest a low-rank bilinear pooling method to replace compact bilinear pooling, which has a fan-out structure and requires complex computations. Low-rank bilinear pooling has a flexible structure using linear mappings and the Hadamard product, and a better parsimonious property, compared with compact bilinear pooling. We achieve new state-of-the-art results on the VQA dataset using an architecture similar to that of Fukui et al. (2016), replacing compact bilinear pooling with low-rank bilinear pooling. We believe our method could be applicable to other bilinear learning tasks.

ACKNOWLEDGMENTS The authors would like to thank Patrick Emaase for helpful comments and editing. Also, we are thankful to the anonymous reviewers who provided comments to improve this paper. This work was supported by NAVER LABS Corp. & NAVER Corp. and partly by the Korea government (IITP-R0126-16-1072-SW.StarLab, KEIT-10044009-HRI.MESSI, KEIT-10060086-RISF, ADD-UD130070ID-BMRR). Part of the computing resources used in this study was generously shared by Standigm Inc.

Appendix

A EXPERIMENT DETAILS

A.1 PREPROCESSING We follow the preprocessing procedure of Kim et al. (2016b). Here, we note some of its details and our changes.

A.1.1 QUESTION EMBEDDING 90.45% of the questions, covering the 2K most frequent answers, are used. The vocabulary size of the questions is 15,031. A GRU (Cho et al., 2014) is used for question embedding. Based on earlier studies (Noh et al., 2016; Kim et al., 2016b), the word embedding matrix and the GRU are initialized with the Skip-thought Vector pre-trained model (Kiros et al., 2015). As a result, question vectors have 2,400 dimensions. For efficient computation over variable-length questions, Kim et al. (2016a) is used for the GRU. Moreover, for regularization, Bayesian Dropout (Gal, 2015), as implemented in Léonard et al. (2015), is applied during training.

A.2 VISION EMBEDDING ResNet-152 networks (He et al., 2016) are used for feature extraction. The dimensionality of an input image is 3 × 448 × 448. The output of the last convolution layer is used, which has 2,048 × 14 × 14 dimensions.
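To make the parameter comparison of Section 7.1 concrete with the dimensions reported above (2,400-dimensional question vectors, 2,048-dimensional visual features, the 2K most frequent answers), here is a rough, illustrative count of the joint-embedding and classifier weights only; it ignores the question encoder, attention layers, and biases, so the totals are deliberately smaller than the full-model figures quoted in the paper:

```python
# Rough, illustrative weight counts (no biases, no question encoder/attention).
N, M = 2400, 2048      # question vector / visual feature dimensions
num_answers = 2000     # |Omega|: the 2K most frequent answers

# MLB joint embedding: U (N x d), V (M x d), classifier P (d x |Omega|), d = 1.2K.
d_mlb = 1200
mlb = N * d_mlb + M * d_mlb + d_mlb * num_answers

# MCB: the count sketch itself has no learned weights, but the classifier on top
# of the 16K-dimensional sketch is a 16K x |Omega| matrix (the fan-out structure);
# with the ~3K answers used by Fukui et al. this is the 48M quoted in Section 7.1.
d_mcb = 16000
mcb = d_mcb * num_answers

print(f"MLB joint embedding + classifier: {mlb / 1e6:.1f}M weights")  # ~7.7M
print(f"MCB classifier alone:             {mcb / 1e6:.1f}M weights")  # ~32.0M
```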
A.3 HYPERPARAMETERS The hyperparameters used in the MLB of Table 2 are described in Table 4. The batch size is 100, and the number of iterations is fixed to 250K. For the data-augmented models, a simplified early stopping is used, evaluating from 250K to 350K iterations at every 25K iterations (250K, 275K, 300K, 325K, and 350K; at most five points) to avoid exhaustive submissions to the VQA test-dev evaluation server. RMSProp (Tieleman & Hinton, 2012) is used for optimization. Although the joint embedding size d is borrowed from Kim et al. (2016b), a grid search on d confirms this choice in our model, as shown in Table 5.

A.4 MODEL SCHEMA Figure 1 shows a schematic diagram of MLB, where ◦ denotes the Hadamard product and Σ denotes a linear combination of visual feature vectors using coefficients, which are the output of the softmax function. If G > 1, the softmax function is applied to each row vector of the output matrix (Equation 8), and we concatenate the resulting vectors of the G linear combinations (Equation 9).

A.5 ENSEMBLE OF SEVEN MODELS The test-dev results for the individual models constituting our ensemble model are presented in Table 6 (per-model accuracies for the All, Y/N, Num, and ETC answer types).

B UNDERSTANDING OF MULTIMODAL COMPACT BILINEAR POOLING In this section, the algorithm of multimodal compact bilinear pooling (MCB) (Gao et al., 2016; Fukui et al., 2016) is described as a kind of hashing trick (Chen et al., 2015). x ∈ R^{n_x} and y ∈ R^{n_y} are the given inputs, and Φ(x, y) ∈ R^d is the output. Random variables h_x ∈ N^{n_x} and h_y ∈ N^{n_y} are uniformly sampled from {1, ..., d}, and s_x ∈ Z^{n_x} and s_y ∈ Z^{n_y} are uniformly sampled from {−1, 1}. Then, the Count Sketch projection function Ψ (Charikar et al., 2002) projects x and y to intermediate representations Ψ(x, h_x, s_x) ∈ R^d and Ψ(y, h_y, s_y) ∈ R^d, defined as:

Ψ(v, h, s)_i := \sum_{j: h_j = i} s_j · v_j  (18)

Notice that both h and s remain constant after initialization (Fukui et al., 2016). The probability that h_{xj} = i and h_{yj} = i for a given j is 1/d^2. Hence, the expected number of bilinear terms in Ψ(x, h_x, s_x)_i Ψ(y, h_y, s_y)_i is (n_x n_y)/d^2. Since the output Φ(x, y) is the result of a circular convolution of Ψ(x, h_x, s_x) and Ψ(y, h_y, s_y), the expected number of bilinear terms in Φ(x, y)_i is (n_x n_y)/d. Likewise, the probability that a given bilinear term is allocated to Φ(x, y)_i is 1/d. The distribution of the number of bilinear terms in Φ(x, y)_i follows a multinomial distribution, whose mean is (n_x n_y)/d and variance is (n_x n_y)(d − 1)/d^2.

The linear projection after multimodal compact bilinear pooling provides weights on the bilinear terms, in such a way that a shared weight is assigned to Φ(x, y)_i, which has (n_x n_y)/d bilinear terms in expectation, though each bilinear term can have a different sign induced by s_x and s_y.

HashedNets (Chen et al., 2015) propose a method to compress neural networks using a low-cost hashing function (Weinberger et al., 2009), which is the same function as Ψ(v, h, s). They randomly group a portion of the connections in a neural network to share a single weight. We speculate that multimodal compact bilinear pooling uses the hashing trick to reduce the number of full bilinear weights at a rate of d/(n_x n_y). However, this approximation is limited to two-way interactions, compared with the three-way factorization in our method.
C REPLACEMENT OF LOW-RANK BILINEAR POOLING For an explicit comparison with compact bilinear pooling, we substitute compact bilinear pooling for low-rank bilinear pooling while controlling everything else, which means that the rest of the model architecture is exactly the same. Following Fukui et al. (2016), we use MCB followed by Signed Square Root, L2-Normalization, Dropout (p=0.1), and a linear projection from 16,000 dimensions to the target dimension. Dropout (p=0.3) is also applied to the question embedding vector. Note that the overall architecture for multimodal learning is the same for both. Experimental details are taken from the implementation4 of Fukui et al. (2016). On the test-dev split, our version of MCB gets 61.48% overall accuracy (yes/no: 82.48%, number: 37.06%, other: 49.07%) vs. 65.08% (ours, MLB in Table 1). Additionally, if the non-linearity in computing the attention distributions is increased as the original MCB does using ReLU, we get 62.11% overall accuracy (yes/no: 82.55%, number: 37.18%, other: 50.30%), which is still below our performance.5 We do not see this as decisive evidence of the better performance of MLB, but rather as a reference (the comparison of test-dev results may also be unfair), since an optimal architecture and hyperparameters may be required for each method.

4 https://github.com/akirafukui/vqa-mcb
5 Our version of the MCB definition can be found in https://github.com/jnhwkim/MulLowBiVQA/blob/master/netdef/MCB.lua

D RELATED WORKS

D.1 MULTIMODAL RESIDUAL NETWORKS MRN (Kim et al., 2016b) is an implicit attentional model using multimodal residual learning with the Hadamard product, which does not have any explicit attention mechanism.

F^{(k)}(q, v) = σ(W_q^{(k)} q) ◦ σ(W_2^{(k)} σ(W_1^{(k)} v))  (19)
H_L(q, v) = W_{q'} q + \sum_{l=1}^{L} W_{F^{(l)}} F^{(l)}(H_{l−1}, v)  (20)

where the W_* are parameter matrices, L is the number of learning blocks, H_0 = q, W_{q'} = \prod_{l=1}^{L} W_{q'}^{(l)}, and W_{F^{(l)}} = \prod_{m=l+1}^{L} W_{q'}^{(m)}. Notice that these equations can be generalized by Equation 7. However, an explicit attention mechanism allows the use of lower-level visual features than fully-connected layers and, more importantly, spatially selective learning. Recent state-of-the-art methods use a variant of an explicit attention mechanism in their models (Lu et al., 2016; Noh & Han, 2016; Fukui et al., 2016). Note that the shortcut connections of MRN are not used in the proposed Multimodal Low-rank Bilinear (MLB) model, since they do not provide any performance gain when multiple layers are not stacked in MLB. We leave the study of residual learning for MLB, which may leverage the strengths of bilinear models as suggested in Wu et al. (2016a), for future work.

D.2 HIGHER-ORDER BOLTZMANN MACHINES A similar model can be found in the study of Higher-Order Boltzmann Machines (Memisevic & Hinton, 2007; 2010). They suggest a factoring method for the three-way energy function to capture correlations among input, output, and hidden representations.

−E(y, h; x) = \sum_f (\sum_i x_i w_{if}^x)(\sum_j y_j w_{jf}^y)(\sum_k h_k w_{kf}^h) + \sum_k w_k^h h_k + \sum_j w_j^y y_j = (x^T W^x ◦ y^T W^y ◦ h^T W^h) 1 + h^T w^h + y^T w^y  (21)

Setting aside the bias terms, the I × J × K parameter tensor of unfactored Higher-Order Boltzmann Machines is replaced with three matrices, W^x ∈ R^{I×F}, W^y ∈ R^{J×F}, and W^h ∈ R^{K×F}.
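The factorized three-way interaction in Equation 21 is easy to state in code; the following numpy sketch (ours; bias terms omitted) also checks numerically that the factorization contracts to the same value as the full I × J × K tensor:

```python
import numpy as np

def three_way_energy(x, y, h, Wx, Wy, Wh):
    """-E(y, h; x) without bias terms (Eq. 21):
    sum_f (x^T Wx)_f (y^T Wy)_f (h^T Wh)_f."""
    return np.sum((x @ Wx) * (y @ Wy) * (h @ Wh))

rng = np.random.default_rng(0)
I, J, K, F = 5, 4, 3, 8
x, y, h = rng.standard_normal(I), rng.standard_normal(J), rng.standard_normal(K)
Wx, Wy, Wh = (rng.standard_normal(s) for s in [(I, F), (J, F), (K, F)])

# Equivalent to contracting the rank-F tensor W_ijk = sum_f Wx[i,f] Wy[j,f] Wh[k,f].
W = np.einsum('if,jf,kf->ijk', Wx, Wy, Wh)
assert np.isclose(three_way_energy(x, y, h, Wx, Wy, Wh),
                  np.einsum('i,j,k,ijk->', x, y, h, W))
```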
D.3 MULTIPLICATIVE INTEGRATION WITH RECURRENT NEURAL NETWORKS Most recurrent neural networks, including vanilla RNNs, Long Short-Term Memory networks (Hochreiter & Schmidhuber, 1997), and Gated Recurrent Units (Cho et al., 2014), share a common expression as follows:

φ(Wx + Uh + b)  (22)

where φ is a non-linear function, W ∈ R^{d×n}, x ∈ R^n, U ∈ R^{d×m}, h ∈ R^m, and b ∈ R^d is a bias vector. Usually, x is an input state vector and h is a hidden state vector in recurrent neural networks. Wu et al. (2016c) propose a new design that replaces the additive expression with a multiplicative expression using the Hadamard product:

φ(Wx ◦ Uh + b).  (23)

Moreover, a general formulation of this multiplicative integration can be described as

φ(α ◦ Wx ◦ Uh + Wx ◦ β_1 + Uh ◦ β_2 + b)  (24)

which is reminiscent of the full model in Section 3.1.
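As a reference, here is a small numpy sketch contrasting the additive update of Equation 22 with the multiplicative-integration updates of Equations 23-24 (our own illustrative stand-in, with randomly initialized matrices):

```python
import numpy as np

def rnn_step_additive(x, h, W, U, b):
    return np.tanh(W @ x + U @ h + b)                                # Eq. 22

def rnn_step_mi(x, h, W, U, b):
    return np.tanh((W @ x) * (U @ h) + b)                            # Eq. 23

def rnn_step_mi_general(x, h, W, U, b, alpha, beta1, beta2):
    wx, uh = W @ x, U @ h
    return np.tanh(alpha * wx * uh + wx * beta1 + uh * beta2 + b)    # Eq. 24

rng = np.random.default_rng(0)
n, m, d = 16, 32, 32
x, h = rng.standard_normal(n), rng.standard_normal(m)
W, U = rng.standard_normal((d, n)), rng.standard_normal((d, m))
b = np.zeros(d)
alpha = beta1 = beta2 = np.ones(d)
h_next = rnn_step_mi_general(x, h, W, U, b, alpha, beta1, beta2)
```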
1. What is the focus of the paper regarding bilinear pooling and its approximation?
2. What are the strengths of the proposed approach in terms of experimental evaluation and novelty?
3. What are the weaknesses of the paper, particularly in terms of comparison with other works and theoretical analysis?
4. How does the reviewer assess the significance of the contribution, particularly in the context of visual question answering?
5. What additional experiments would the reviewer suggest to further support the claims made in the paper?
Review
Review This work proposes to approximate bilinear pooling (the outer product) with a formulation that uses the Hadamard product (element-wise product). This formulation is evaluated on the visual question answering (VQA) task together with several other model variants.

Strengths:
1. The paper discusses how the Hadamard product can be used to approximate the full outer product.
2. The paper provides an extensive experimental evaluation of other model aspects for VQA.
3. The full model achieves a slight improvement over prior state-of-the-art on the challenging and large-scale VQA challenge.

Weaknesses:
1. Novelty: The paper presents only a new "interpretation" of the Hadamard product, which has previously been widely used for pooling, including for VQA.
2. Experimental evaluation:
2.1. A direct experimental comparison with MCB is missing. Although the evaluated model is similar to Fukui et al., several other changes have been made, including question encoding (GRU vs. LSTM) and normalization (tanh vs. L2 vs. none). The small difference in performance (0.44% in Table 1) could easily be attributed to these differences.
2.2. An experimental comparison to the full outer product (e.g. for a lower dimension) is missing. It remains unclear how good the proposed approximation of the full outer product is. While a comparison to MCB is presented, this seems insufficient as MCB is a very different model.
2.3. One of the most important hyperparameters for the Hadamard product seems to be the dimension of the lower-dimensional embedding d. What effect does changing this have?
2.4. Comparison with other pooling strategies, e.g. element-wise sum instead of element-wise product.
3. No theoretical analysis or properties of the approximation are presented.
4. The paper seems to be general at the beginning, but the claimed benefit of the Hadamard product is only shown experimentally on the VQA dataset.
5. Related work: The comparison to the related works in the appendix should at least be mentioned in the main paper, even if the details are in the supplemental.

Minor
- It is not clear why Lu et al., 2015 is cited rather than the published paper from Antol et al.
- Sect. 2, first sentence: "every pairs" -> "every pair"

Summary: While the paper provides a new best performance and an interesting interpretation of the Hadamard product, to be a strong paper it needs either a more theoretical analysis of the properties of this approximation or a corresponding experimental evaluation. It is a bit unfortunate that most of the experimental evaluation is not about the main claim of the paper (the Hadamard product) but about unrelated aspects which are important for achieving high performance in the VQA challenge. To be more convincing, I would like to see the following experiments:
- Comparison with the outer product in the identical model
- Comparison with MCB in the identical model
- Comparison with element-wise sum instead of element-wise product
- One of the most important hyperparameters for the Hadamard product seems to be the dimension of the lower-dimensional embedding d. What effect does changing this have?
ICLR
Title Hadamard Product for Low-rank Bilinear Pooling

Abstract Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using the Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with state-of-the-art results on the VQA dataset, having a better parsimonious property.

1 INTRODUCTION Bilinear models (Tenenbaum & Freeman, 2000) provide richer representations than linear models. To exploit this advantage, fully-connected layers in neural networks can be replaced with bilinear pooling. The outer product of two vectors (or the Kronecker product for matrices) is involved in bilinear pooling; as a result, all pairwise interactions among the given features are considered. Recently, a successful application of this technique has been fine-grained visual recognition (Lin et al., 2015). However, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain the model structure and computational resources. For example, the outer product of two feature vectors, both of 1K dimensionality, produces a million-dimensional feature vector. Therefore, for classification problems, the choice of the number of target classes is severely constrained, because the number of parameters for a standard linear classifier is determined by the product of the size of the high-dimensional feature vector and the number of target classes. Compact bilinear pooling (Gao et al., 2016) reduces the quadratic expansion of dimensionality by two orders of magnitude, retaining the performance of full bilinear pooling. This approximation uses a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013), which utilizes the useful property that Ψ(x ⊗ y, h, s) = Ψ(x, h, s) ∗ Ψ(y, h, s), meaning that the projection of the outer product of two vectors is the convolution of the two projected vectors. Here, Ψ is the projection function, and h and s are parameters randomly sampled by the algorithm. Nevertheless, compact bilinear pooling has two shortcomings. The first comes from the sampling approach. Compact bilinear pooling relies on the favorable property E[⟨Ψ(x, h, s), Ψ(y, h, s)⟩] = ⟨x, y⟩, which provides a basis for using projected features instead of the original features.
Yet, calculating the exact expectation is computationally intractable, so the random parameters h and s are fixed during training and evaluation. This practical choice leads to the second shortcoming: the projected dimension of compact bilinear pooling must be large enough to minimize the bias from the fixed parameters. Practical choices are 10K and 16K for 512- and 4096-dimensional inputs, respectively (Gao et al., 2016; Fukui et al., 2016). Although these compacted dimensions are reduced by two orders of magnitude compared with full bilinear pooling, such high-dimensional features can still be a bottleneck for computationally complex models.

We propose low-rank bilinear pooling using the Hadamard product (element-wise multiplication), which is commonly available in scientific computing frameworks as one of the standard tensor operations. The proposed method factors the three-dimensional weight tensor of bilinear pooling into three two-dimensional weight matrices, which enforces the rank of the weight tensor to be low. As a result, the two input feature vectors are linearly projected by two of the weight matrices, combined with the Hadamard product, and then followed by a linear projection using the third weight matrix. For example, the projected vector z is represented by W_z^T(W_x^T x ◦ W_y^T y), where ◦ denotes the Hadamard product. We also explore adding non-linearity to the low-rank bilinear pooling using non-linear activation functions, and shortcut connections inspired by deep residual learning (He et al., 2016). We then show that it becomes the simple baseline model of Antol et al. (2015) or one learning block of Multimodal Residual Networks (Kim et al., 2016b) as a low-rank bilinear model; this interpretation has not been made before.

Our contributions are as follows: First, we propose low-rank bilinear pooling to approximate full bilinear pooling as a substitute for compact bilinear pooling. Second, Multimodal Low-rank Bilinear Attention Networks (MLB), which have an efficient attention mechanism using low-rank bilinear pooling, are proposed for visual question-answering tasks. MLB achieves a new state-of-the-art performance and has a better parsimonious property. Finally, ablation studies are conducted to explore alternative choices, e.g. network depth, non-linear functions, and shortcut connections.

2 LOW-RANK BILINEAR MODEL Bilinear models use a quadratic expansion of a linear transformation, considering every pair of features:

f_i = \sum_{j=1}^{N} \sum_{k=1}^{M} w_{ijk} x_j y_k + b_i = x^T W_i y + b_i  (1)

where x and y are input vectors, W_i ∈ R^{N×M} is a weight matrix for the output f_i, and b_i is a bias for the output f_i. Notice that the number of parameters is L × (N × M + 1), including a bias vector b, where L is the number of output features.

Pirsiavash et al. (2009) suggest a low-rank bilinear method that reduces the rank of the weight matrix W_i to have fewer parameters, for regularization. They rewrite the weight matrix as W_i = U_i V_i^T, where U_i ∈ R^{N×d} and V_i ∈ R^{M×d}, which restricts the rank of W_i to be at most d ≤ min(N, M). Based on this idea, f_i can be rewritten as follows:

f_i = x^T W_i y + b_i = x^T U_i V_i^T y + b_i = 1^T (U_i^T x ◦ V_i^T y) + b_i  (2)

where 1 ∈ R^d denotes a column vector of ones, and ◦ denotes the Hadamard product. Still, we need two third-order tensors, U and V, for a feature vector f whose elements are {f_i}. To reduce the order of the weight tensors by one, we replace 1 with P ∈ R^{d×c} and b_i with b ∈ R^c, and redefine U ∈ R^{N×d} and V ∈ R^{M×d}, to get a projected feature vector f ∈ R^c.
Then, we get:

f = P^T (U^T x ◦ V^T y) + b  (3)

where d and c are hyperparameters deciding the dimension of the joint embeddings and the output dimension of the low-rank bilinear model, respectively.

3 LOW-RANK BILINEAR POOLING The low-rank bilinear model in Equation 3 can be implemented using two linear mappings without biases for embedding the two input vectors, a Hadamard product to learn joint representations in a multiplicative way, and a linear mapping with a bias to project the joint representations into an output vector of a given dimension. We use this structure as a pooling method for deep neural networks. We now discuss possible variations of low-rank bilinear pooling based on this model, inspired by studies of neural networks.

3.1 FULL MODEL In Equation 3, the linear projections U and V can have their own bias vectors. As a result, linear models for each input vector, x and y, are integrated in an additive form, called the full model for linear regression in statistics:

f = P^T((U^T x + b_x) ◦ (V^T y + b_y)) + b = P^T(U^T x ◦ V^T y + U'^T x + V'^T y) + b'.  (4)

Here, U'^T = diag(b_y) · U^T, V'^T = diag(b_x) · V^T, and b' = b + P^T(b_x ◦ b_y).

3.2 NONLINEAR ACTIVATION Applying non-linear activation functions may help to increase the representational capacity of the model. The first candidate is to apply non-linear activation functions right after the linear mappings of the input vectors:

f = P^T(σ(U^T x) ◦ σ(V^T y)) + b  (5)

where σ denotes an arbitrary non-linear activation function that maps real values into a finite interval, e.g. sigmoid or tanh. If the two inputs come from different modalities, their statistics may be quite different from each other, which may result in interference, since the gradient with respect to each input directly depends on the other input in the Hadamard product. Additionally applying an activation function after the Hadamard product is not appropriate, since the activation functions would then appear twice in the gradient computation. However, applying the activation function only after the Hadamard product is an alternative choice (we explore this option in Section 5):

f = P^T σ(U^T x ◦ V^T y) + b.  (6)

Note that the use of an activation function in low-rank bilinear pooling can be found in an implementation of the simple baseline for the VQA dataset (Antol et al., 2015), without an interpretation as low-rank bilinear pooling. Notably, Wu et al. (2016c) studied the learning behavior of multiplicative integration in RNNs with discussions and empirical evidence.

3.3 SHORTCUT CONNECTION When we apply the two previous techniques, the full model and non-linear activation, the linear models of the two inputs are nested inside the non-linear activation functions. To avoid this unfortunate situation, we add shortcut connections as explored in residual learning (He et al., 2016):

f = P^T(σ(U^T x) ◦ σ(V^T y)) + h_x(x) + h_y(y) + b  (7)

where h_x and h_y are shortcut mappings. For linear projection, the shortcut mappings are linear mappings. Notice that this formulation is a generalized form of the one-block layered MRN (Kim et al., 2016b). However, shortcut connections are not used in our proposed model, as explained in Section 6.

4 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS In this section, we apply low-rank bilinear pooling to propose an efficient attention mechanism for visual question-answering tasks, based on the interpretation of the previous section. We assume that the inputs are a question embedding vector q and a set of visual feature vectors F over an S × S lattice space.
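A minimal PyTorch sketch of low-rank bilinear pooling with the non-linearity of Equation 5 (our illustration; the layer names and sizes are hypothetical, and this is not the released implementation):

```python
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    """f = P^T (tanh(U^T x) ∘ tanh(V^T y)) + b   (Eqs. 3 and 5)."""
    def __init__(self, n, m, d, c):
        super().__init__()
        self.U = nn.Linear(n, d, bias=False)
        self.V = nn.Linear(m, d, bias=False)
        self.P = nn.Linear(d, c)          # the bias of P plays the role of b

    def forward(self, x, y):
        return self.P(torch.tanh(self.U(x)) * torch.tanh(self.V(y)))

pool = LowRankBilinearPooling(n=2400, m=2048, d=1200, c=1000)
x = torch.randn(8, 2400)   # e.g. question embeddings
y = torch.randn(8, 2048)   # e.g. attended visual features
f = pool(x, y)             # shape (8, 1000)
```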
4.1 LOW-RANK BILINEAR POOLING IN ATTENTION MECHANISM The attention mechanism uses an attention probability distribution α over the S × S lattice space. Here, using low-rank bilinear pooling, α is defined as

α = softmax(P_α^T(σ(U_q^T q · 1^T) ◦ σ(V_F^T F^T)))  (8)

where α ∈ R^{G×S^2}, P_α ∈ R^{d×G}, σ is the hyperbolic tangent function, U_q ∈ R^{N×d}, q ∈ R^N, 1 ∈ R^{S^2}, V_F ∈ R^{M×d}, and F ∈ R^{S^2×M}. If G > 1, multiple glimpses are explicitly expressed as in Fukui et al. (2016), conceptually similar to Jaderberg et al. (2015), and the softmax function applies to each row vector of α. The bias terms are omitted for simplicity.

4.2 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS The attended visual feature v̂ is a linear combination of F_i with coefficients α_{g,i}. Each attention probability distribution α_g is for a glimpse g. For G > 1, v̂ is the concatenation of the resulting vectors v̂_g:

v̂ = \big\Vert_{g=1}^{G} \sum_{s=1}^{S^2} α_{g,s} F_s  (9)

where \Vert denotes the concatenation of vectors. The posterior probability distribution is the output of a softmax function whose input is the result of another low-rank bilinear pooling of q and v̂:

p(a|q, F; Θ) = softmax(P_o^T(σ(W_q^T q) ◦ σ(V_{v̂}^T v̂)))  (10)
â = arg max_{a∈Ω} p(a|q, F; Θ)  (11)

where â denotes the predicted answer, Ω is the set of candidate answers, and Θ is an aggregation of the entire model parameters.

5 EXPERIMENTS In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls all other factors except one to assess its effect on accuracy. Based on MRN (Kim et al., 2016b), we start our assessments with an initial option of G = 1 and the shortcut connections of MRN, called Multimodal Attention Residual Networks (MARN). Notice that we use one embedding for each visual feature for better performance, based on our preliminary experiments (not shown). We attribute this choice to the attention mechanism for visual features, which provides more capacity to learn visual features. We use the same hyperparameters as MRN (Kim et al., 2016b) unless explicitly mentioned otherwise.

The VQA dataset (Antol et al., 2015) is used as the primary dataset, and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used. Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split. For comprehensive reviews of VQA tasks, please refer to Wu et al. (2016a) and Kafle & Kanan (2016a). Details about preprocessing, question and vision embedding, and the hyperparameters used in our experiments are described in Appendix A. The source code for the experiments is available in a Github repository.1

Number of Learning Blocks Kim et al. (2016b) argue that the three-block layered MRN shows the best performance among one- to four-block layered models, taking advantage of residual learning. However, we speculate that the introduction of an attention mechanism makes deep networks hard to optimize. Therefore, we explore the number of learning blocks of MARN, which has an attention mechanism using low-rank bilinear pooling.

Number of Glimpses Fukui et al. (2016) show that an attention mechanism with two glimpses was the optimal choice. In a similar way, we assess one-, two-, and four-glimpse models.

1 https://github.com/jnhwkim/MulLowBiVQA

Non-Linearity We assess three options for applying non-linearity in low-rank bilinear pooling: vanilla (no non-linearity), before the Hadamard product as in Equation 5, and after the Hadamard product as in Equation 6.
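The attention of Equations 8-9 can be sketched in PyTorch as follows (ours; a simplified, bias-free version with batch dimensions added for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankAttention(nn.Module):
    """alpha = softmax(P_a^T (tanh(U_q^T q 1^T) ∘ tanh(V_F^T F^T)))   (Eq. 8),
    followed by G glimpse-wise linear combinations of F              (Eq. 9)."""
    def __init__(self, n, m, d, glimpses):
        super().__init__()
        self.Uq = nn.Linear(n, d, bias=False)
        self.Vf = nn.Linear(m, d, bias=False)
        self.Pa = nn.Linear(d, glimpses, bias=False)

    def forward(self, q, feats):                 # q: (B, n), feats: (B, S*S, m)
        joint = torch.tanh(self.Uq(q)).unsqueeze(1) * torch.tanh(self.Vf(feats))
        alpha = F.softmax(self.Pa(joint), dim=1)  # softmax over spatial positions
        glimpsed = torch.einsum('bsg,bsm->bgm', alpha, feats)
        return glimpsed.flatten(1)                # (B, G*m): concatenation of glimpses

att = LowRankAttention(n=2400, m=2048, d=1200, glimpses=2)
v_hat = att(torch.randn(8, 2400), torch.randn(8, 196, 2048))  # shape (8, 4096)
```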
1. What are the strengths and contributions of the paper's approach to the VQA task?
2. What are the limitations and areas for improvement in the proposed method?
3. Can the authors provide more explanation or justification for their choices of hyperparameters, such as the embedding dimension and output dimension?
4. How does the proposed model differ from the compact bilinear model it is compared to, especially regarding the use of word order and visual attention?
5. Are there any potential avenues for future research or improvements to the current approach?
Review
Review Results on the VQA task are good for this simple model; the ablation study of Table 1 gives some insights as to what is important. Missing are some explanations about the language embedding and the importance of the choice of embedding dimension and final output dimension, which is equivalent to choosing the projected dimension in the compact bilinear model. Since the main contribution of the paper seems to be slightly better performance with a fairly large reduction in parameters vs. compact bilinear pooling, something should be said about the choice of those hyperparameters. If you increase the embedding and output dimensions to equalize the parameter count with the compact bilinear model, are further gains possible? How is the question encoded? Is word order preserved in this encoding? The compact bilinear model compared to in Table 1 mentions GloVe; is the proposed model using this as well? The meaning of visual attention in this model, along with the number of glimpses, should be tied to the sentence embedding, so that we look at particular spatial components when that part of the sentence is encoded, and then stack according to your Equation 9?
ICLR
Title NeuralStagger: accelerating physics constrained neural PDE solver with spatial-temporal decomposition Abstract Neural networks have shown great potential in accelerating the solution of partial differential equations (PDEs). Recently, there has been a growing interest in introducing physics constraints into training neural PDE solvers to reduce the use of costly data and improve the generalization ability. However, these physics constraints, based on certain finite dimensional approximation over the function space, must resolve the smallest scaled physics to ensure the accuracy and stability of the simulation, resulting in heavy computational costs from large input, output, and neural networks. This paper proposes a general acceleration methodology called NeuralStagger by spatially and temporally decomposing the original learning tasks into several coarser-resolution subtasks. We define a coarse-resolution neural solver for each subtask, which requires fewer computational resources, and jointly train them with the vanilla physics constrained loss by simply arranging their outputs to reconstruct the original solution. Due to the perfect parallelism between them, the solution is achieved as fast as a coarse-resolution neural solver. In addition, the trained solvers bring the flexibility for users to simulate with multiple levels of resolution. We demonstrate the successful application of NeuralStagger on various fluid dynamics simulations, which leads to an additional 10 to 100 times speed-up. Moreover, the experiment also shows that the learned model could be well used for optimal control. 1 INTRODUCTION Partial differential equations (PDEs) are the critical parts of scientific research, describing vast categories of physical and chemical phenomena, e.g. sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, and so on. In the era of artificial intelligence, neural PDE solvers, in some works called neural operators, are widely studied as a promising technology to solve PDEs (Guo et al., 2016; Zhu & Zabaras, 2018; Hsieh et al., 2019; Bhatnagar et al., 2019; Bar-Sinai et al., 2019; Berner et al., 2020; Li et al., 2020b;a; Um et al., 2020; Pfaff et al., 2020; Lu et al., 2021b; Wang et al., 2021; Kochkov et al., 2021). Once the neural solver is trained, it can solve unseen PDEs with only an inference step, multiple magnitudes faster than that with traditional numerical solvers. Recently, several works have introduced physics constraints in training the neural PDE solvers in order to reduce the use of costly data and improve the generalization ability. They define the physics constrained loss with certain finite dimensional approximations to transform the PDEs into algebraic equations, which are further used to define the loss function (Zhu et al., 2019; Geneva & Zabaras, 2020; Wandel et al., 2020; Shi et al., 2022). However, to ensure stability and accuracy, they must define the loss in a relatively high resolution to resolve the smallest-scale physics in the PDE, resulting in huge input and output as well as increased neural network size. The solution by the neural network inference might still be slow, but it seems impossible to get further accelerations as the bottleneck comes from the input and output complexity. In this paper, we propose a simple methodology called NeuralStagger to jump out of the dilemma. The basic idea is to evenly decompose the original physical fields into several coarser-resolution fields. 
Then we jointly train a lightweight neural network to predict the solution in each coarseresolution field respectively, which can be naturally a coarse-resolution neural solver to the original PDE. We design the decomposition rules so that the outputs of these lightweight networks can re- construct the solutions in the original field with simple arrangements. For ease of reading, here and also in most parts of the paper, we illustrate the decomposition methodology in the 2-dimensional example with regular mesh and finite difference approximation. Figure 1 (top) shows the physical field in a 4 × 4 mesh is decomposed into 4 coarser-resolution fields, each of which is handled by a small neural network. We could also do similar things along the temporal dimension, as is shown in Figure 1 (bottom). The group of coarse-resolution solvers as well as the decomposition and reconstruction operations can be seen as an end-to-end neural PDE solver, which can be trained with the physics constrained loss that resolves small-scale physics in a sufficiently high resolution. Because the neural networks can run in parallel, the original simulation is achieved as fast as a coarse-resolution neural solver. In addition, the trained neural networks can predict the PDE’s solution in various levels of resolution, ranging from the resolution of the individual coarse-resolution solver to the resolution of the physics constrained loss by the combination of all these solvers. We believe that such flexibility is vital in balancing the computational resources and the resolution. We demonstrate the effectiveness of the NeuralStagger in the Navier-Stokes equation with three parametric settings, e.g., periodic boundary conditions with varied initial conditions, lid-driven cavity boundary conditions with varied initial conditions, and the flow around the obstacle with varied obstacles and initial conditions. We find that with NeuralStagger, the learned networks can conduct accurate and stable simulation with 10∼100 times speed-up over SOTA neural PDE solvers. In addition, we demonstrate that they can accurately tackle the optimal control task with autodifferentiation. Our contributions can be summarized in three parts: • We propose a general methodology called NeuralStagger to accelerate neural PDE solving by spatially and temporally decomposing the learning task and running a group of coarseresolution solvers in parallelism. • The learned network group can provide solutions in multiple resolutions from the coarsest one by a single network to the original resolution, which provides the flexibility to balance the computational resources and the resolution. • Empirically, we demonstrate that the methodology leads to 10 to 100 times speed-up over SOTA neural PDE solvers as well as the efficient solution on optimal control. In the following sections, we first briefly summarize the related works in Section 2 and then introduce the preliminaries and the proposed NeuralStagger in Section 3. To showcase the efficiency and accuracy of the proposed method, we present the settings of the experiment and results in Section 4. Finally, we conclude and discuss the future work in Section 5. 2 RELATED WORK In general, two mainstream approaches have been widely used for solving PDEs. The first is to approximate the PDE’s solution function with neural networks (Raissi et al., 2019; 2020; Jin et al., 2021). They have proved to be successful in tackling high-dimensional problems and inverse problems. 
The second is to learn a PDE solver to solve parametric PDEs. The neural PDE solver can learn the solutions of a class of PDEs, and thus can generalize to PDEs with different parameters. Our work is mainly about the accelerating the second type. Many impressive works have been done to improve the neural solver for parametric PDEs in terms of neural network design, e.g., convolutional neural network (Guo et al., 2016; Tompson et al., 2017; Bhatnagar et al., 2019), graph neural networks (Pfaff et al., 2020), the multipole graph kernel (Li et al., 2020b), Fourier neural operators (Li et al., 2020a; Guibas et al., 2021), the message passing neural network (Brandstetter et al., 2022b), deepOnet (Lu et al., 2021a), Clifford neural networks (Brandstetter et al., 2022a) and so on. After being trained with pre-generated simulated data and labels, they can solve the PDE several magnitudes faster than conventional numerical solvers with competitive accuracy. Recently there are raising concerns about the cost of collecting training data and the generalization ability, so several works have introduced the physics constrained loss for training. For example, (Wang et al., 2021) combined the DeepOnet with a physics-informed way to improve the sample efficiency. Zhu et al. (2019) proposed physics constrained loss for high-dimensional surrogate modeling and (Geneva & Zabaras, 2020) introduced the use of a physics constrained framework to achieve the data-free training in the case of Burgers equations. Wandel et al. (2020; 2021) proposed the physics constrained loss based on the certain approximation of the Navier-Stokes equation to solve fluidlike flow problems. Shi et al. (2022) proposed a general physics constrained loss called mean square residual (MSR) loss as well as a neural network called LordNet for better performance. However, the physics constrained loss by certain approximations require the approximation to be sufficiently close to the continuous version, resulting in a relatively high-resolution discretization. Thus in complex and large-scale problems, the neural solver must be large enough for expressiveness and its inference would still be slow. Although some works (Wang et al., 2021) directly calculate the derivatives via back-propagation through the neural network, they are known to have similar training problems as PINN, e.g., converging to trivial solutions. Interestingly in the case of regular mesh, the proposed spatial decomposition is the same in the implementation as ‘pixel shuffle’ from computer vision. There are a huge number of works in this direction, but the most related one might be (Ren et al., 2022) which leverages pixel shuffle and physics constrained loss in the super-resolution task. However, we are fundamentally different in target and solution. For example, we train multiple solvers to work in full parallelism and obtain the solution in multiple levels of resolution without training them again. We also find similar treatment on meshes in classical numerical methods, e.g., staggered-mesh and leap-frog integration. However, they are also fundamentally different in target and implementation. The numerical methods often place meshes of multiple fields with offsets to get more accurate approximation while NeuralStagger splits the mesh of every single field into multiple sub-meshes for defining the independent subtasks. 
In addition, they are orthogonal to NeuralStagger, i.e., one can leverage both the staggered mesh to define the physics constrained loss and NeuralStagger to train multiple coarse-resolution solvers at the same time, as is done in our experiments.

3 METHODOLOGY

3.1 PRELIMINARIES

Consider a connected domain Ω ⊆ R^n with boundary ∂Ω, and let (A, U, V) be separable Banach spaces. Then parametric PDEs can be defined in the form

\[ \mathcal{S}(u, a)(x) = 0, \qquad x \in \Omega, \tag{1} \]

where S : U × A → V is a linear or nonlinear differential operator, a ∈ A denotes the parameters under a certain distribution µ, such as coefficient functions or boundary/initial conditions, and u ∈ U is the corresponding unknown solution function. Further, we can define the solution operator of the parametric PDE, G : A → U, which maps between two infinite-dimensional function spaces. A main branch of works on neural PDE solvers approximates the solution operator by discretizing the functions into finite-dimensional spaces, denoted by Â and Û, and learning the mapping f_θ : Â → Û. Correspondingly, we have a discretized version of the PDE's operator S obtained by certain finite-dimensional approximations such as the finite difference method (FDM) or the finite element method (FEM), which we denote by Ŝ. We denote the vector of function values on a mesh with the hat symbol, e.g., â is the vector of the PDE's parameter a ∼ µ. The physics constrained loss is then defined by forcing the predicted solution û ∈ Û to satisfy Ŝ given â ∈ Â. For example, LordNet (Shi et al., 2022) proposed the general form with the mean squared error as follows,

\[ \mathcal{L}(\theta) = \mathbb{E}_{a \sim \mu}\, \big\| \hat{\mathcal{S}}\big(f_\theta(\hat{a}), \hat{a}\big) \big\|^2. \tag{2} \]

In this paper, we mainly focus on time-dependent problems of the form

\[ \mathcal{S}(u, a)(t, x) = 0, \qquad (t, x) \in [0, T] \times \Omega. \tag{3} \]

The temporal dimension is discretized with the timestep ∆t, and the neural solver solves the PDE in an auto-regressive way,

\[ \hat{u}_{t+\Delta t} = f_\theta(\hat{u}_t, \hat{a}), \tag{4} \]

where û_t is the corresponding discretized vector of the function u at time t. Figure 2 shows an example with a 4 × 4 rectangular mesh. Notice that, similar to traditional numerical methods, the resolution of the finite-dimensional approximation in the physics constrained loss, either in the spatial dimension or in the temporal dimension, must be sufficiently high; otherwise, the approximation error will be too large to guide the neural PDE solver. This leads to huge inputs and outputs as well as large neural networks to ensure expressiveness, whose inference is also slow.

3.2 NEURALSTAGGER

We propose a general methodology called NeuralStagger to gain further acceleration by exploiting the potential parallelism in the neural PDE solver. NeuralStagger decomposes the original learning task that maps û_t to û_{t+∆t} into several parallelizable subtasks in both the spatial and temporal dimensions. The meshes of the subtasks spread evenly over the original field and stagger with each other. We can then handle each subtask with a computationally cheap neural network. The decomposition strategy is introduced as follows.

Spatial decomposition. The upper part of Figure 1 shows the 2-dimensional example with a regular mesh. We first split the grid into patches of size s_H × s_W and construct a sub-grid by selecting only one point in each patch, resulting in s_H × s_W sub-grids evenly spread over the domain. We denote the functions on each sub-grid by û^{i,j}_t and â^{i,j}_t, where i and j represent the relative position of the sub-grid in the horizontal and vertical directions.
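To make this sub-grid construction concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of the strided decomposition and its inverse in PyTorch; the tensor layout and helper names are assumptions:

```python
import torch

def spatial_decompose(u, s_H, s_W):
    # u: (batch, channels, H, W) field on the fine mesh.
    # Returns s_H * s_W staggered coarse fields, each (batch, channels, H // s_H, W // s_W).
    return [u[..., i::s_H, j::s_W] for i in range(s_H) for j in range(s_W)]

def spatial_reconstruct(subfields, s_H, s_W):
    # Inverse of spatial_decompose: interleave the coarse fields back onto the fine mesh.
    b, c, h, w = subfields[0].shape
    out = subfields[0].new_zeros(b, c, h * s_H, w * s_W)
    for k, sub in enumerate(subfields):
        i, j = divmod(k, s_W)
        out[..., i::s_H, j::s_W] = sub
    return out

# Round-trip check on a 64 x 64 field with s_H = s_W = 2.
u = torch.randn(2, 1, 64, 64)
assert torch.equal(spatial_reconstruct(spatial_decompose(u, 2, 2), 2, 2), u)
```

On a regular mesh this strided slicing is equivalent to the pixel-unshuffle/pixel-shuffle pair mentioned in Section 2.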
Then we use s_H × s_W neural networks to learn to predict the solution at t + ∆t as follows,

\[ \hat{u}^{i,j}_{t+\Delta t} = f_{\theta_{i,j}}\big(\hat{u}^{i,j}_t, \hat{a}^{i,j}\big), \tag{5} \]

where f_{θ_{i,j}} is the neural network for the sub-grid at position (i, j). The outputs û^{i,j}_{t+∆t} compose the solution on the original grid. The neural networks can then be jointly trained with the physics constrained loss defined on the original grid. Notice that the neural networks are independent of each other and can be fully parallelized. As the input and output shrink by a factor of s_H × s_W, each neural network can be much smaller and faster than the single network that would otherwise be used as the neural solver. The decomposition rule extends directly to higher-dimensional cases. In addition, the learning tasks on the sub-grids are quite close to each other, except for differences at the boundary of the domain, so we share the parameters of the neural networks f_{θ_{i,j}} to reduce redundancy and accelerate training. Meanwhile, because there are often only tiny differences between the inputs of the subtasks, we encourage the neural network to distinguish them by adding the positional information of each grid point as additional input channels.

Temporal decomposition. We can treat the temporal dimension as a 1-dimensional grid with a fixed step ∆t. Thus we can also decompose this grid into s_T sub-grids by selecting one point out of every s_T points, so that instead of predicting û_{t+∆t}, the neural network predicts û_{t+s_T ∆t},

\[ \hat{u}_{t+s_T \Delta t} = f_\theta(\hat{u}_t, \hat{a}). \tag{6} \]

Given the solution sequence from t to t + (s_T − 1)∆t, denoted by û_{t, s_T} for simplicity, we can obtain the next sequence of solutions û_{t+s_T ∆t, s_T}. The physics constrained loss is then defined on the sequence with timestep ∆t, as shown in the lower part of Figure 1. Once the neural network is trained, we can generate the sequence û_{t+s_T ∆t, s_T} by running the neural network inference of Equation 6 with s_T threads in parallel on the inputs û_{t, s_T}. This non-auto-regressive process generates the solution at s_T time steps within one inference step, which can be much faster than the original version (Figure 2) with s_T inference steps. Note that although we only need the initial condition for the coarsest-resolution test, we must prepare the first s_T states with numerical solvers for training and for the high-resolution test. However, this drawback is negligible for long-time simulations.

The spatial and temporal decompositions are orthogonal and can be used at the same time. We denote the joint decomposition operator by D_s, the transformation operator of the neural networks by F_Θ and the reconstruction operator by E_s, where s represents all decomposition factors, including s_H, s_W and s_T, and Θ represents all parameters of the neural network group. The physics constrained loss with the spatial-temporal decomposition can be written as

\[ \mathcal{L}(\Theta) = \mathbb{E}_{\hat{u}_{t, s_T}} \big\| \hat{\mathcal{S}}\big( E_s\big( F_\Theta\big( D_s(\hat{u}_{t, s_T}, \hat{a}) \big) \big), \hat{u}_{t, s_T}, \hat{a} \big) \big\|^2. \tag{7} \]

In addition, as the sub-grids spread evenly over the domain of the PDE, each of them can be seen as a down-sampled version of the original problem, where a local patch is reduced to the point at a fixed relative position in the patch. Therefore, the learned neural networks are naturally coarse-resolution solvers for the PDE. Suppose (H, W, T) is the tuple of the original height, width, and time span on which the physics constrained loss is defined. Then the coarse-resolution solvers operate at the resolution (H/s_H, W/s_W, T/s_T).
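As an illustration only (a hypothetical sketch under the shape conventions of the decomposition helpers above, not the authors' implementation), one training step with the loss in Equation 7 could be organized as follows; `residual` stands for the user-supplied discretized operator Ŝ, and the handling of PDE parameters â and positional channels is omitted for brevity:

```python
import torch

def neuralstagger_loss(net, u_seq, s_H, s_W, s_T, residual):
    # u_seq: (s_T, B, C, H, W) holding the fine-mesh states u_t, ..., u_{t+(s_T-1)dt}.
    # net:   the shared coarse-resolution solver f_theta (PDE parameters a_hat and the
    #        positional channels mentioned in the text would be decomposed and
    #        concatenated to the input in the same way; omitted here).
    # residual: callable implementing S_hat on the fine mesh for the predicted sequence.
    K = s_H * s_W
    subgrids = [sub for tau in range(s_T)
                for sub in spatial_decompose(u_seq[tau], s_H, s_W)]
    # The s_T * K subtasks are independent; batching them together here stands in for
    # running them on separate devices in full parallelism.
    preds = net(torch.cat(subgrids, dim=0)).chunk(s_T * K, dim=0)
    # Reconstruct the fine-mesh predictions u_{t+s_T dt}, ..., u_{t+(2 s_T - 1) dt}.
    u_next = torch.stack([
        spatial_reconstruct(list(preds[k * K:(k + 1) * K]), s_H, s_W)
        for k in range(s_T)
    ])
    # Mean squared residual of the discretized PDE over the whole sequence (Eq. 7).
    return (residual(u_next, u_seq) ** 2).mean()
```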
Meanwhile, we can infer multiple levels of resolution, ranging from that of the coarse-resolution solvers to the original one, all of which can reach the same speed through parallelism.

3.3 CHOICE OF THE DECOMPOSITION FACTORS

Obviously, the acceleration brought by NeuralStagger grows as we use larger s_H, s_W and s_T. However, these decomposition factors cannot be arbitrarily large. We identify two potential constraints, i.e., the increased complexity of the learning task and the information loss in the input. We use the following 2-dimensional diffusion equation with periodic boundary conditions as an example to explain the two constraints,

\[ \frac{\partial u(x, y, t)}{\partial t} = \Delta u(x, y, t), \qquad x, y, t \in [0, 1], \tag{8} \]
\[ u(x, y, 0) = f(x, y), \qquad x, y \in [0, 1], \tag{9} \]

where u is the density function of the diffusing material, ∆ is the Laplacian operator and f is the initial condition. We use a regular mesh with d points in total and apply the central difference scheme with spatial step ∆x and temporal step ∆t. The PDE is then transformed into a matrix equation on the discretized solution at a certain time t, denoted by û_t ∈ R^d.

Increased complexity of the learning task. For the temporal dimension, we find that a larger decomposition factor can make the mapping from the input to the prediction more complex. For the linear diffusion equation, we can explicitly calculate the transfer matrix from û_{i∆t} to û_{(i+1)∆t} based on the matrix equation. Suppose the transfer matrix is T_i ∈ R^{d×d}. By iteratively applying the transfer matrices, we obtain the transformation from the initial condition û_0 to the solution at any time step k as follows,

\[ \hat{u}_{k\Delta t} = \hat{u}_0 \prod_{i=0}^{k-1} T_i. \tag{10} \]

For notational simplicity, we denote the resulting transfer matrix from û_0 to û_{k∆t} by T_k. With a suitable ordering of the mesh points, T_k is a band matrix in which the non-zero values are concentrated around the diagonal. The bandwidth indicates the sparsity of the matrix as well as how locally the points in the mesh entangle with each other. We observe that the bandwidth grows linearly with k. For example, Figure 3 shows the case of d = 64^2. When k ≥ 60, the matrix is dense and every element of û_{k∆t} is a weighted summation of almost all the elements of û_t. This indicates that increasing k may make the entanglements between the grid points more complex, leading to a harder learning task for the neural network.

Information loss. With spatial decomposition, each sub-grid only retains a small part of the original grid. Obviously, this may cause information loss if the dropped points are important for the prediction in the subtasks. Here we theoretically characterize the information loss caused by spatial decomposition in the linear model setting, i.e., f(û_t) = û_t W^*. Consider the diffusion equation and the corresponding matrix equation. With some abuse of notation, the superscript i denotes the index of a training sample, e.g., û^i_t, and the bold symbol without the superscript i denotes the matrix composed of all samples, e.g., û_t. With N training samples, the physics constrained loss aims to learn the parameters W^* of the linear model that satisfy

\[ W^* = \arg\min_{W} \frac{1}{N} \sum_{i=1}^{N} \big\| \hat{u}^i_t W - y^i \big\|^2, \tag{11} \]

where y^i denotes the remaining parts of the matrix equation. By applying spatial decomposition, the input and output are equally partitioned into K = s_H s_W sub-grids {û^1_t, · · · , û^K_t} and {û^1_{t+1}, · · · , û^K_{t+1}}.
Then, according to the physics constrained loss, the optimization goal becomes

\[ W^*_1, \cdots, W^*_K = \arg\min_{W_1, \cdots, W_K} \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \big\| \hat{u}^{i,k}_t W_k - y^{i,k} \big\|^2, \tag{12} \]

where W_k ∈ R^{m×m}, m = d/K, for k = 1, · · · , K. The next proposition gives a sufficient condition under which Eq. (11) and Eq. (12) make equal predictions.

Proposition 1. If rank(û_t) = rank(û^k_t), the models û_t W^* and û^k_t W^*_k will make the same prediction on y^k.

We put the proof in the appendix. In many physical scenarios, the local patches of size s_H s_W do not distribute arbitrarily in the ambient space R^{s_H s_W}, but rather live on some low-dimensional manifold. Hence, there is much information redundancy in û_t, and with careful settings of s_H and s_W, the rank after the decomposition does not change much, indicating similar predictions on y^k. With deep learning models f_θ such as those we use in this paper, we believe that more complex local patterns can be resolved and the spatial factors can be set larger.

4 EXPERIMENTS

To evaluate the acceleration effect and accuracy of the proposed method, we test three cases of fluid dynamics simulation governed by the Navier-Stokes equation. We first target two benchmark settings, i.e., the periodic boundary condition and the lid-driven cavity boundary condition (Zienkiewicz et al., 2006). In both settings, the initial condition varies, and the neural PDE solver learns to generalize to various initial conditions. Next, we test the more challenging case called flow around obstacles, where several obstacles are placed inside the flow. The neural PDE solver is trained to generalize to different obstacles as well as initial conditions. In addition, the state of the fluid changes considerably over time; to ensure the neural solver generalizes to various states, we maintain a training pool that stores states newly predicted during training. Finally, we also evaluate the capability on an inverse problem, i.e., optimal control in the flow-around-obstacles setting. In general, we consider the 2-dimensional incompressible Navier-Stokes equations:

\[ \rho \Big( \frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\vec{v} \Big) = -\nabla p + \mu \Delta \vec{v} + \vec{f}, \tag{13} \]
\[ \nabla \cdot \vec{v} = 0, \tag{14} \]

where \vec{v} is the fluid velocity field, p is the pressure field, µ is the viscosity, and \vec{f} is the external force. In all experiments, we train the neural networks with the Adam optimizer and decayed learning rates. The speed test is done on Nvidia A100 GPUs under the assumption that we have sufficient computational resources for each coarse-resolution solver. See Appendix Section 6.2 for more details.

4.1 PERIODIC AND LID-DRIVEN CAVITY BOUNDARY CONDITION

We first test the Navier-Stokes equation with the periodic boundary condition and the lid-driven cavity boundary condition. In both cases, the physics constrained loss is obtained by discretizing the vorticity-stream equation with the central-difference scheme and the Crank-Nicolson method on a 64 × 64 regular mesh. The time step ∆t is 1e−2 and the viscosity ν is 1e−3. We use the popular FNO (Li et al., 2020a) to test the accuracy and speed with different settings of the decomposition factors. The ground truth is obtained by FDM. We evaluate the accuracy by auto-regressively running the inference of the neural solver across the target time length L_T and comparing the terminal state with that of the ground truth. Note that we compare all results on the original mesh, so the spatially decomposed results are reconstructed to the 64 × 64 resolution for evaluation.
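For concreteness, this rollout evaluation can be summarized by the following sketch (our own illustration; `step_fn` stands for one reconstructed 64 × 64 update of the trained solver, and the error is the relative L2 error defined in the next paragraph):

```python
import torch

@torch.no_grad()
def rollout_relative_error(step_fn, u0, u_ref_final, n_steps):
    # Auto-regressively advance the reconstructed fine-mesh state and compare the
    # terminal prediction with the FDM ground truth (names are illustrative).
    u = u0
    for _ in range(n_steps):
        u = step_fn(u)
    return (torch.linalg.norm(u - u_ref_final) / torch.linalg.norm(u_ref_final)).item()
```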
We measure the relative error, which is calculated by dividing the L2 norm of the error by the L2 norm of the ground truth. The measurement is denoted by Error-k, where k is the number of time steps. Following the notations in Section 3.2, the decomposition factors along the x dimension, the z dimension and the temporal dimension are denoted by s_W, s_H and s_T. In general, NeuralStagger achieves acceleration in both cases without losing much accuracy. As can be seen in Figure 5, the coarse-resolution solver is also accurate when applied alone without reconstruction.

In the case of the periodic boundary condition, the target time length L_T equals 2, which is 200 time steps. The flow is driven by the external force \vec{f}, which is introduced in the appendix. As can be seen in Figure 4 (left), the relative errors of the learned neural solvers are lower than 0.2% in all settings of the spatial and temporal decomposition factors. In terms of speed, with the most aggressive setting s_T = 40, s_H = s_W = 2, and full parallelism, the inference time for the 200-time-step simulation is 0.076 seconds on average. Compared to 0.36 seconds for the baseline without NeuralStagger, there is a 47× speed-up. We can also observe some trends in accuracy with regard to the choice of spatial and temporal factors. Error-1 grows roughly linearly with the temporal factor s_T in both spatial-factor settings. The reason is that the learning task becomes more complex, as discussed in Section 3.3, and with the neural network unchanged, the accuracy drops. Meanwhile, the accumulated error, i.e., Error-200, remains at almost the same level. This is because the number of steps in the auto-regressive procedure decreases as s_T grows; e.g., when s_T = 40, the neural networks for the subtasks only predict 200/40 = 5 steps ahead. This benefit neutralizes the detriment of the increased task complexity.

In the case of the lid-driven cavity boundary condition, the fluid acts in a cavity consisting of three rigid walls with no-slip conditions and a lid moving with a steady tangential velocity of 1. We set the time length L_T = 27, much larger than that of the periodic boundary case, to see whether the simulation converges to the correct steady state. With larger L_T, we try larger temporal skip factors, such as s_T = 108. As shown in Figure 4 (right), the relative errors all stay below 0.5% even after 2700 time steps. Again, with the most aggressive setting s_T = 108, s_H = s_W = 2 and full parallelism, the neural solver finishes the 2700-time-step simulation within 0.038 seconds, about 119× faster than the baseline, i.e., 4.49 seconds. Different from the periodic boundary condition, the accuracy drops when we increase s_T. The reason is that the increase of s_T brings more detriment from task complexity than benefit from the shorter auto-regressive sequence.

4.2 FLOW AROUND OBSTACLES

In this section, we evaluate NeuralStagger in a larger and more complex setting called flow around obstacles. The setting is the same as that used in (Wandel et al., 2020), which is also our baseline. The fluid runs through a pipe, where we place different shapes of obstacles to affect the flow, including rotating cylinders and walls forming a folded pipe. The external force in Eq. 13 is neglected and set to 0. The neural solver is trained to generalize to different settings of the obstacles, including the shape and the velocity on the surface, as well as the inflow/outflow velocities.
We then evaluate the neural solver on 5 randomly sampled configurations in both the cylinder case and the folded-pipe case; see the appendix for more details. We use the same configurations as those in (Wandel et al., 2020), including the discretization method, the physics constrained loss, the training strategies, the input features, the predicted variables and the evaluation metric. Specifically, the rectangular domain is discretized into a 100 × 300 regular mesh and ∆t = 4. The physics constrained loss is used as the evaluation metric, measuring to what extent the prediction at the next time step satisfies the PDE given the current fluid state and the boundary conditions. As the fields of the fluid change considerably over time, we maintain a training pool initialized with a set of initial conditions and incrementally enrich it as training proceeds. This is possible because the predictions of the neural network can be treated as new data once the neural network fits the current pool well. One can refer to (Wandel et al., 2020) for more details.

Wandel et al. (2020) use a U-net as the neural solver, but to demonstrate the full potential of NeuralStagger, we also try two other neural network architectures, i.e., FNO and LordNet (Shi et al., 2022), the latter of which also leverages the physics constrained loss to train the neural PDE solver. We directly use the trained U-net from the official open-source repository of (Wandel et al., 2020) for evaluation and train FNO and LordNet from scratch. The experiments in Table 1 show that LordNet outperforms the other two neural networks in the baseline setting without NeuralStagger. Therefore, we use LordNet for further experiments on the choice of spatial and temporal factors. We find that in this case the information from the 100 × 100 grid (s_H = 1, s_W = 3) is sufficient to achieve results comparable to the U-net baseline, while larger spatial steps introduce too much information loss. In addition, increasing the temporal factor hurts the accuracy more noticeably than in the periodic and lid-driven boundary conditions, though the accuracy is still comparable to U-net even with s_T = 16. We believe this is because the dataset is explored incrementally by maintaining a training pool and enriching it with the neural network's predictions during training. However, these predictions may not be accurate. As the physics constrained loss is defined on û_{t+(s_T−1)∆t} and û_{t+s_T ∆t}, an inaccurate û_{t+(s_T−1)∆t} may mislead the neural network in the wrong direction. When we increase s_T, more errors accumulate along the sequence from û_t to û_{t+(s_T−1)∆t} and training becomes harder. Designing training algorithms that better support NeuralStagger remains unexplored, and we leave it for future work.

In terms of speed, the choices of spatial and temporal factors lead to different levels of acceleration, as shown in Table 1, where GMACs (giga multiply-accumulate operations) per card is the average computational load of simulating 16 time steps. Specifically, the largest factor configuration that keeps the accuracy comparable to the baseline is s_T = 16, s_H = 1, s_W = 3, leading to the largest decrease in GMACs per card, i.e., 1/32 of the baseline U-net and 1/48 of LordNet without NeuralStagger. When tested on A100 cards, this leads to a 28× speed-up over U-net and 17× over LordNet without NeuralStagger.
4.3 APPLICATION IN OPTIMAL CONTROL

To further showcase the capability of the neural solver with NeuralStagger on inverse problems, we conduct the optimal control experiment introduced in Wandel et al. (2020). The task is to change the flow speed to control the shedding frequency of a Kármán vortex street behind an obstacle. The shedding frequency is estimated from the frequency spectrum V(f) of the y-component of the velocity field behind the obstacle over 200 time steps, denoted by E[|V(f)|^2]. We define the loss function

\[ L = \big( \mathbb{E}\big[ |V(f)|^2 \big] - \hat{f} \big)^2, \]

where f̂ is the target frequency. We then compute the gradient of the velocity with respect to the loss by auto-differentiation through the neural solver and use the Adam optimizer (Paszke et al., 2017; Kingma & Ba, 2014) to update the velocity. We compare the result of the learned model with the setting s_H = 1, s_W = 3, s_T = 2 to that reported in Wandel et al. (2020). As shown in Figure 6, the velocity controlled by LordNet converges to the target velocity in fewer iterations.

5 CONCLUSION AND LIMITATIONS

We present NeuralStagger, a general framework for accelerating neural PDE solvers trained with a physics constrained loss. By spatially and temporally decomposing the learning task and training multiple lightweight neural networks, the neural solver is better parallelized and much faster given sufficient computational resources. In addition, each lightweight neural network is naturally a coarse-resolution solver, and together they bring the flexibility of producing solutions at multiple levels of resolution, which is important for balancing resolution and computational resources. We discuss the choice of decomposition factors and empirically test their influence on accuracy and speed. The experiments on fluid dynamics simulation show that NeuralStagger brings an additional 10 to 100× speed-up over SOTA neural PDE solvers with a mild sacrifice in accuracy.

There are also several limitations to be tackled in future work. Firstly, the accuracy drops as the decomposition factors grow. A potential solution would be to include historical states in the neural network input to make up for the information loss. Secondly, we only define the spatial decomposition over regular meshes, while for irregular meshes it turns into a non-trivial vertex coloring problem; heuristic coloring algorithms could be useful here. Thirdly, our experiments only show generalization to different initial conditions and boundary conditions. In the future, we would like to explore generalization to different mesh sizes.

6 APPENDIX

6.1 INFORMATION LOSS CAUSED BY SPATIAL DECOMPOSITION

In this section, we provide the proof of Proposition 1 in the linear model setting, i.e., we theoretically characterize the information loss caused by spatial decomposition under a linear model. Note that the proof is given for the 1-dimensional diffusion equation with the explicit method for ease of understanding, but, as we will see, the conclusion is the same in the 2-dimensional case and with the implicit method. We consider a simple 1-d partial differential equation with Dirichlet boundary conditions:

\[ \partial_t u = \Delta u, \qquad x \in \Omega, \tag{15} \]
\[ u_t(x) = f_t(x), \qquad x \in \partial\Omega. \tag{16} \]

Discretizing the function u on the grid (x_1, · · · , x_d), we denote û^j = u(x_j).
We consider the finite difference discretization:

\[ \frac{\hat{u}^j_{t+1} - \hat{u}^j_t}{\delta t} = \frac{(\hat{u}^{j+1}_t - \hat{u}^j_t) - (\hat{u}^j_t - \hat{u}^{j-1}_t)}{\delta x^2}, \qquad x_j \notin \{x_1, x_d\}, \tag{17} \]
\[ \hat{u}^j_{t+1} = f_{t+1}(x_j), \qquad x_j \in \{x_1, x_d\}. \tag{18} \]

Given the input û_t ∈ R^d and output û_{t+∆t} ∈ R^d, the output is parameterized by the linear model û_{t+∆t} = û_t W, where W ∈ R^{d×d} denotes the learned parameters. The physics constrained loss aims to learn the parameters W^* of the linear model that satisfy

\[ W^* = \arg\min_{W} \frac{1}{N} \sum_{i=1}^{N} \big\| \hat{u}^i_t W - y^i \big\|^2, \tag{19} \]

where i denotes the index of the training sample, y^j = f_{t+1}(x_j) for x_j ∈ {x_1, x_d}, and, consistent with the explicit scheme in Eq. (17),

\[ y^j = \hat{u}^j_t + \frac{\delta t}{\delta x^2}\Big( (\hat{u}^{j+1}_t - \hat{u}^j_t) - (\hat{u}^j_t - \hat{u}^{j-1}_t) \Big), \qquad x_j \notin \{x_1, x_d\}. \]

By applying spatial decomposition, the input and output are equally partitioned into K blocks {û^1_t, · · · , û^K_t} and {û^1_{t+∆t}, · · · , û^K_{t+∆t}}, each containing d/K coordinates. Then, according to the MSR loss, the optimization goal becomes

\[ W^*_1, \cdots, W^*_K = \arg\min_{W_1, \cdots, W_K} \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \big\| \hat{u}^{i,k}_t W_k - y^{i,k} \big\|^2, \tag{20} \]

where W_k ∈ R^{m×m}, m = d/K, for k = 1, · · · , K.

Proof. We first consider the case where \(\sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} \hat{u}^{i,k}_t\) is full rank. The minimizer of Eq. (20) is

\[ W^*_k = \Big( \sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} \hat{u}^{i,k}_t \Big)^{-1} \Big( \sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} y^{i,k} \Big). \]

Denote the matrix \(A = \big( \sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} \hat{u}^{i,k}_t \big)^{-1}\). We construct a d × d matrix B by letting B(k + i d/K, k + j d/K) = A(i, j) for i, j = 0, · · · , d/K − 1, and B(i, j) = 0 otherwise. It is then easy to check that B is the pseudo-inverse of \(\sum_{i=1}^{N} (\hat{u}^i_t)^{\tau} \hat{u}^i_t\). The minimizer of Eq. (19) is \(B\big( \sum_{i=1}^{N} (\hat{u}^i_t)^{\tau} y^i \big)\) (Bartlett et al., 2020). As the matrix B only has non-zero values at the coordinates that correspond to the k-th block, the k-th block of W^* equals W^*_k and the other blocks equal zero matrices.

Denoting the matrix composed of all the samples by the bold symbol without the superscript i, e.g., û_t for {û^i_t} and û^k_t for {û^{i,k}_t}, we have \(\sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} \hat{u}^{i,k}_t = (\hat{u}^k_t)^{\tau} \hat{u}^k_t\) and \(\sum_{i=1}^{N} (\hat{u}^i_t)^{\tau} \hat{u}^i_t = (\hat{u}_t)^{\tau} \hat{u}_t\). By the rank–nullity theorem, it is easy to see that rank((û_t)^τ û_t) = rank(û_t) and rank((û^k_t)^τ û^k_t) = rank(û^k_t). Then we get the result in the proposition.

For the case where \(\sum_{i=1}^{N} (\hat{u}^{i,k}_t)^{\tau} \hat{u}^{i,k}_t\) is not full rank, i.e., its rank is smaller than d/K, we can select a maximal linearly independent group to obtain its pseudo-inverse and apply similar analyses to get the result. In the case of the implicit method, the term û^i_t W in the physics constrained loss becomes û^i_t W V, where V is an invertible matrix; this also does not change the conclusion.

6.2 IMPLEMENTATION DETAILS

We implement FNO with the original 2-dimensional version from the official repository, where we truncate to 12 Fourier modes and set the width to 64. For LordNet, we stack only 2 Lord modules and fix the channel count to 64 in all layers. In the position-wise embedding of the 2 Lord modules, we stack two 1×1 convolutional layers, whose hidden embeddings contain 256 and 128 channels respectively, with GELU activation between the convolutional layers. The implementation of U-net is based on the U-Net architecture (Ronneberger et al., 2015) with 20 hidden channels, which is consistent with that in (Wandel et al., 2020). The learning rates and training samples are described below. To rule out the potential influence of computational resources such as cores and memory, we test the speed of NeuralStagger under the setting that each coarse-resolution solver has sufficient resources to use. Therefore, we run each solver on an Nvidia A100 GPU with a batch size of 1.
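As an illustration of this speed-test protocol (a hypothetical sketch, not the benchmarking code used for the paper), the effective time per fine step of a coarse-resolution solver can be measured as follows:

```python
import time
import torch

@torch.no_grad()
def time_per_fine_step(net, example_input, s_T, n_warmup=10, n_iters=100):
    # One coarse-resolution inference advances the solution by s_T fine time steps,
    # and the s_T staggered copies run in parallel, so the effective cost per fine
    # step is the single-inference latency divided by s_T (names are illustrative).
    for _ in range(n_warmup):
        net(example_input)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        net(example_input)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters / s_T
```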
The time per step shown in Table 1 is calculated by dividing the inference time of the coarse-resolution solver by the temporal factor s_T. The time for decomposition and reconstruction is ignored because the underlying 'pixel shuffle' operation is highly efficient. We also report GMACs (giga multiply-accumulate operations) per card, the average computational load of simulating 16 time steps. Note that for the GMACs of FNO, we do not include the Fourier transform operations.

Periodic Boundary Condition. We generate the data with random fields to produce a periodic function on a 64 × 64 grid with a time step of 1e−2, recording the solution at every time step; the external force is fixed to f(x) = 0.1 sin(2π(x + y)) + cos(2π(x + y)). For the periodic and lid-driven boundary conditions, we use the vorticity-stream function form of Eq. 13 as the physics constrained loss. Applying the Helmholtz decomposition to Eq. 13, we rewrite the Navier-Stokes equation as

\[ \frac{\partial \omega}{\partial t} = \frac{\partial \psi}{\partial y}\frac{\partial \omega}{\partial x} + \frac{\partial \psi}{\partial x}\frac{\partial \omega}{\partial y} + \frac{1}{Re}\Big( \frac{\partial^2 \omega}{\partial x^2} + \frac{\partial^2 \omega}{\partial y^2} \Big), \tag{21} \]
\[ \frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} = -\omega, \tag{22} \]

where ω is the vorticity function, ψ is the stream function, and Re is the Reynolds number. The initial condition ω_0 is generated from a random field with distribution N(0, 8^3(−∆ + 64I)^{−4.0}). We use 6000 states for training. In this case, we use FNO to test NeuralStagger and decay the initial learning rate of 3e−3 with a factor of 0.9 every 5000 iterations.

Lid-driven Cavity Boundary Condition. We generate data on a 64 × 64 grid but train the neural network to predict the values of ψ inside the boundary, which form a 2-dimensional matrix of shape (H − 2) × (W − 2). The random initial conditions are generated in the same way as for the periodic boundary condition. To make the initial state consistent with the boundary condition, we run the numerical solver for the first T_0 = 1.98 and use ω_{T_0} as the initial state. We use 8000 states for training with FNO and decay the initial learning rate of 3e−3 with a factor of 0.9 every 10000 iterations.

Flow around Obstacles. The data generation is the same as the setting used in (Wandel et al., 2020), where the resolution of the domain is 100 × 300, ∆t = 4, ρ = 4, µ = 0.1. In training, different types of environments are used, including magnus, box, pipe, and wing. The locations and velocities vary during training, e.g., the velocity ranges from 0.0 to 1 m/s, the diameter of the obstacle ranges from 10 to 40, the x coordinate of the obstacle location is sampled from 65 to 75 and the y coordinate from 40 to 60. For testing, we randomly select the location and flow velocity; in our experiments the Reynolds number of the tests is 517. In this case, we train the model from scratch without any data for s_T = 1. For s_T > 1, we use the benchmark to pre-generate the initial sequence û_{0, s_T} for training. The learning rate is 1e−3 for LordNet and 3e−3 for FNO, both decayed with a factor of 0.9 every 5000 iterations. The quantitative comparison in this paper is conducted on the 100 × 300 grid.

For the optimal control of the vortex shedding experiment, the domain size is 100 × 300, and we use the trained neural PDE solver based on the above training settings. The Reynolds number here is 880. The optimizer for both U-net and LordNet is Adam with a learning rate of 1e−3.

6.3 THE RESULTS OF THREE CASES WITH DIFFERENT SPATIAL-TEMPORAL FACTORS
1. What is the focus and contribution of the paper regarding fluid system dynamics prediction?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of spatial and temporal staggering?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Does the reviewer have any questions or concerns regarding the method's applicability, practicality, and relation to previous works?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper proposes to address the challenge of learning models to predict the dynamics of fluid systems with high resolutions by interleaving discretization points spatially and temporally. I.e., under the assumption that the solution is sufficiently continuous, multiple evaluations of a network can be used and evaluated in parallel. While the central goal of the paper is an important one, and the idea seems attractive at first, I'm less convinced after seeing the results: there seem to be clear limitations in terms of spatial "staggering", and the temporal one primarily means taking larger time steps. The latter also seems responsible for the majority of the speed-up.

Strengths And Weaknesses

In terms of positive aspects, the paper is clearly written and presents the results in an honest way. It also contains a good evaluation. On the negative side, as mentioned above, I do have concerns over the general applicability of the decomposition. The main algorithmic change is due to the different spatial networks. Once the solutions become less smooth, I would expect these networks to have a harder time accurately reproducing the solutions, as acknowledged in the text. The examples shown in the paper, however, focus on relatively small resolution changes and smooth solutions. The "temporal staggering" is fundamentally different in nature. This simply means training the network to directly produce solutions at a later time. It is not overly surprising (and was demonstrated in other papers) that a network can handle larger time steps and larger CFL conditions than regular solvers. It seems this is where most of the gains in performance come from. Also, on the practical side it is not overly attractive having to deal with multiple piece-wise evaluations of the network (but this could of course potentially be hidden with suitable wrapper code).

Clarity, Quality, Novelty And Reproducibility

The paper is clearly written for the largest part, and the idea is original to the best of my knowledge. My main concerns are with the general usefulness of the proposed method. Btw., I have a question for the authors here: how is the staggering related to the unsupervised training? Shouldn't it work similarly well if the data is precomputed using a supervised loss? The related work is a bit strange, in my opinion. The CNN-based approach here seems to be more in line with works such as "Accelerating Eulerian Fluid Simulation With Convolutional Networks" from 2017 rather than with PINNs. And the "neural operator" label is typically reserved for continuous descriptions (along the lines of Fourier features), rather than for "everything else than PINNs".
Then we jointly train a lightweight neural network to predict the solution in each coarseresolution field respectively, which can be naturally a coarse-resolution neural solver to the original PDE. We design the decomposition rules so that the outputs of these lightweight networks can re- construct the solutions in the original field with simple arrangements. For ease of reading, here and also in most parts of the paper, we illustrate the decomposition methodology in the 2-dimensional example with regular mesh and finite difference approximation. Figure 1 (top) shows the physical field in a 4 × 4 mesh is decomposed into 4 coarser-resolution fields, each of which is handled by a small neural network. We could also do similar things along the temporal dimension, as is shown in Figure 1 (bottom). The group of coarse-resolution solvers as well as the decomposition and reconstruction operations can be seen as an end-to-end neural PDE solver, which can be trained with the physics constrained loss that resolves small-scale physics in a sufficiently high resolution. Because the neural networks can run in parallel, the original simulation is achieved as fast as a coarse-resolution neural solver. In addition, the trained neural networks can predict the PDE’s solution in various levels of resolution, ranging from the resolution of the individual coarse-resolution solver to the resolution of the physics constrained loss by the combination of all these solvers. We believe that such flexibility is vital in balancing the computational resources and the resolution. We demonstrate the effectiveness of the NeuralStagger in the Navier-Stokes equation with three parametric settings, e.g., periodic boundary conditions with varied initial conditions, lid-driven cavity boundary conditions with varied initial conditions, and the flow around the obstacle with varied obstacles and initial conditions. We find that with NeuralStagger, the learned networks can conduct accurate and stable simulation with 10∼100 times speed-up over SOTA neural PDE solvers. In addition, we demonstrate that they can accurately tackle the optimal control task with autodifferentiation. Our contributions can be summarized in three parts: • We propose a general methodology called NeuralStagger to accelerate neural PDE solving by spatially and temporally decomposing the learning task and running a group of coarseresolution solvers in parallelism. • The learned network group can provide solutions in multiple resolutions from the coarsest one by a single network to the original resolution, which provides the flexibility to balance the computational resources and the resolution. • Empirically, we demonstrate that the methodology leads to 10 to 100 times speed-up over SOTA neural PDE solvers as well as the efficient solution on optimal control. In the following sections, we first briefly summarize the related works in Section 2 and then introduce the preliminaries and the proposed NeuralStagger in Section 3. To showcase the efficiency and accuracy of the proposed method, we present the settings of the experiment and results in Section 4. Finally, we conclude and discuss the future work in Section 5. 2 RELATED WORK In general, two mainstream approaches have been widely used for solving PDEs. The first is to approximate the PDE’s solution function with neural networks (Raissi et al., 2019; 2020; Jin et al., 2021). They have proved to be successful in tackling high-dimensional problems and inverse problems. 
The second is to learn a PDE solver to solve parametric PDEs. The neural PDE solver can learn the solutions of a class of PDEs, and thus can generalize to PDEs with different parameters. Our work is mainly about the accelerating the second type. Many impressive works have been done to improve the neural solver for parametric PDEs in terms of neural network design, e.g., convolutional neural network (Guo et al., 2016; Tompson et al., 2017; Bhatnagar et al., 2019), graph neural networks (Pfaff et al., 2020), the multipole graph kernel (Li et al., 2020b), Fourier neural operators (Li et al., 2020a; Guibas et al., 2021), the message passing neural network (Brandstetter et al., 2022b), deepOnet (Lu et al., 2021a), Clifford neural networks (Brandstetter et al., 2022a) and so on. After being trained with pre-generated simulated data and labels, they can solve the PDE several magnitudes faster than conventional numerical solvers with competitive accuracy. Recently there are raising concerns about the cost of collecting training data and the generalization ability, so several works have introduced the physics constrained loss for training. For example, (Wang et al., 2021) combined the DeepOnet with a physics-informed way to improve the sample efficiency. Zhu et al. (2019) proposed physics constrained loss for high-dimensional surrogate modeling and (Geneva & Zabaras, 2020) introduced the use of a physics constrained framework to achieve the data-free training in the case of Burgers equations. Wandel et al. (2020; 2021) proposed the physics constrained loss based on the certain approximation of the Navier-Stokes equation to solve fluidlike flow problems. Shi et al. (2022) proposed a general physics constrained loss called mean square residual (MSR) loss as well as a neural network called LordNet for better performance. However, the physics constrained loss by certain approximations require the approximation to be sufficiently close to the continuous version, resulting in a relatively high-resolution discretization. Thus in complex and large-scale problems, the neural solver must be large enough for expressiveness and its inference would still be slow. Although some works (Wang et al., 2021) directly calculate the derivatives via back-propagation through the neural network, they are known to have similar training problems as PINN, e.g., converging to trivial solutions. Interestingly in the case of regular mesh, the proposed spatial decomposition is the same in the implementation as ‘pixel shuffle’ from computer vision. There are a huge number of works in this direction, but the most related one might be (Ren et al., 2022) which leverages pixel shuffle and physics constrained loss in the super-resolution task. However, we are fundamentally different in target and solution. For example, we train multiple solvers to work in full parallelism and obtain the solution in multiple levels of resolution without training them again. We also find similar treatment on meshes in classical numerical methods, e.g., staggered-mesh and leap-frog integration. However, they are also fundamentally different in target and implementation. The numerical methods often place meshes of multiple fields with offsets to get more accurate approximation while NeuralStagger splits the mesh of every single field into multiple sub-meshes for defining the independent subtasks. 
In addition, they are orthogonal to NeuralStagger, i.e., one can leverage both the staggered-mesh to define the physics constrained loss and NeuralStagger to train multiple coarse-resolution solvers at the same time, as is done in our experiment. 3 METHODOLOGY 3.1 PRELIMINARIES Consider a connected domain Ω ⊆ Rn with boundary ∂Ω, and let (A,U ,V) be separable Banach spaces. Then the parametric PDEs can be defined as the form S(u,a)(x) = 0, x ∈ Ω (1) where S : U × A → V is a linear or nonlinear differential operator, a ∈ A denotes the parameters under certain distribution µ, such as coefficient functions or boundary/initial conditions, and u ∈ U is the corresponding unknown solution function. Further, we can define the solution operator of the parametric PDE G : A → U , which maps two infinite-dimensional function spaces. A main branch of works in neural PDE solvers approximate the solution operator by discretizing the functions into finite dimensional spaces denoted by  and Û and learning the mapping fθ :  → Û . Correspondingly, we have the discretized version of the PDE’s operator S by certain finite-dimensional approximations such as the finite difference method (FDM) and finite element method (FEM), which is denoted by Ŝ. We denote the vector of the function values in a mesh with the hat symbol, e.g., â is the vector of the PDE’s parameter a ∼ µ. Then the physics constrained loss is defined by forcing the predicted solution û ∈ Û to satisfy Ŝ given â ∈ Â. For example, LordNet (Shi et al., 2022) proposed the general form with the mean squared error as follows, L(θ) = Ea∼µ||Ŝ(fθ(â), â)||2, (2) In this paper, we mainly focus on time-dependent problems as follows, S(u,a)(t,x) = 0, (t,x) ∈ [0, T ]× Ω (3) The temporal dimension is discretized with the timestep ∆t and the neural solver solves the PDE in an auto-regressive way, ût+∆t = fθ(ût, â) (4) where ût is the corresponding discretized vector of the function u at time t. Figure 2 shows an example with a 4 × 4 rectangle mesh. Notice that similar to traditional numerical methods, the resolution of the finite-dimensional approximation in physics constrained loss, either in the spatial dimension or in the temporal dimension, must be sufficiently high, otherwise, the approximation error will be too large to guide the neural PDE solver. This leads to huge input and output as well as large neural networks to ensure expressiveness, whose inference would also be slow. 3.2 NEURALSTAGGER We propose a general methodology called NeuralStagger to gain further accelerations by exploiting the potential parallelism in the neural PDE solver. NeuralStagger decomposes the original learning task that maps ût to ût+∆t into several parallelizable subtasks in both spatial and temporal dimensions. The meshes of the subtasks spread evenly in the original field and stagger with each other. Then we can handle each subtask with a computationally cheap neural network. The decomposition strategy is introduced as follows. Spatial decomposition. The upper part of Figure 1 shows the 2-dimensional example with regular mesh. We first split the grid into patches of the size sH × sW and construct a subgrid by selecting only one point in each patch, resulting in sH ×sW subgrids evenly spread in the domain. We denote the functions in each sub-grid as ûi,jt and â i,j t where i and j represents the relative position of the sub-grid in horizontal and vertical directions. 
Then we use sH × sW neural networks to learn to predict the solution at t+∆t as follows, ûi,jt+∆t = fθi,j (û i,j t , â i,j), (5) where fθi,j is the neural network for the sub-grid at the position (i, j). The outputs û i,j t+∆t compose the solution at the original grid. Then the neural networks can be jointly trained with the physics constrained loss defined on the original grid. Notice that the neural networks are independent of each other and can be fully paralleled. As the input and output decrease by sH × sW times, the neural network can be much smaller and faster than the original one to be used for the neural solver. The decomposition rules can be extended to higher-dimensional cases. In addition, the learning tasks at the subgrids are quite close to each other, except for the difference in the boundary of the domain, so we share the parameters of the neural networks fθi,j to reduce redundancy and accelerate training. Meanwhile, because there are often tiny differences between the inputs of the subtasks, we encourage the neural network to distinguish them by adding positional information of each grid point as additional input channels. Temporal decomposition. We can treat the temporal dimension as a 1-dimensional grid with a fixed step ∆t. Thus we can also decompose the grid into sT sub-grids by selecting a point for every sT points, where instead of predicting ût+∆t, the neural network predicts ût+sT∆t, ût+sT∆t = fθ (ût, â) , (6) Given the solution sequence from t to t + (sT − 1)∆t denoted by ût,sT for simplicity, we can get the next sequence of the solution ût+sT∆t,sT . Then the physics constrained loss is defined on the sequence with timestep ∆t, as is shown in the lower part of Figure 1. Once the neural network is trained, we can generate the sequence ût+sT∆t,sT by running the neural network inference of Formula 6 with sT threads in parallel with inputs ût,sT . The non-auto-regressive process can generate the solution in sT time steps within one inference step, which can be much faster than the original version (Figure 2) with sT inference steps. Note that though we only need the initial condition for the coarsest-resolution test, we must prepare the first sT states with numerical solvers for training and the high-resolution test. However, this drawback is neglectful for long-time simulations. The spatial and temporal decompositions are orthogonal and can be used at the same time. We denote the joint decomposition operator as Ds, the transformation operator of the neural networks as FΘ and the reconstruction operator Es, where s represents all decomposition factors including sH , sW and sT , Θ represents all parameters of the neural network group. The physics constrained loss with the spatial-temporal decomposition can be written as, L(Θ) = Eût,sT ||Ŝ (Es (FΘ (Ds (ût,sT , â))) , ût,sT , â) || 2. (7) In addition, as the sub-grids spread evenly in the domain of the PDE, each of them can be seen as the down-sampled version of the original problem, where a local patch is reduced to the point at a fixed relative position in the patch. Therefore, the learned neural networks are naturally coarseresolution solvers to the PDE. Suppose (H,W, T ) is the tuple of the original height, width, and time span that the physics constrained loss is conducted on. Then the coarse-resolution solvers are conducted on the resolution ( HsH , W sW , TsT ). 
Meanwhile, we can infer multiple levels of resolutions ranging from that of coarse-resolution solvers to the original one, all of which can reach the same speed by parallelism. 3.3 CHOICE OF THE DECOMPOSITION FACTORS Obviously, the acceleration effect by NeuralStagger grows as we use larger sH , sW and sT . However, these decomposition factors cannot be arbitrarily large. We conclude two potential constraints, i.e., the increased complexity of the learning task and the information loss in the input. We would like to leverage the following 2-dimensional diffusion equation with the periodic boundary condition as an example to explain the two constraints, ∂u(x, y, t) ∂t = ∆u(x, y, t), x, y, t ∈ [0, 1], (8) u(x, y, 0) = f(x, y), x, y ∈ [0, 1], (9) where u is the density function of diffusing material, ∆ is the Laplacian operator and f is the function of the initial condition. We use the regular mesh with d points in total and leverage the central difference scheme with the spatial step ∆x and temporal step ∆t. Then the PDE is transformed into a matrix equation on the discretized solution at certain time t, denoted by ût ∈ Rd. Increased complexity of learning task. For the temporal dimension, we find that the larger decomposition factor might make the mapping from the input to the prediction more complex. For the linear diffusion equation, we can explicitly calculate the transfer matrix from ûi to ûi+∆t based on the matrix equation. Suppose the transfer matrix is Ti ∈ Rd×d. By iterative applying the transfer matrix, we can get the transformation from the initial condition û0 to the solution at any time step k as follows, ûk∆t = û0 k−1∏ 0 Ti. (10) For notational simplicity, we denote the resulting transfer matrix from û0 to ûk∆t as Tk. By certain arrangements, Tk is a band matrix where the non-zero values are centralized around the diagonal. The bandwidth indicates the sparsity of the matrix as well as how local the points in the mesh entangle with each other. We observe that the bandwidth grows linearly with regard to k. For example, Figure 3 shows the case of d = 642. When the k ≥ 60, the matrix is dense and every element in ûk∆t is a weighted summation of almost all the elements in ût. This indicates that increasing k may make the entanglements between the grid points more complex, leading to a harder learning task for the neural network. Information loss. By spatial decomposition, each subgrid only reserves a small part of the original grid. Obviously, it may introduce the problem of information loss if the dropped points are important for the prediction in the subtasks. Here we theoretically characterize the information loss caused by spatial decomposition under the linear model setting, i.e., f(ût) = ûtW ∗. Consider the diffusion equation and the corresponding matrix equation. With some abuse of notation, the superscript i denotes the index of training samples, such as ûit and the bold symbol without the superscript i denotes the matrix composed of all the samples, such as ût. With N training samples, the physics constrained loss aims to learn the parameters W ∗ of the linear model that satisfies: W ∗ = argmin W 1 N N∑ i=1 ∥ûitW − yi∥2, (11) where yi denotes the rest parts of the matrix equation. By applying spatial decomposition, the input and output are equally partitioned into K = sHsW subgrids {û1t , · · · , ûKt } and {û1t+1, · · · , ûKt+1}. 
Then according to the physics constrained loss, the optimization goal becomes: W ∗1 , · · · ,W ∗K = argmin W1,··· ,WK 1 N N∑ i=1 K∑ k=1 ∥(ûi,kt Wk − yi,k)∥2, (12) where Wk ∈ Rm×m,m = d/K for k = 1, · · · ,K. The next proposition shows a sufficient condition for equal prediction for Eq.(11) and Eq.(12). Proposition 1. If rank(ût) = rank(ûkt ), the model ûtW ∗ and ûktW ∗k will make the same prediction on yk. We put the proof in the appendix. In many physical scenarios, the local patches of size sHsW do not distribute arbitrarily in the ambient space RsHsW , but rather live in some low-dimensional manifold. Hence, there is much information redundancy in ût and with careful settings of sH and sW , the rank after the decomposition does not change much, indicating similar predictions on yk. With deep learning models fθ such as those we use in this paper, we believe that more complex local patterns can be resolved and the spatial factors can be set larger. 4 EXPERIMENTS To evaluate the acceleration effect and accuracy of the proposed method, we test three cases of fluid dynamics simulation governed by the Navier-Stokes equation. We first target two benchmark settings, i.e., the periodic boundary condition and the lid-driven cavity boundary condition (Zienkiewicz et al., 2006) In both settings, the initial condition changes, and the neural PDE solver learns to generalize to various initial conditions. Next, we test the more challenging case called flow around obstacles, where several obstacles are placed inside the flow. The neural PDE solver is trained to generalize to different obstacles as well as initial conditions. In addition, the state of the fluid changes quite a lot over time. To ensure the neural solver generalizes to various states, we must maintain a training pool to store states newly predicted during training. At last, we also evaluate the capability to the inverse problem, i.e., the optimal control on the flow-around-obstacles setting. In general, we consider the 2-dimensional incompressible Navier-Stokes equation as follows: ρ ( ∂v⃗ ∂t + (v⃗ · ∇)v⃗ ) = −∇p+ µ∆v⃗ + f⃗ (13) ∇ · v⃗ = 0 (14) where v⃗ is the fluid velocity field, p is the pressure field, µ is the viscosity, and f⃗ is the external force. In all experiments, we trained neural networks with Adam optimizer and decayed learning rates. The speed test is done on Nvidia A100 GPUs under the assumption that we have sufficient computational resources for each coarse-resolution solver. See Appendix Section 6.2 for more details. 4.1 PERIODIC AND LID-DRIVEN CAVITY BOUNDARY CONDITION We first test the Navier-Stokes equation with the periodic boundary condition and the lid-driven cavity boundary condition. In both cases, the physics constrained loss is obtained by discretizing the vorticity-stream equation with the central-difference scheme and the Crank-Nicolson method in the 64× 64 regular mesh. The time step ∆t is 1e− 2 and the viscosity ν is 1e− 3. We use the popular FNO (Li et al., 2020a) to test the accuracy and speed in different settings of decomposition factors. The ground truth is obtained by FDM. We evaluate the accuracy by auto-regressively running the inference of the neural solver across the target length along time LT and compare the terminal state with that from the ground truth. Note that we compare all the results on the original mesh and thus the spatially decomposed results reconstruct to the 64 × 64 resolution for evaluation. 
We measure with the relative error which is calculated by dividing the L2 norm of the error by the L2 norm of the ground truth. The measurement is denoted by Error-k where k is the number of time steps. Following the notations in Section 3.2, the decomposition factors along x dimension, z dimension and the temporal dimension are denoted by sW , sH and sT . In general, NeuralStagger achieves acceleration in both cases without losing much accuracy. As you can see in Figure 5, the coarseresolution solver is also accurate when applied alone without reconstruction. In the case of the periodic boundary condition, the target length along time LT equals 2, which is 200 time steps. The flow is driven by the external force f⃗ , which is introduced in the appendix. As you can see in Figure 4 (left), the relative errors of the learned neural solvers are lower than 0.2% in all settings of spatial and temporal decomposition factors. In terms of speed, with the most aggressive setting sT = 40, sH = sW = 2, and full parallelism, the inference time for the 200- time-steps simulation is 0.076 seconds on average. Compared to 0.36 seconds by the baseline without NeuralStagger, there is 47× speed-up. We can also observe some trends in accuracy with regard to the choice of spatial and temporal factors. Error1 grows like a linear function with the temporal factor sT in both spatial factor settings. The reason is that the learning task becomes more complex as we discuss in Section 3.3, and with the neural network unchanged, the accuracy drops. Meanwhile, the accumulated errors, i.e., Error200, almost keep at the same level. This is because the steps in the auto-regressive procedure reduce as sT grows, e.g., when sT = 40, the neural networks for subtasks only predict 200/40 = 5 steps ahead. The benefit perfectly neutralizes the detriment of the increased task complexity. In the case of the lid-driven cavity boundary condition, the fluid acts in a cavity consisting of three rigid walls with no-slip conditions and a lid moving with a steady tangential velocity 1. We set the length of time LT = 27, much larger than that with the periodic boundary, to see if the simulation converges to the right steady state. With larger LT , we try larger temporal skip factors such as sT = 108. As is shown in Figure 4 (right), the relative errors are all controlled below 0.5% even after 2700 time steps. Again, with the most aggressive setting sT = 108, sH = sW = 2 and full parallelism, the neural solver finishes the 2700-time-steps simulation within 0.038 seconds, about 119× faster than the baseline, i.e., 4.49 seconds. Different from the periodic boundary condition, the accuracy drops when we increase sT . The reason is that the increase of sT brings more detriments of task complexity than the benefits from the shorter auto-regressive sequence. 4.2 FLOW AROUND OBSTACLES In this section, we evaluate NeuralStagger in a larger and more complex setting called flow around obstacles. The setting is the same as that used in (Wandel et al., 2020), which is also our baseline. The fluid runs through a pipe, where we put different shapes of obstacles to affect the flow, including rotating cylinders and walls constructing a folded pipe. The external forces in Eq. 13 are neglected and set to 0. The neural solver is trained to generalize to different settings of the obstacles, including the shape and the velocity on the surface as well as the inflow/outflow velocities. 
Then we evaluate the neural solver in 5 randomly sampled configurations in both the cylinder case and the folded pipe case. You may refer to the appendix for more details. We leverage the same configurations as those in (Wandel et al., 2020) including the discretization method, the physics constrained loss, training strategies, the input features, the predicted variables as well as the evaluation metric. Specifically, the rectangular domain is discretized into a 100 × 300 regular mesh and ∆t = 4. The physics constrained loss is used as the evaluation metric, measuring to what extent the prediction at the next time step satisfies the PDE given the current fluid state and the boundary conditions. As the fields of the fluid change much over time, we maintain a training pool initialized with a set of initial conditions and incrementally enrich it as the training goes. This is achieved because the predictions from the neural network can be seen as new data if the neural network has been well fitted in the current pool. One can refer to (Wandel et al., 2020) for more details. Wandel et al. (2020) leverages U-net as the neural solver, but to demonstrate the full potential of NeuralStagger, we also try the other two neural network architectures, i.e., FNO and LordNet (Shi et al., 2022) which also leverages the physics constrained loss to train the neural PDE solver. We directly use the trained U-net from the official open-source repository of (Wandel et al., 2020) for evaluation and train FNO and LordNet from scratch. The experiments in Table 1 show that LordNet outperforms the other two neural networks in the baseline setting without NeuralStagger. Therefore, we use LordNet for further experiments on the choice of spatial and temporal factors. We find that in this case, the information from the 100 × 100 grid (sH = 1, sW = 3) is sufficient to achieve comparable results to the U-net baseline, while larger spatial steps will introduce too much information loss. In addition, it seems increasing the temporal factors hurts the accuracy more obviously than those in the periodic boundary condition and the lid-driven boundary condition, though the accuracy is still comparable to U-net even with sT = 16. We believe this is because the dataset is incrementally explored by maintaining a training pool and enriching it with the neural network’s predictions during training. However, the predictions may not be accurate. As the physics constrained loss is defined on ût+(sT−1)∆t and ût+sT∆t, inaccurate ût+(sT−1)∆t may mislead the neural network to the wrong direction. When we increase sT , more errors will be accumulated along the sequence from ût the ût+(sT−1)∆t and the training will be harder. Designing training algorithms to better support NeuralStagger remains unexplored and we leave it for future work. In terms of speed, the choices of spatial and temporal factors lead to different levels of acceleration, as is shown in Table 1, where GMACs (multiply-accumulate Operations) per card is the average computational load of simulation for 16 timesteps. Specifically, the largest factor configuration to keep the accuracy comparable to the baseline is sT = 16, sH = 1, sW = 3, leading to the largest decrease in GMACs per card, i.e., 1/32 of the baseline U-net and 1/48 of LordNet without NeuralStagger. Specifically, when tested with A100 cards, it leads to 28× speed-up over U-net and 17× over LordNet without NeuralStagger. 
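The training-pool strategy described above can be summarized in a few lines. The sketch below is schematic rather than a faithful reproduction of Wandel et al. (2020): `physics_residual` stands for whatever discretized PDE residual defines the physics constrained loss, the enrichment criterion is simplified to a fixed interval, and all names are placeholders.

```python
import random
import torch

def train_with_pool(solver, optimizer, physics_residual, initial_pool,
                    steps=10_000, batch_size=16, enrich_every=50):
    """Schematic training loop with a growing pool of fluid states."""
    pool = list(initial_pool)                    # (state, boundary_condition) pairs
    for step in range(steps):
        batch = random.sample(pool, batch_size)
        state = torch.stack([s for s, _ in batch])
        bc = torch.stack([b for _, b in batch])
        pred = solver(state, bc)                 # predict the state sT steps ahead
        loss = physics_residual(pred, state, bc).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Periodically treat the network's own predictions as new training states,
        # so the pool gradually covers states reachable from the initial conditions.
        if step % enrich_every == 0:
            with torch.no_grad():
                pool.extend(zip(solver(state, bc), bc))
    return solver
```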
4.3 APPLICATION IN OPTIMAL CONTROL To further showcase the capability of the neural solver with NeuralStagger on the inverse problem, we conduct the optimal control experiment introduced in Wandel et al. (2020). The task is to change the flow speed to control the shedding frequency of a Kármán vortex street behind an obstacle. The shedding frequency is estimated by the frequency spectrum V (f) of the y-component of the velocity field behind the obstacle over 200 time steps, denoted by E [ |V (f)|2 ] . We define the loss function L = ( E [ |V (f)|2 ] − f̂ )2 , where f̂ is the target frequency. Then we compute the gradient of the velocity with regard to the loss by auto-differentiation through the neural solver and leverage Adam optimizer (Paszke et al., 2017; Kingma & Ba, 2014) to update the velocity. We compare the result of the learned model with the setting sH = 1, sW = 3, sT = 2 to that shown in Wandel et al. (2020). As is shown in Figure 6, the velocity controlled by LordNet converges to the target velocity with fewer iterations. 5 CONCLUSION AND LIMITATION We present NeuralStagger, a general framework for accelerating the neural PDE solver trained by physics constrained loss. By spatially and temporally decomposing the learning task and training multiple lightweight neural networks, the neural solver is better paralleled and much faster with sufficient computational resources. In addition, each lightweight neural network is naturally a coarseresolution solver and they bring the flexibility of producing the solutions on multiple levels of resolution, which is important for balancing the resolution and computational resources. We discuss the choice of decomposition factors and empirically test their influence on accuracy and speed. The experiments in fluid dynamics simulation show that NeuralStagger brings an additional 10 to 100× speed-up over SOTA neural PDE solvers with mild sacrifice on accuracy. There are also several limitations to be tackled in future works. Firstly, the accuracy drops with the growing decomposition factors. A potential solution would be introducing historical states in the neural network input to make up for the information loss. Secondly, we only define the spatial decomposition over regular meshes, while it turns to the non-trivial vertex coloring problem for irregular meshes. Heuristic coloring algorithms would be useful for this problem. Thirdly, our experiments only show the generalization to different initial conditions and boundary conditions. In the future, we would like to explore the generalization to different mesh sizes. 6 APPENDIX 6.1 INFORMATION LOSS CAUSED BY SPATIAL DECOMPOSITION In this section, we provide the proof to proposition 1 in the linear model setting. In this section, we will theoretically characterize the information loss caused by spatial decomposition under the linear model setting. Note that the proof is done on the 1-dimensional diffusion equation with the explicit method for ease of understanding, but as we will see, the conclusion is the same in the case with 2 dimensions or the implicit method. We consider a simple 1d partial differential equation with Dirichlet boundary condition: ∂tu = ∆u, x ∈ Ω (15) ut(x) = ft(x), x ∈ ∂Ω (16) Discretizing the function u on grid (x1, · · · , xd), we denote ûj = u(xj). 
We consider the finite difference discretization: (û^j_{t+1} − û^j_t)/δt = ((û^{j+1}_t − û^j_t) − (û^j_t − û^{j−1}_t))/δx², for x_j ∉ {x_1, x_d}, (17) û^j_{t+1} = f_{t+1}(x_j), for x_j ∈ {x_1, x_d}. (18) Given the input ût ∈ R^d and output ût+∆t ∈ R^d, the output is parameterized by the linear model ût+∆t = ûtW, where W ∈ R^{d×d} denotes the learned parameters. The physics constrained loss aims to learn the parameters W∗ of the linear model that satisfy: W∗ = argmin_W (1/N) Σ_{i=1}^N ∥û^i_t W − y^i∥², (19) where i denotes the index of the training samples, y^j = f_{t+1}(x_j) for x_j ∈ {x_1, x_d}, and y^j = û^j_t − (δt/δx²)((û^{j+1}_t − û^j_t) − (û^j_t − û^{j−1}_t)) for x_j ∉ {x_1, x_d}. By applying spatial decomposition, the input and output are equally partitioned into K blocks {û¹_t, · · · , û^K_t} and {û¹_{t+∆t}, · · · , û^K_{t+∆t}}. Each block contains d/K coordinates. Then, according to the MSR loss, the optimization goal becomes: W∗_1, · · · , W∗_K = argmin_{W_1,··· ,W_K} (1/N) Σ_{i=1}^N Σ_{k=1}^K ∥û^{i,k}_t W_k − y^{i,k}∥², (20) where W_k ∈ R^{m×m}, m = d/K, for k = 1, · · · , K. Proof: We first consider the case that Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t is full rank. The minimizer of Eq. (20) is W∗_k = (Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t)^{−1} (Σ_{i=1}^N (û^{i,k}_t)^⊤ y^{i,k}). We denote the matrix A = (Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t)^{−1} and construct a d×d matrix B by letting B(k + i·d/K, k + j·d/K) = A(i, j) for i, j = 0, · · · , d/K − 1, and B(i, j) = 0 otherwise. Then it is easy to check that the matrix B is the pseudo-inverse of Σ_{i=1}^N (û^i_t)^⊤ û^i_t. The minimizer of Eq. (19) is B(Σ_{i=1}^N (û^i_t)^⊤ y^i) (Bartlett et al., 2020). As the matrix B only has non-zero values on the coordinates that correspond to the k-th block, the k-th block of W∗ equals W∗_k and the other blocks are zero matrices. Denoting the matrix composed of all the samples by the bold symbol without the superscript i, such as ût for {û^i_t} and û^k_t for {û^{i,k}_t}, we have Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t = (û^k_t)^⊤ û^k_t and Σ_{i=1}^N (û^i_t)^⊤ û^i_t = (ût)^⊤ ût. By the rank–nullity theorem, it is easy to see that rank((ût)^⊤ ût) = rank(ût) and rank((û^k_t)^⊤ û^k_t) = rank(û^k_t). Then we get the result in the proposition. For the case that Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t is not full rank, we can select a maximal linearly independent subset to obtain its pseudo-inverse and apply similar analyses to get the result. In the case of the implicit method, the term û^i_t W in the physics constrained loss becomes û^i_t W V, where V is an invertible matrix. This also does not change the conclusion. 6.2 IMPLEMENTATION DETAILS We implemented FNO with the original 2-dimensional version in the official repository, where we set the truncation mode to 12 and the width to 64. For the LordNet, we only stack 2 Lord modules and fix the channel count to 64 in all layers. In the position-wise embedding of the 2 Lord modules, we stack two 1×1 convolutional layers, where the hidden embeddings contain 256 and 128 channels respectively, and GELU activation is used between the convolutional layers. The implementation of U-net is based on the U-Net architecture (Ronneberger et al., 2015) with 20 hidden channels, which is consistent with that in (Wandel et al., 2020). The learning rates and training samples are described as follows. To rule out the potential influence of computational resources like cores and memory, we test the speed of NeuralStagger under the setting that each coarse-resolution solver has sufficient resources to use. Therefore, we run each solver on Nvidia A100 GPUs with a batch size of 1.
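As a small aside connecting the discretization above to the bandwidth argument of Section 3.3, the following NumPy check builds the explicit transfer matrix for the 1-D diffusion scheme and verifies that the bandwidth of its k-th power grows linearly with k; grid size and step sizes are illustrative choices, not the paper's settings, and the boundary rows are ignored for simplicity.

```python
import numpy as np

d = 64
dx = 1.0 / d
dt = 0.5 * dx**2                       # explicit-scheme stability limit, lambda = 0.5
lam = dt / dx**2
# transfer matrix of one explicit step: u_{t+1} = (I + lam * L) u_t, with L the 1-D Laplacian stencil
L = np.diag(np.ones(d - 1), 1) - 2.0 * np.eye(d) + np.diag(np.ones(d - 1), -1)
T = np.eye(d) + lam * L

def bandwidth(M, tol=1e-12):
    rows, cols = np.nonzero(np.abs(M) > tol)
    return int(np.max(np.abs(rows - cols)))

for k in (1, 4, 16, 32):
    print(k, bandwidth(np.linalg.matrix_power(T, k)))   # bandwidth grows as k until the matrix fills
```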
The time per step shown in Table 1 is calculated by dividing the inference time of the coarse-resolution solver by the temporal factor sT. The time for decomposition and reconstruction is ignored because the underlying ‘pixel shuffle’ operation is highly efficient. We also calculated GMACs (multiply-accumulate operations) per card, which is the average computational load of simulating 16 time steps. Note that for the GMACs of FNO, we do not include the Fourier transform operations. Periodic Boundary Condition We use random fields to generate periodic initial conditions on a 64×64 grid with a time step of 1e-2 and record the solution at every time step; the external force is fixed to f(x, y) = 0.1 sin(2π(x + y)) + cos(2π(x + y)). For the periodic and lid-driven boundary conditions, we use the vorticity-stream function form of Eq. 13 as the physics-constrained loss. Applying the Helmholtz decomposition to Eq. 13, we rewrite the Navier-Stokes equation as ∂ω/∂t = (∂ψ/∂y)(∂ω/∂x) + (∂ψ/∂x)(∂ω/∂y) + (1/Re)(∂²ω/∂x² + ∂²ω/∂y²), (21) ∂²ψ/∂x² + ∂²ψ/∂y² = −ω, (22) where ω is the vorticity function, ψ is the stream function, and Re is the Reynolds number. The initial condition ω0 is generated by a random field with distribution N(0, 8³(−∆ + 64I)^−4.0). We use 6000 states for training. In this case, we use FNO to test NeuralStagger and decay the initial learning rate of 3e-3 by a factor of 0.9 every 5000 iterations. Lid-driven Cavity Boundary Condition We generate data on a 64×64 grid, but we train the neural network to predict the values of ψ inside the boundary, which is a 2-dimensional matrix of shape (H − 2) × (W − 2). The random initial conditions are generated in the same way as for the periodic boundary condition. To make the initial state consistent with the boundary condition, we run the numerical solver for the first T0 = 1.98 and use ωT0 as the initial state. We use 8000 states for training with FNO and decay the initial learning rate of 3e-3 by a factor of 0.9 every 10000 iterations. Flow around Obstacles The data generation is the same as the setting used in (Wandel et al., 2020), where the resolution of the domain is 100×300, ∆t = 4, ρ = 4, µ = 0.1. In training, different types of environments are used, including magnus, box, pipe, and wing. The locations and velocities vary during training, e.g., the velocity ranges from 0.0 to 1 m/s, the diameter of the obstacle ranges from 10 to 40, and the x coordinate of the location is randomly sampled from 65 to 75 while the y coordinate is sampled from 40 to 60. For testing, we randomly select the location and flow velocity; in our experiments, the Reynolds number of the test cases is 517. In this case, we train the model from scratch without any data for sT = 1. For sT > 1, we use the benchmark to pre-generate the initial sequence û0,sT for training. The learning rate is 1e-3 for LordNet and 3e-3 for FNO, both decayed by a factor of 0.9 every 5000 iterations. The quantitative comparison in this paper is conducted on a 100×300 grid. For the optimal control of the vortex shedding experiment, the domain size is 100×300, and we use the trained neural PDE solver based on the above training settings. The Reynolds number here is 880. The optimizer for both U-net and LordNet is Adam with a learning rate of 1e-3. 6.3 THE RESULTS OF THREE CASES WITH DIFFERENT SPATIAL-TEMPORAL FACTORS
1. What is the focus and contribution of the paper regarding neural PDE solvers? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to existing ideas in numerical methods for solving PDEs? 3. Do you have any concerns or questions about the paper's presentation, such as the choice of notation or the lack of discussion on certain topics? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any specific aspects of the paper that the reviewer finds confusing or unclear, such as the description of the spatial decomposition method or the connection to CFL conditions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper aims to speed up neural PDE solvers. The strategy is to decompose the spatial and temporal domains into smaller problems and use multiple neural networks to learn solutions to the smaller problems in parallel. The paper presents results on Navier-Stokes equations. Strengths And Weaknesses My overall feeling is that the paper is inadvertently revisiting a number of existing ideas in numerical methods for solving PDEs, and this paper gets a bit too excited about the speed obtained from this decomposition strategy but does not fully reveal its shortcomings to the readers. The staggered grid in Fig. 1 is not novel: using central difference schemes in finite difference methods often leads to subgrids that are independent of each other. While this paper argues that having small, independent subgrids is good for parallelism, classic numerical methods are much more concerned with their disadvantages, which are not mentioned in the paper, e.g., high-frequency (in the spatial domain) errors are difficult to kill in such schemes. The spatial decomposition method seems tied to rectangular domains that can be perfectly discretized with lattice grids only. I don’t see an easy way to adapt this method to irregular boundaries or couple it with triangular/tetrahedral discretization. This should be made clear at the beginning or discussed in the limitation section. The staggered time integration idea can also be seen in classic numerical time integration schemes like Leapfrog/Verlet integration, but I don’t see a strong motivation for marrying it with neural networks: isn’t it actually much more difficult to predict the state after sT time steps than to predict what will happen in the next time step? Clarity, Quality, Novelty And Reproducibility I will list my questions as I go through the paper: Related work: It looks like this section ignores classic numerical PDE solvers and claims the “two mainstream approaches…for solving PDEs” are both neural network methods. It is probably a good idea to at least mention some numerical methods like finite differences/elements/volumes, the method of lines, etc. In particular, two classic ideas that are highly relevant to what this paper proposes are domain decomposition and multigrid solvers, which I think are quite worth discussing. Sec. 3.2: Fig. 1 shows the coarse-resolution fields from spatial decomposition overlapping each other, but the main text around Eqn. (5) seems to imply that spatial decomposition partitions the whole field into non-overlapping areas. Could you clarify which is the correct description of your method? Does sH x sW = 2 x 2 or 4 x 4 in your Fig. 1? I assumed that Fig. 1 is the factual description of the method when reviewing this paper. Sec. 3.3: At a high level, I feel that what this section attempts to develop is already covered by the standard CFL conditions of a numerical PDE method. It would be good if the section could make a connection to CFL conditions and comment on what is new here. Experiments: It looks like the experiments aim to show the model can generalize to different initial conditions, but I couldn’t find detailed descriptions of this. I would like to see how the initial conditions are generated (probably sampled from a random distribution?), and how different they are in the training and test sets. Maybe I missed some text in the supplemental material? A small thing: The notation “S(u, a)” is inconsistent with the definition “S: A x U”. Please swap either (u, a) or A x U.
ICLR
Title NeuralStagger: accelerating physics constrained neural PDE solver with spatial-temporal decomposition Abstract Neural networks have shown great potential in accelerating the solution of partial differential equations (PDEs). Recently, there has been a growing interest in introducing physics constraints into training neural PDE solvers to reduce the use of costly data and improve the generalization ability. However, these physics constraints, based on certain finite dimensional approximation over the function space, must resolve the smallest scaled physics to ensure the accuracy and stability of the simulation, resulting in heavy computational costs from large input, output, and neural networks. This paper proposes a general acceleration methodology called NeuralStagger by spatially and temporally decomposing the original learning tasks into several coarser-resolution subtasks. We define a coarse-resolution neural solver for each subtask, which requires fewer computational resources, and jointly train them with the vanilla physics constrained loss by simply arranging their outputs to reconstruct the original solution. Due to the perfect parallelism between them, the solution is achieved as fast as a coarse-resolution neural solver. In addition, the trained solvers bring the flexibility for users to simulate with multiple levels of resolution. We demonstrate the successful application of NeuralStagger on various fluid dynamics simulations, which leads to an additional 10 to 100 times speed-up. Moreover, the experiment also shows that the learned model could be well used for optimal control. 1 INTRODUCTION Partial differential equations (PDEs) are the critical parts of scientific research, describing vast categories of physical and chemical phenomena, e.g. sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, and so on. In the era of artificial intelligence, neural PDE solvers, in some works called neural operators, are widely studied as a promising technology to solve PDEs (Guo et al., 2016; Zhu & Zabaras, 2018; Hsieh et al., 2019; Bhatnagar et al., 2019; Bar-Sinai et al., 2019; Berner et al., 2020; Li et al., 2020b;a; Um et al., 2020; Pfaff et al., 2020; Lu et al., 2021b; Wang et al., 2021; Kochkov et al., 2021). Once the neural solver is trained, it can solve unseen PDEs with only an inference step, multiple magnitudes faster than that with traditional numerical solvers. Recently, several works have introduced physics constraints in training the neural PDE solvers in order to reduce the use of costly data and improve the generalization ability. They define the physics constrained loss with certain finite dimensional approximations to transform the PDEs into algebraic equations, which are further used to define the loss function (Zhu et al., 2019; Geneva & Zabaras, 2020; Wandel et al., 2020; Shi et al., 2022). However, to ensure stability and accuracy, they must define the loss in a relatively high resolution to resolve the smallest-scale physics in the PDE, resulting in huge input and output as well as increased neural network size. The solution by the neural network inference might still be slow, but it seems impossible to get further accelerations as the bottleneck comes from the input and output complexity. In this paper, we propose a simple methodology called NeuralStagger to jump out of the dilemma. The basic idea is to evenly decompose the original physical fields into several coarser-resolution fields. 
Then we jointly train a lightweight neural network to predict the solution in each coarseresolution field respectively, which can be naturally a coarse-resolution neural solver to the original PDE. We design the decomposition rules so that the outputs of these lightweight networks can re- construct the solutions in the original field with simple arrangements. For ease of reading, here and also in most parts of the paper, we illustrate the decomposition methodology in the 2-dimensional example with regular mesh and finite difference approximation. Figure 1 (top) shows the physical field in a 4 × 4 mesh is decomposed into 4 coarser-resolution fields, each of which is handled by a small neural network. We could also do similar things along the temporal dimension, as is shown in Figure 1 (bottom). The group of coarse-resolution solvers as well as the decomposition and reconstruction operations can be seen as an end-to-end neural PDE solver, which can be trained with the physics constrained loss that resolves small-scale physics in a sufficiently high resolution. Because the neural networks can run in parallel, the original simulation is achieved as fast as a coarse-resolution neural solver. In addition, the trained neural networks can predict the PDE’s solution in various levels of resolution, ranging from the resolution of the individual coarse-resolution solver to the resolution of the physics constrained loss by the combination of all these solvers. We believe that such flexibility is vital in balancing the computational resources and the resolution. We demonstrate the effectiveness of the NeuralStagger in the Navier-Stokes equation with three parametric settings, e.g., periodic boundary conditions with varied initial conditions, lid-driven cavity boundary conditions with varied initial conditions, and the flow around the obstacle with varied obstacles and initial conditions. We find that with NeuralStagger, the learned networks can conduct accurate and stable simulation with 10∼100 times speed-up over SOTA neural PDE solvers. In addition, we demonstrate that they can accurately tackle the optimal control task with autodifferentiation. Our contributions can be summarized in three parts: • We propose a general methodology called NeuralStagger to accelerate neural PDE solving by spatially and temporally decomposing the learning task and running a group of coarseresolution solvers in parallelism. • The learned network group can provide solutions in multiple resolutions from the coarsest one by a single network to the original resolution, which provides the flexibility to balance the computational resources and the resolution. • Empirically, we demonstrate that the methodology leads to 10 to 100 times speed-up over SOTA neural PDE solvers as well as the efficient solution on optimal control. In the following sections, we first briefly summarize the related works in Section 2 and then introduce the preliminaries and the proposed NeuralStagger in Section 3. To showcase the efficiency and accuracy of the proposed method, we present the settings of the experiment and results in Section 4. Finally, we conclude and discuss the future work in Section 5. 2 RELATED WORK In general, two mainstream approaches have been widely used for solving PDEs. The first is to approximate the PDE’s solution function with neural networks (Raissi et al., 2019; 2020; Jin et al., 2021). They have proved to be successful in tackling high-dimensional problems and inverse problems. 
The second is to learn a PDE solver to solve parametric PDEs. The neural PDE solver can learn the solutions of a class of PDEs, and thus can generalize to PDEs with different parameters. Our work is mainly about the accelerating the second type. Many impressive works have been done to improve the neural solver for parametric PDEs in terms of neural network design, e.g., convolutional neural network (Guo et al., 2016; Tompson et al., 2017; Bhatnagar et al., 2019), graph neural networks (Pfaff et al., 2020), the multipole graph kernel (Li et al., 2020b), Fourier neural operators (Li et al., 2020a; Guibas et al., 2021), the message passing neural network (Brandstetter et al., 2022b), deepOnet (Lu et al., 2021a), Clifford neural networks (Brandstetter et al., 2022a) and so on. After being trained with pre-generated simulated data and labels, they can solve the PDE several magnitudes faster than conventional numerical solvers with competitive accuracy. Recently there are raising concerns about the cost of collecting training data and the generalization ability, so several works have introduced the physics constrained loss for training. For example, (Wang et al., 2021) combined the DeepOnet with a physics-informed way to improve the sample efficiency. Zhu et al. (2019) proposed physics constrained loss for high-dimensional surrogate modeling and (Geneva & Zabaras, 2020) introduced the use of a physics constrained framework to achieve the data-free training in the case of Burgers equations. Wandel et al. (2020; 2021) proposed the physics constrained loss based on the certain approximation of the Navier-Stokes equation to solve fluidlike flow problems. Shi et al. (2022) proposed a general physics constrained loss called mean square residual (MSR) loss as well as a neural network called LordNet for better performance. However, the physics constrained loss by certain approximations require the approximation to be sufficiently close to the continuous version, resulting in a relatively high-resolution discretization. Thus in complex and large-scale problems, the neural solver must be large enough for expressiveness and its inference would still be slow. Although some works (Wang et al., 2021) directly calculate the derivatives via back-propagation through the neural network, they are known to have similar training problems as PINN, e.g., converging to trivial solutions. Interestingly in the case of regular mesh, the proposed spatial decomposition is the same in the implementation as ‘pixel shuffle’ from computer vision. There are a huge number of works in this direction, but the most related one might be (Ren et al., 2022) which leverages pixel shuffle and physics constrained loss in the super-resolution task. However, we are fundamentally different in target and solution. For example, we train multiple solvers to work in full parallelism and obtain the solution in multiple levels of resolution without training them again. We also find similar treatment on meshes in classical numerical methods, e.g., staggered-mesh and leap-frog integration. However, they are also fundamentally different in target and implementation. The numerical methods often place meshes of multiple fields with offsets to get more accurate approximation while NeuralStagger splits the mesh of every single field into multiple sub-meshes for defining the independent subtasks. 
In addition, they are orthogonal to NeuralStagger, i.e., one can leverage both the staggered-mesh to define the physics constrained loss and NeuralStagger to train multiple coarse-resolution solvers at the same time, as is done in our experiment. 3 METHODOLOGY 3.1 PRELIMINARIES Consider a connected domain Ω ⊆ Rn with boundary ∂Ω, and let (A,U ,V) be separable Banach spaces. Then the parametric PDEs can be defined as the form S(u,a)(x) = 0, x ∈ Ω (1) where S : U × A → V is a linear or nonlinear differential operator, a ∈ A denotes the parameters under certain distribution µ, such as coefficient functions or boundary/initial conditions, and u ∈ U is the corresponding unknown solution function. Further, we can define the solution operator of the parametric PDE G : A → U , which maps two infinite-dimensional function spaces. A main branch of works in neural PDE solvers approximate the solution operator by discretizing the functions into finite dimensional spaces denoted by  and Û and learning the mapping fθ :  → Û . Correspondingly, we have the discretized version of the PDE’s operator S by certain finite-dimensional approximations such as the finite difference method (FDM) and finite element method (FEM), which is denoted by Ŝ. We denote the vector of the function values in a mesh with the hat symbol, e.g., â is the vector of the PDE’s parameter a ∼ µ. Then the physics constrained loss is defined by forcing the predicted solution û ∈ Û to satisfy Ŝ given â ∈ Â. For example, LordNet (Shi et al., 2022) proposed the general form with the mean squared error as follows, L(θ) = Ea∼µ||Ŝ(fθ(â), â)||2, (2) In this paper, we mainly focus on time-dependent problems as follows, S(u,a)(t,x) = 0, (t,x) ∈ [0, T ]× Ω (3) The temporal dimension is discretized with the timestep ∆t and the neural solver solves the PDE in an auto-regressive way, ût+∆t = fθ(ût, â) (4) where ût is the corresponding discretized vector of the function u at time t. Figure 2 shows an example with a 4 × 4 rectangle mesh. Notice that similar to traditional numerical methods, the resolution of the finite-dimensional approximation in physics constrained loss, either in the spatial dimension or in the temporal dimension, must be sufficiently high, otherwise, the approximation error will be too large to guide the neural PDE solver. This leads to huge input and output as well as large neural networks to ensure expressiveness, whose inference would also be slow. 3.2 NEURALSTAGGER We propose a general methodology called NeuralStagger to gain further accelerations by exploiting the potential parallelism in the neural PDE solver. NeuralStagger decomposes the original learning task that maps ût to ût+∆t into several parallelizable subtasks in both spatial and temporal dimensions. The meshes of the subtasks spread evenly in the original field and stagger with each other. Then we can handle each subtask with a computationally cheap neural network. The decomposition strategy is introduced as follows. Spatial decomposition. The upper part of Figure 1 shows the 2-dimensional example with regular mesh. We first split the grid into patches of the size sH × sW and construct a subgrid by selecting only one point in each patch, resulting in sH ×sW subgrids evenly spread in the domain. We denote the functions in each sub-grid as ûi,jt and â i,j t where i and j represents the relative position of the sub-grid in horizontal and vertical directions. 
Then we use sH × sW neural networks to learn to predict the solution at t+∆t as follows, ûi,jt+∆t = fθi,j (û i,j t , â i,j), (5) where fθi,j is the neural network for the sub-grid at the position (i, j). The outputs û i,j t+∆t compose the solution at the original grid. Then the neural networks can be jointly trained with the physics constrained loss defined on the original grid. Notice that the neural networks are independent of each other and can be fully paralleled. As the input and output decrease by sH × sW times, the neural network can be much smaller and faster than the original one to be used for the neural solver. The decomposition rules can be extended to higher-dimensional cases. In addition, the learning tasks at the subgrids are quite close to each other, except for the difference in the boundary of the domain, so we share the parameters of the neural networks fθi,j to reduce redundancy and accelerate training. Meanwhile, because there are often tiny differences between the inputs of the subtasks, we encourage the neural network to distinguish them by adding positional information of each grid point as additional input channels. Temporal decomposition. We can treat the temporal dimension as a 1-dimensional grid with a fixed step ∆t. Thus we can also decompose the grid into sT sub-grids by selecting a point for every sT points, where instead of predicting ût+∆t, the neural network predicts ût+sT∆t, ût+sT∆t = fθ (ût, â) , (6) Given the solution sequence from t to t + (sT − 1)∆t denoted by ût,sT for simplicity, we can get the next sequence of the solution ût+sT∆t,sT . Then the physics constrained loss is defined on the sequence with timestep ∆t, as is shown in the lower part of Figure 1. Once the neural network is trained, we can generate the sequence ût+sT∆t,sT by running the neural network inference of Formula 6 with sT threads in parallel with inputs ût,sT . The non-auto-regressive process can generate the solution in sT time steps within one inference step, which can be much faster than the original version (Figure 2) with sT inference steps. Note that though we only need the initial condition for the coarsest-resolution test, we must prepare the first sT states with numerical solvers for training and the high-resolution test. However, this drawback is neglectful for long-time simulations. The spatial and temporal decompositions are orthogonal and can be used at the same time. We denote the joint decomposition operator as Ds, the transformation operator of the neural networks as FΘ and the reconstruction operator Es, where s represents all decomposition factors including sH , sW and sT , Θ represents all parameters of the neural network group. The physics constrained loss with the spatial-temporal decomposition can be written as, L(Θ) = Eût,sT ||Ŝ (Es (FΘ (Ds (ût,sT , â))) , ût,sT , â) || 2. (7) In addition, as the sub-grids spread evenly in the domain of the PDE, each of them can be seen as the down-sampled version of the original problem, where a local patch is reduced to the point at a fixed relative position in the patch. Therefore, the learned neural networks are naturally coarseresolution solvers to the PDE. Suppose (H,W, T ) is the tuple of the original height, width, and time span that the physics constrained loss is conducted on. Then the coarse-resolution solvers are conducted on the resolution ( HsH , W sW , TsT ). 
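On regular meshes the decomposition operator Ds and reconstruction operator Es coincide with pixel-unshuffle and pixel-shuffle, so the prediction step implied by Eq. (7) fits in a few lines. The sketch below is a schematic PyTorch version for a single field with parameters shared across subgrids; `coarse_solver` is a placeholder for whatever backbone (FNO, LordNet, U-net) is used, and the positional-information channels are omitted.

```python
import torch
import torch.nn.functional as F

def neural_stagger_step(coarse_solver, u, s=2):
    """Decompose -> run the shared coarse-resolution solver on every subgrid -> reconstruct.
    `coarse_solver` maps a (batch, C, H/s, W/s) field to the field sT steps later."""
    B, C, H, W = u.shape
    sub = F.pixel_unshuffle(u, s)                                  # (B, C*s*s, H/s, W/s)
    sub = sub.reshape(B, C, s * s, H // s, W // s)                 # split channels into (C, subgrid)
    sub = sub.permute(0, 2, 1, 3, 4).reshape(B * s * s, C, H // s, W // s)  # subgrids into the batch
    out = coarse_solver(sub)                                       # all subtasks in one parallel call
    out = out.reshape(B, s * s, C, H // s, W // s).permute(0, 2, 1, 3, 4)
    out = out.reshape(B, C * s * s, H // s, W // s)
    return F.pixel_shuffle(out, s)                                 # back to the original resolution

# sanity check with an identity "solver": decomposition plus reconstruction is lossless
u = torch.randn(2, 3, 64, 64)
assert torch.allclose(neural_stagger_step(lambda x: x, u, s=2), u)
```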
Meanwhile, we can infer multiple levels of resolutions ranging from that of coarse-resolution solvers to the original one, all of which can reach the same speed by parallelism. 3.3 CHOICE OF THE DECOMPOSITION FACTORS Obviously, the acceleration effect by NeuralStagger grows as we use larger sH , sW and sT . However, these decomposition factors cannot be arbitrarily large. We conclude two potential constraints, i.e., the increased complexity of the learning task and the information loss in the input. We would like to leverage the following 2-dimensional diffusion equation with the periodic boundary condition as an example to explain the two constraints, ∂u(x, y, t) ∂t = ∆u(x, y, t), x, y, t ∈ [0, 1], (8) u(x, y, 0) = f(x, y), x, y ∈ [0, 1], (9) where u is the density function of diffusing material, ∆ is the Laplacian operator and f is the function of the initial condition. We use the regular mesh with d points in total and leverage the central difference scheme with the spatial step ∆x and temporal step ∆t. Then the PDE is transformed into a matrix equation on the discretized solution at certain time t, denoted by ût ∈ Rd. Increased complexity of learning task. For the temporal dimension, we find that the larger decomposition factor might make the mapping from the input to the prediction more complex. For the linear diffusion equation, we can explicitly calculate the transfer matrix from ûi to ûi+∆t based on the matrix equation. Suppose the transfer matrix is Ti ∈ Rd×d. By iterative applying the transfer matrix, we can get the transformation from the initial condition û0 to the solution at any time step k as follows, ûk∆t = û0 k−1∏ 0 Ti. (10) For notational simplicity, we denote the resulting transfer matrix from û0 to ûk∆t as Tk. By certain arrangements, Tk is a band matrix where the non-zero values are centralized around the diagonal. The bandwidth indicates the sparsity of the matrix as well as how local the points in the mesh entangle with each other. We observe that the bandwidth grows linearly with regard to k. For example, Figure 3 shows the case of d = 642. When the k ≥ 60, the matrix is dense and every element in ûk∆t is a weighted summation of almost all the elements in ût. This indicates that increasing k may make the entanglements between the grid points more complex, leading to a harder learning task for the neural network. Information loss. By spatial decomposition, each subgrid only reserves a small part of the original grid. Obviously, it may introduce the problem of information loss if the dropped points are important for the prediction in the subtasks. Here we theoretically characterize the information loss caused by spatial decomposition under the linear model setting, i.e., f(ût) = ûtW ∗. Consider the diffusion equation and the corresponding matrix equation. With some abuse of notation, the superscript i denotes the index of training samples, such as ûit and the bold symbol without the superscript i denotes the matrix composed of all the samples, such as ût. With N training samples, the physics constrained loss aims to learn the parameters W ∗ of the linear model that satisfies: W ∗ = argmin W 1 N N∑ i=1 ∥ûitW − yi∥2, (11) where yi denotes the rest parts of the matrix equation. By applying spatial decomposition, the input and output are equally partitioned into K = sHsW subgrids {û1t , · · · , ûKt } and {û1t+1, · · · , ûKt+1}. 
Then according to the physics constrained loss, the optimization goal becomes: W ∗1 , · · · ,W ∗K = argmin W1,··· ,WK 1 N N∑ i=1 K∑ k=1 ∥(ûi,kt Wk − yi,k)∥2, (12) where Wk ∈ Rm×m,m = d/K for k = 1, · · · ,K. The next proposition shows a sufficient condition for equal prediction for Eq.(11) and Eq.(12). Proposition 1. If rank(ût) = rank(ûkt ), the model ûtW ∗ and ûktW ∗k will make the same prediction on yk. We put the proof in the appendix. In many physical scenarios, the local patches of size sHsW do not distribute arbitrarily in the ambient space RsHsW , but rather live in some low-dimensional manifold. Hence, there is much information redundancy in ût and with careful settings of sH and sW , the rank after the decomposition does not change much, indicating similar predictions on yk. With deep learning models fθ such as those we use in this paper, we believe that more complex local patterns can be resolved and the spatial factors can be set larger. 4 EXPERIMENTS To evaluate the acceleration effect and accuracy of the proposed method, we test three cases of fluid dynamics simulation governed by the Navier-Stokes equation. We first target two benchmark settings, i.e., the periodic boundary condition and the lid-driven cavity boundary condition (Zienkiewicz et al., 2006) In both settings, the initial condition changes, and the neural PDE solver learns to generalize to various initial conditions. Next, we test the more challenging case called flow around obstacles, where several obstacles are placed inside the flow. The neural PDE solver is trained to generalize to different obstacles as well as initial conditions. In addition, the state of the fluid changes quite a lot over time. To ensure the neural solver generalizes to various states, we must maintain a training pool to store states newly predicted during training. At last, we also evaluate the capability to the inverse problem, i.e., the optimal control on the flow-around-obstacles setting. In general, we consider the 2-dimensional incompressible Navier-Stokes equation as follows: ρ ( ∂v⃗ ∂t + (v⃗ · ∇)v⃗ ) = −∇p+ µ∆v⃗ + f⃗ (13) ∇ · v⃗ = 0 (14) where v⃗ is the fluid velocity field, p is the pressure field, µ is the viscosity, and f⃗ is the external force. In all experiments, we trained neural networks with Adam optimizer and decayed learning rates. The speed test is done on Nvidia A100 GPUs under the assumption that we have sufficient computational resources for each coarse-resolution solver. See Appendix Section 6.2 for more details. 4.1 PERIODIC AND LID-DRIVEN CAVITY BOUNDARY CONDITION We first test the Navier-Stokes equation with the periodic boundary condition and the lid-driven cavity boundary condition. In both cases, the physics constrained loss is obtained by discretizing the vorticity-stream equation with the central-difference scheme and the Crank-Nicolson method in the 64× 64 regular mesh. The time step ∆t is 1e− 2 and the viscosity ν is 1e− 3. We use the popular FNO (Li et al., 2020a) to test the accuracy and speed in different settings of decomposition factors. The ground truth is obtained by FDM. We evaluate the accuracy by auto-regressively running the inference of the neural solver across the target length along time LT and compare the terminal state with that from the ground truth. Note that we compare all the results on the original mesh and thus the spatially decomposed results reconstruct to the 64 × 64 resolution for evaluation. 
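To illustrate how a physics constrained loss of this kind is assembled, the snippet below writes out the mean-squared residual of a Crank–Nicolson step for a simple diffusion-type surrogate with periodic boundaries. It is only a stand-in for the actual vorticity-stream discretization used here: the residual, step sizes, and names are placeholder assumptions, but the pattern (apply the discretized operator to the network's reconstructed prediction and penalize the residual) is the same.

```python
import torch

def periodic_laplacian(u, dx):
    # 5-point central-difference Laplacian on a (B, H, W) field with periodic boundaries
    return (torch.roll(u, 1, dims=-1) + torch.roll(u, -1, dims=-1) +
            torch.roll(u, 1, dims=-2) + torch.roll(u, -1, dims=-2) - 4.0 * u) / dx**2

def physics_constrained_loss(u, u_next, dt=1e-2, dx=1.0 / 64, nu=1e-3):
    # Crank-Nicolson residual of du/dt = nu * Laplacian(u); zero residual means the
    # prediction u_next satisfies the discretized equation given the current state u.
    residual = (u_next - u) / dt - 0.5 * nu * (periodic_laplacian(u_next, dx) +
                                               periodic_laplacian(u, dx))
    return residual.pow(2).mean()

# hypothetical training use: u_next = reconstruct(coarse_solvers(decompose(u)))
u, u_next = torch.randn(8, 64, 64), torch.randn(8, 64, 64)
loss = physics_constrained_loss(u, u_next)   # loss.backward() would update the solvers
```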
We measure with the relative error which is calculated by dividing the L2 norm of the error by the L2 norm of the ground truth. The measurement is denoted by Error-k where k is the number of time steps. Following the notations in Section 3.2, the decomposition factors along x dimension, z dimension and the temporal dimension are denoted by sW , sH and sT . In general, NeuralStagger achieves acceleration in both cases without losing much accuracy. As you can see in Figure 5, the coarseresolution solver is also accurate when applied alone without reconstruction. In the case of the periodic boundary condition, the target length along time LT equals 2, which is 200 time steps. The flow is driven by the external force f⃗ , which is introduced in the appendix. As you can see in Figure 4 (left), the relative errors of the learned neural solvers are lower than 0.2% in all settings of spatial and temporal decomposition factors. In terms of speed, with the most aggressive setting sT = 40, sH = sW = 2, and full parallelism, the inference time for the 200- time-steps simulation is 0.076 seconds on average. Compared to 0.36 seconds by the baseline without NeuralStagger, there is 47× speed-up. We can also observe some trends in accuracy with regard to the choice of spatial and temporal factors. Error1 grows like a linear function with the temporal factor sT in both spatial factor settings. The reason is that the learning task becomes more complex as we discuss in Section 3.3, and with the neural network unchanged, the accuracy drops. Meanwhile, the accumulated errors, i.e., Error200, almost keep at the same level. This is because the steps in the auto-regressive procedure reduce as sT grows, e.g., when sT = 40, the neural networks for subtasks only predict 200/40 = 5 steps ahead. The benefit perfectly neutralizes the detriment of the increased task complexity. In the case of the lid-driven cavity boundary condition, the fluid acts in a cavity consisting of three rigid walls with no-slip conditions and a lid moving with a steady tangential velocity 1. We set the length of time LT = 27, much larger than that with the periodic boundary, to see if the simulation converges to the right steady state. With larger LT , we try larger temporal skip factors such as sT = 108. As is shown in Figure 4 (right), the relative errors are all controlled below 0.5% even after 2700 time steps. Again, with the most aggressive setting sT = 108, sH = sW = 2 and full parallelism, the neural solver finishes the 2700-time-steps simulation within 0.038 seconds, about 119× faster than the baseline, i.e., 4.49 seconds. Different from the periodic boundary condition, the accuracy drops when we increase sT . The reason is that the increase of sT brings more detriments of task complexity than the benefits from the shorter auto-regressive sequence. 4.2 FLOW AROUND OBSTACLES In this section, we evaluate NeuralStagger in a larger and more complex setting called flow around obstacles. The setting is the same as that used in (Wandel et al., 2020), which is also our baseline. The fluid runs through a pipe, where we put different shapes of obstacles to affect the flow, including rotating cylinders and walls constructing a folded pipe. The external forces in Eq. 13 are neglected and set to 0. The neural solver is trained to generalize to different settings of the obstacles, including the shape and the velocity on the surface as well as the inflow/outflow velocities. 
Then we evaluate the neural solver in 5 randomly sampled configurations in both the cylinder case and the folded pipe case. You may refer to the appendix for more details. We leverage the same configurations as those in (Wandel et al., 2020) including the discretization method, the physics constrained loss, training strategies, the input features, the predicted variables as well as the evaluation metric. Specifically, the rectangular domain is discretized into a 100 × 300 regular mesh and ∆t = 4. The physics constrained loss is used as the evaluation metric, measuring to what extent the prediction at the next time step satisfies the PDE given the current fluid state and the boundary conditions. As the fields of the fluid change much over time, we maintain a training pool initialized with a set of initial conditions and incrementally enrich it as the training goes. This is achieved because the predictions from the neural network can be seen as new data if the neural network has been well fitted in the current pool. One can refer to (Wandel et al., 2020) for more details. Wandel et al. (2020) leverages U-net as the neural solver, but to demonstrate the full potential of NeuralStagger, we also try the other two neural network architectures, i.e., FNO and LordNet (Shi et al., 2022) which also leverages the physics constrained loss to train the neural PDE solver. We directly use the trained U-net from the official open-source repository of (Wandel et al., 2020) for evaluation and train FNO and LordNet from scratch. The experiments in Table 1 show that LordNet outperforms the other two neural networks in the baseline setting without NeuralStagger. Therefore, we use LordNet for further experiments on the choice of spatial and temporal factors. We find that in this case, the information from the 100 × 100 grid (sH = 1, sW = 3) is sufficient to achieve comparable results to the U-net baseline, while larger spatial steps will introduce too much information loss. In addition, it seems increasing the temporal factors hurts the accuracy more obviously than those in the periodic boundary condition and the lid-driven boundary condition, though the accuracy is still comparable to U-net even with sT = 16. We believe this is because the dataset is incrementally explored by maintaining a training pool and enriching it with the neural network’s predictions during training. However, the predictions may not be accurate. As the physics constrained loss is defined on ût+(sT−1)∆t and ût+sT∆t, inaccurate ût+(sT−1)∆t may mislead the neural network to the wrong direction. When we increase sT , more errors will be accumulated along the sequence from ût the ût+(sT−1)∆t and the training will be harder. Designing training algorithms to better support NeuralStagger remains unexplored and we leave it for future work. In terms of speed, the choices of spatial and temporal factors lead to different levels of acceleration, as is shown in Table 1, where GMACs (multiply-accumulate Operations) per card is the average computational load of simulation for 16 timesteps. Specifically, the largest factor configuration to keep the accuracy comparable to the baseline is sT = 16, sH = 1, sW = 3, leading to the largest decrease in GMACs per card, i.e., 1/32 of the baseline U-net and 1/48 of LordNet without NeuralStagger. Specifically, when tested with A100 cards, it leads to 28× speed-up over U-net and 17× over LordNet without NeuralStagger. 
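The wall-clock numbers above follow from a simple accounting, spelled out below with purely illustrative timings (the measured values are in Table 1): under full parallelism, one batched call of the coarse-resolution solver advances sT steps at once, so the amortized time per simulated step is its inference time divided by sT.

```python
def time_per_step(inference_time, s_t):
    # amortized wall-clock time per simulated step under full parallelism
    return inference_time / s_t

baseline = time_per_step(inference_time=0.020, s_t=1)    # hypothetical full-resolution solver
staggered = time_per_step(inference_time=0.011, s_t=16)  # hypothetical coarse solver, sT = 16
print(f"speed-up: {baseline / staggered:.1f}x")          # ~29x with these made-up numbers
```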
4.3 APPLICATION IN OPTIMAL CONTROL To further showcase the capability of the neural solver with NeuralStagger on the inverse problem, we conduct the optimal control experiment introduced in Wandel et al. (2020). The task is to change the flow speed to control the shedding frequency of a Kármán vortex street behind an obstacle. The shedding frequency is estimated by the frequency spectrum V (f) of the y-component of the velocity field behind the obstacle over 200 time steps, denoted by E [ |V (f)|2 ] . We define the loss function L = ( E [ |V (f)|2 ] − f̂ )2 , where f̂ is the target frequency. Then we compute the gradient of the velocity with regard to the loss by auto-differentiation through the neural solver and leverage Adam optimizer (Paszke et al., 2017; Kingma & Ba, 2014) to update the velocity. We compare the result of the learned model with the setting sH = 1, sW = 3, sT = 2 to that shown in Wandel et al. (2020). As is shown in Figure 6, the velocity controlled by LordNet converges to the target velocity with fewer iterations. 5 CONCLUSION AND LIMITATION We present NeuralStagger, a general framework for accelerating the neural PDE solver trained by physics constrained loss. By spatially and temporally decomposing the learning task and training multiple lightweight neural networks, the neural solver is better paralleled and much faster with sufficient computational resources. In addition, each lightweight neural network is naturally a coarseresolution solver and they bring the flexibility of producing the solutions on multiple levels of resolution, which is important for balancing the resolution and computational resources. We discuss the choice of decomposition factors and empirically test their influence on accuracy and speed. The experiments in fluid dynamics simulation show that NeuralStagger brings an additional 10 to 100× speed-up over SOTA neural PDE solvers with mild sacrifice on accuracy. There are also several limitations to be tackled in future works. Firstly, the accuracy drops with the growing decomposition factors. A potential solution would be introducing historical states in the neural network input to make up for the information loss. Secondly, we only define the spatial decomposition over regular meshes, while it turns to the non-trivial vertex coloring problem for irregular meshes. Heuristic coloring algorithms would be useful for this problem. Thirdly, our experiments only show the generalization to different initial conditions and boundary conditions. In the future, we would like to explore the generalization to different mesh sizes. 6 APPENDIX 6.1 INFORMATION LOSS CAUSED BY SPATIAL DECOMPOSITION In this section, we provide the proof to proposition 1 in the linear model setting. In this section, we will theoretically characterize the information loss caused by spatial decomposition under the linear model setting. Note that the proof is done on the 1-dimensional diffusion equation with the explicit method for ease of understanding, but as we will see, the conclusion is the same in the case with 2 dimensions or the implicit method. We consider a simple 1d partial differential equation with Dirichlet boundary condition: ∂tu = ∆u, x ∈ Ω (15) ut(x) = ft(x), x ∈ ∂Ω (16) Discretizing the function u on grid (x1, · · · , xd), we denote ûj = u(xj). 
We consider the finite difference discretization: (û^j_{t+1} − û^j_t)/δt = ((û^{j+1}_t − û^j_t) − (û^j_t − û^{j−1}_t))/δx², for x_j ∉ {x_1, x_d}, (17) û^j_{t+1} = f_{t+1}(x_j), for x_j ∈ {x_1, x_d}. (18) Given the input ût ∈ R^d and output ût+∆t ∈ R^d, the output is parameterized by the linear model ût+∆t = ûtW, where W ∈ R^{d×d} denotes the learned parameters. The physics constrained loss aims to learn the parameters W∗ of the linear model that satisfy: W∗ = argmin_W (1/N) Σ_{i=1}^N ∥û^i_t W − y^i∥², (19) where i denotes the index of the training samples, y^j = f_{t+1}(x_j) for x_j ∈ {x_1, x_d}, and y^j = û^j_t − (δt/δx²)((û^{j+1}_t − û^j_t) − (û^j_t − û^{j−1}_t)) for x_j ∉ {x_1, x_d}. By applying spatial decomposition, the input and output are equally partitioned into K blocks {û¹_t, · · · , û^K_t} and {û¹_{t+∆t}, · · · , û^K_{t+∆t}}. Each block contains d/K coordinates. Then, according to the MSR loss, the optimization goal becomes: W∗_1, · · · , W∗_K = argmin_{W_1,··· ,W_K} (1/N) Σ_{i=1}^N Σ_{k=1}^K ∥û^{i,k}_t W_k − y^{i,k}∥², (20) where W_k ∈ R^{m×m}, m = d/K, for k = 1, · · · , K. Proof: We first consider the case that Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t is full rank. The minimizer of Eq. (20) is W∗_k = (Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t)^{−1} (Σ_{i=1}^N (û^{i,k}_t)^⊤ y^{i,k}). We denote the matrix A = (Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t)^{−1} and construct a d×d matrix B by letting B(k + i·d/K, k + j·d/K) = A(i, j) for i, j = 0, · · · , d/K − 1, and B(i, j) = 0 otherwise. Then it is easy to check that the matrix B is the pseudo-inverse of Σ_{i=1}^N (û^i_t)^⊤ û^i_t. The minimizer of Eq. (19) is B(Σ_{i=1}^N (û^i_t)^⊤ y^i) (Bartlett et al., 2020). As the matrix B only has non-zero values on the coordinates that correspond to the k-th block, the k-th block of W∗ equals W∗_k and the other blocks are zero matrices. Denoting the matrix composed of all the samples by the bold symbol without the superscript i, such as ût for {û^i_t} and û^k_t for {û^{i,k}_t}, we have Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t = (û^k_t)^⊤ û^k_t and Σ_{i=1}^N (û^i_t)^⊤ û^i_t = (ût)^⊤ ût. By the rank–nullity theorem, it is easy to see that rank((ût)^⊤ ût) = rank(ût) and rank((û^k_t)^⊤ û^k_t) = rank(û^k_t). Then we get the result in the proposition. For the case that Σ_{i=1}^N (û^{i,k}_t)^⊤ û^{i,k}_t is not full rank, we can select a maximal linearly independent subset to obtain its pseudo-inverse and apply similar analyses to get the result. In the case of the implicit method, the term û^i_t W in the physics constrained loss becomes û^i_t W V, where V is an invertible matrix. This also does not change the conclusion. 6.2 IMPLEMENTATION DETAILS We implemented FNO with the original 2-dimensional version in the official repository, where we set the truncation mode to 12 and the width to 64. For the LordNet, we only stack 2 Lord modules and fix the channel count to 64 in all layers. In the position-wise embedding of the 2 Lord modules, we stack two 1×1 convolutional layers, where the hidden embeddings contain 256 and 128 channels respectively, and GELU activation is used between the convolutional layers. The implementation of U-net is based on the U-Net architecture (Ronneberger et al., 2015) with 20 hidden channels, which is consistent with that in (Wandel et al., 2020). The learning rates and training samples are described as follows. To rule out the potential influence of computational resources like cores and memory, we test the speed of NeuralStagger under the setting that each coarse-resolution solver has sufficient resources to use. Therefore, we run each solver on Nvidia A100 GPUs with a batch size of 1.
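As a complement to the proof above, here is a toy NumPy check of Proposition 1 with made-up sizes: when the block-k columns span the full column space (so the ranks agree), the full least-squares model and the per-block model produce identical predictions on y^k. The data are generic random matrices rather than states of the diffusion equation, and the block is taken as the first d/K columns since column order does not affect the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 200, 16, 4
m = d // K
block = rng.standard_normal((N, m))                        # the block-k input columns
mixes = [np.eye(m)] + [rng.standard_normal((m, m)) for _ in range(K - 1)]
u_t = np.hstack([block @ M for M in mixes])                # rank(u_t) == rank(block) == m
y_k = rng.standard_normal((N, m))                          # target restricted to block k

W_full, *_ = np.linalg.lstsq(u_t, y_k, rcond=None)         # minimizer of Eq. (19), restricted to y^k
W_k, *_ = np.linalg.lstsq(block, y_k, rcond=None)          # minimizer of Eq. (20) for block k
print(np.allclose(u_t @ W_full, block @ W_k))              # True: identical predictions on y^k
```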
The time per step shown in Table 1 is calculated by dividing the inference time of the coarse-resolution solver by the temporal factor s_T. The time for decomposition and reconstruction is ignored because the underlying 'pixel shuffle' operation is highly efficient. We also report GMACs (giga multiply-accumulate operations) per card, which is the average computational load of simulating 16 time steps. Note that for the GMACs of FNO, we do not include the Fourier transform operations.

Periodic Boundary Condition. We use random fields to generate periodic initial functions on a 64×64 grid with a time step of 1e-2 and record the solution at every time step; the external force is fixed to f(x, y) = 0.1 sin(2π(x + y)) + cos(2π(x + y)). For the periodic and lid-driven boundary conditions, we use the vorticity-stream function form of Eq. 13 to define the physics-constrained loss. Applying the Helmholtz decomposition to Eq. 13, we rewrite the Navier-Stokes equations as:
$$\frac{\partial \omega}{\partial t} = \frac{\partial \psi}{\partial y}\frac{\partial \omega}{\partial x} + \frac{\partial \psi}{\partial x}\frac{\partial \omega}{\partial y} + \frac{1}{Re}\left( \frac{\partial^2 \omega}{\partial x^2} + \frac{\partial^2 \omega}{\partial y^2} \right), \qquad (21)$$
$$\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} = -\omega, \qquad (22)$$
where ω is the vorticity function, ψ is the stream function, and Re is the Reynolds number. The initial condition ω_0 is generated by a random field with distribution N(0, 8^3(−∆ + 64I)^{−4.0}). We use 6000 states for training. In this case, we use FNO to test NeuralStagger and decay the initial learning rate of 3e-3 with a factor of 0.9 every 5000 iterations.

Lid-driven Cavity Boundary Condition. We generate data on a 64×64 grid but train the neural network to predict the values of ψ inside the boundary, which form a 2-dimensional matrix of shape (H − 2) × (W − 2). The random initial conditions are generated in the same way as for the periodic boundary condition. To make the initial state consistent with the boundary condition, we run the numerical solver for the first T_0 = 1.98 and use ω_{T_0} as the initial state. We use 8000 states for training with FNO and decay the initial learning rate of 3e-3 with a factor of 0.9 every 10000 iterations.

Flow around Obstacles. The data generation is the same as the setting used in Wandel et al. (2020), where the resolution of the domain is 100×300, ∆t = 4, ρ = 4, and µ = 0.1. During training, different types of environments are used, including magnus, box, pipe, and wing. The obstacle locations and velocities vary during training: the velocity ranges from 0.0 to 1 m/s, the diameter of the obstacle ranges from 10 to 40, the x-coordinate of the obstacle location is sampled from 65 to 75, and the y-coordinate from 40 to 60. At test time, we randomly select the location and flow velocity; in our experiments, the Reynolds number of the tests is 517. In this case, we train the model from scratch without any data for s_T = 1. For s_T > 1, we use the benchmark solver to pre-generate the initial sequence û_{0,s_T} for training. The learning rate is 1e-3 for LordNet and 3e-3 for FNO, both decayed with a factor of 0.9 every 5000 iterations. The quantitative comparison in this paper is conducted on a 100×300 grid. The optimal control of the vortex shedding experiment also uses a 100×300 domain and the trained neural PDE solver from the above training settings; the Reynolds number here is 880. The optimizer for both U-Net and LordNet is Adam with a learning rate of 1e-3.

6.3 THE RESULTS OF THREE CASES WITH DIFFERENT SPATIAL-TEMPORAL FACTORS
1. What is the focus and contribution of the paper on NeuralStagger?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and applicability to general grid-based simulations?
3. What are the weaknesses of the paper, especially regarding experiment evaluation and scalability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns about the method's ability to generalize to problems with different domain geometries?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper introduces NeuralStagger, a method that divides the spatial and temporal resolution into staggered coarser grids and uses a neural model to evolve the coarse-grained grids (the model also takes the relative location of the coarser grid as input). At inference time, the different grids are combined to reconstruct the original resolution as needed. The paper demonstrates in experiments that the method improves speed while sacrificing some accuracy compared to the full-resolution model, and it also shows improved convergence time for one optimal control experiment. The paper additionally gives guidelines for choosing the hyperparameters to balance the tradeoff.

Strengths And Weaknesses
Strengths: The paper is well written. The method is simple and can be applied to general grid-based simulations.
Weaknesses:
Experiment evaluation: The experimental evaluation is a little weak. (1) It is not clear how scalable the method is to problems of sufficient difficulty. It is not clear what the Reynolds number of the flow-through-the-cylinder problem is, and whether it is high enough to produce vortex shedding (which is more difficult than laminar flow). (2) It is not clear how the method generalizes to larger-scale problems. The flow through the cylinder is a relatively easy task. It would be great to demonstrate the method on a larger-scale (potentially 3D) task, where NeuralStagger is incorporated with a neural-based surrogate model. This is not necessary, but if NeuralStagger can scale to larger problems, it would strengthen the paper a lot. (3) Why does Section 4.1 use only FNO to test NeuralStagger, while Section 4.2 uses only LordNet? At least in Section 4.2, it would be great to combine NeuralStagger with all of them (FNO and U-Net) to show the generality of the method.
One potential weakness of the method is that it may not generalize to a test dataset with a different domain geometry. Since at training time the relative position of the staggered grid is passed in as input, it breaks translational symmetry, and the model may learn how the relative position affects the evolution in this specific geometry. If the geometry is different at test time, such knowledge may not hold. From the current paper, it seems that the test geometry is the same as at training time. It would be great to perform experiments showing what happens when the test geometry is different, for example if the width and height of the test grid are much larger, or if the cylinder location is different. If it shows some generalization, that would be great. If it does not, that is fine, but state it clearly as a limitation of the method.
It would be nice to state clearly the limitations of the work, e.g., that the method in its current form only works for grid-based fields (not irregular meshes).

Clarity, Quality, Novelty And Reproducibility
The main evaluation is in the Strengths And Weaknesses section. In summary: Clarity: very good. Quality: reasonable, but addressing the weaknesses would strengthen the paper a lot. Novelty: seems to be novel (at least among deep-learning-based surrogate models). Reproducibility: reasonable, but some details are not provided.
ICLR
Title NeuralStagger: accelerating physics constrained neural PDE solver with spatial-temporal decomposition Abstract Neural networks have shown great potential in accelerating the solution of partial differential equations (PDEs). Recently, there has been a growing interest in introducing physics constraints into training neural PDE solvers to reduce the use of costly data and improve the generalization ability. However, these physics constraints, based on certain finite dimensional approximation over the function space, must resolve the smallest scaled physics to ensure the accuracy and stability of the simulation, resulting in heavy computational costs from large input, output, and neural networks. This paper proposes a general acceleration methodology called NeuralStagger by spatially and temporally decomposing the original learning tasks into several coarser-resolution subtasks. We define a coarse-resolution neural solver for each subtask, which requires fewer computational resources, and jointly train them with the vanilla physics constrained loss by simply arranging their outputs to reconstruct the original solution. Due to the perfect parallelism between them, the solution is achieved as fast as a coarse-resolution neural solver. In addition, the trained solvers bring the flexibility for users to simulate with multiple levels of resolution. We demonstrate the successful application of NeuralStagger on various fluid dynamics simulations, which leads to an additional 10 to 100 times speed-up. Moreover, the experiment also shows that the learned model could be well used for optimal control. 1 INTRODUCTION Partial differential equations (PDEs) are the critical parts of scientific research, describing vast categories of physical and chemical phenomena, e.g. sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, and so on. In the era of artificial intelligence, neural PDE solvers, in some works called neural operators, are widely studied as a promising technology to solve PDEs (Guo et al., 2016; Zhu & Zabaras, 2018; Hsieh et al., 2019; Bhatnagar et al., 2019; Bar-Sinai et al., 2019; Berner et al., 2020; Li et al., 2020b;a; Um et al., 2020; Pfaff et al., 2020; Lu et al., 2021b; Wang et al., 2021; Kochkov et al., 2021). Once the neural solver is trained, it can solve unseen PDEs with only an inference step, multiple magnitudes faster than that with traditional numerical solvers. Recently, several works have introduced physics constraints in training the neural PDE solvers in order to reduce the use of costly data and improve the generalization ability. They define the physics constrained loss with certain finite dimensional approximations to transform the PDEs into algebraic equations, which are further used to define the loss function (Zhu et al., 2019; Geneva & Zabaras, 2020; Wandel et al., 2020; Shi et al., 2022). However, to ensure stability and accuracy, they must define the loss in a relatively high resolution to resolve the smallest-scale physics in the PDE, resulting in huge input and output as well as increased neural network size. The solution by the neural network inference might still be slow, but it seems impossible to get further accelerations as the bottleneck comes from the input and output complexity. In this paper, we propose a simple methodology called NeuralStagger to jump out of the dilemma. The basic idea is to evenly decompose the original physical fields into several coarser-resolution fields. 
Then we jointly train a lightweight neural network to predict the solution in each coarseresolution field respectively, which can be naturally a coarse-resolution neural solver to the original PDE. We design the decomposition rules so that the outputs of these lightweight networks can re- construct the solutions in the original field with simple arrangements. For ease of reading, here and also in most parts of the paper, we illustrate the decomposition methodology in the 2-dimensional example with regular mesh and finite difference approximation. Figure 1 (top) shows the physical field in a 4 × 4 mesh is decomposed into 4 coarser-resolution fields, each of which is handled by a small neural network. We could also do similar things along the temporal dimension, as is shown in Figure 1 (bottom). The group of coarse-resolution solvers as well as the decomposition and reconstruction operations can be seen as an end-to-end neural PDE solver, which can be trained with the physics constrained loss that resolves small-scale physics in a sufficiently high resolution. Because the neural networks can run in parallel, the original simulation is achieved as fast as a coarse-resolution neural solver. In addition, the trained neural networks can predict the PDE’s solution in various levels of resolution, ranging from the resolution of the individual coarse-resolution solver to the resolution of the physics constrained loss by the combination of all these solvers. We believe that such flexibility is vital in balancing the computational resources and the resolution. We demonstrate the effectiveness of the NeuralStagger in the Navier-Stokes equation with three parametric settings, e.g., periodic boundary conditions with varied initial conditions, lid-driven cavity boundary conditions with varied initial conditions, and the flow around the obstacle with varied obstacles and initial conditions. We find that with NeuralStagger, the learned networks can conduct accurate and stable simulation with 10∼100 times speed-up over SOTA neural PDE solvers. In addition, we demonstrate that they can accurately tackle the optimal control task with autodifferentiation. Our contributions can be summarized in three parts: • We propose a general methodology called NeuralStagger to accelerate neural PDE solving by spatially and temporally decomposing the learning task and running a group of coarseresolution solvers in parallelism. • The learned network group can provide solutions in multiple resolutions from the coarsest one by a single network to the original resolution, which provides the flexibility to balance the computational resources and the resolution. • Empirically, we demonstrate that the methodology leads to 10 to 100 times speed-up over SOTA neural PDE solvers as well as the efficient solution on optimal control. In the following sections, we first briefly summarize the related works in Section 2 and then introduce the preliminaries and the proposed NeuralStagger in Section 3. To showcase the efficiency and accuracy of the proposed method, we present the settings of the experiment and results in Section 4. Finally, we conclude and discuss the future work in Section 5. 2 RELATED WORK In general, two mainstream approaches have been widely used for solving PDEs. The first is to approximate the PDE’s solution function with neural networks (Raissi et al., 2019; 2020; Jin et al., 2021). They have proved to be successful in tackling high-dimensional problems and inverse problems. 
The second is to learn a PDE solver to solve parametric PDEs. The neural PDE solver can learn the solutions of a class of PDEs, and thus can generalize to PDEs with different parameters. Our work is mainly about the accelerating the second type. Many impressive works have been done to improve the neural solver for parametric PDEs in terms of neural network design, e.g., convolutional neural network (Guo et al., 2016; Tompson et al., 2017; Bhatnagar et al., 2019), graph neural networks (Pfaff et al., 2020), the multipole graph kernel (Li et al., 2020b), Fourier neural operators (Li et al., 2020a; Guibas et al., 2021), the message passing neural network (Brandstetter et al., 2022b), deepOnet (Lu et al., 2021a), Clifford neural networks (Brandstetter et al., 2022a) and so on. After being trained with pre-generated simulated data and labels, they can solve the PDE several magnitudes faster than conventional numerical solvers with competitive accuracy. Recently there are raising concerns about the cost of collecting training data and the generalization ability, so several works have introduced the physics constrained loss for training. For example, (Wang et al., 2021) combined the DeepOnet with a physics-informed way to improve the sample efficiency. Zhu et al. (2019) proposed physics constrained loss for high-dimensional surrogate modeling and (Geneva & Zabaras, 2020) introduced the use of a physics constrained framework to achieve the data-free training in the case of Burgers equations. Wandel et al. (2020; 2021) proposed the physics constrained loss based on the certain approximation of the Navier-Stokes equation to solve fluidlike flow problems. Shi et al. (2022) proposed a general physics constrained loss called mean square residual (MSR) loss as well as a neural network called LordNet for better performance. However, the physics constrained loss by certain approximations require the approximation to be sufficiently close to the continuous version, resulting in a relatively high-resolution discretization. Thus in complex and large-scale problems, the neural solver must be large enough for expressiveness and its inference would still be slow. Although some works (Wang et al., 2021) directly calculate the derivatives via back-propagation through the neural network, they are known to have similar training problems as PINN, e.g., converging to trivial solutions. Interestingly in the case of regular mesh, the proposed spatial decomposition is the same in the implementation as ‘pixel shuffle’ from computer vision. There are a huge number of works in this direction, but the most related one might be (Ren et al., 2022) which leverages pixel shuffle and physics constrained loss in the super-resolution task. However, we are fundamentally different in target and solution. For example, we train multiple solvers to work in full parallelism and obtain the solution in multiple levels of resolution without training them again. We also find similar treatment on meshes in classical numerical methods, e.g., staggered-mesh and leap-frog integration. However, they are also fundamentally different in target and implementation. The numerical methods often place meshes of multiple fields with offsets to get more accurate approximation while NeuralStagger splits the mesh of every single field into multiple sub-meshes for defining the independent subtasks. 
In addition, they are orthogonal to NeuralStagger, i.e., one can leverage both a staggered mesh to define the physics constrained loss and NeuralStagger to train multiple coarse-resolution solvers at the same time, as is done in our experiments.

3 METHODOLOGY

3.1 PRELIMINARIES

Consider a connected domain Ω ⊆ R^n with boundary ∂Ω, and let (A, U, V) be separable Banach spaces. Then parametric PDEs can be defined in the form
$$S(u, a)(x) = 0, \quad x \in \Omega, \qquad (1)$$
where S : U × A → V is a linear or nonlinear differential operator, a ∈ A denotes the parameters drawn from a certain distribution µ, such as coefficient functions or boundary/initial conditions, and u ∈ U is the corresponding unknown solution function. Further, we can define the solution operator of the parametric PDE, G : A → U, which maps between two infinite-dimensional function spaces. A main branch of work on neural PDE solvers approximates the solution operator by discretizing the functions into finite-dimensional spaces, denoted by Â and Û, and learning the mapping f_θ : Â → Û. Correspondingly, we have a discretized version of the PDE's operator S obtained via certain finite-dimensional approximations such as the finite difference method (FDM) or the finite element method (FEM), which we denote by Ŝ. We denote the vector of function values on a mesh with the hat symbol, e.g., â is the vector of the PDE's parameter a ∼ µ. Then the physics constrained loss is defined by forcing the predicted solution û ∈ Û to satisfy Ŝ given â ∈ Â. For example, LordNet (Shi et al., 2022) proposed the general form with the mean squared error as follows,
$$\mathcal{L}(\theta) = \mathbb{E}_{a \sim \mu}\,\big\|\hat{S}(f_\theta(\hat{a}), \hat{a})\big\|^2. \qquad (2)$$
In this paper, we mainly focus on time-dependent problems of the form
$$S(u, a)(t, x) = 0, \quad (t, x) \in [0, T] \times \Omega. \qquad (3)$$
The temporal dimension is discretized with the time step ∆t, and the neural solver solves the PDE in an auto-regressive way,
$$\hat{u}_{t+\Delta t} = f_\theta(\hat{u}_t, \hat{a}), \qquad (4)$$
where û_t is the corresponding discretized vector of the function u at time t. Figure 2 shows an example with a 4 × 4 rectangular mesh. Notice that, similar to traditional numerical methods, the resolution of the finite-dimensional approximation in the physics constrained loss, in both the spatial and the temporal dimension, must be sufficiently high; otherwise, the approximation error will be too large to guide the neural PDE solver. This leads to huge inputs and outputs as well as large neural networks to ensure expressiveness, whose inference would also be slow.

3.2 NEURALSTAGGER

We propose a general methodology called NeuralStagger to gain further acceleration by exploiting the potential parallelism in the neural PDE solver. NeuralStagger decomposes the original learning task that maps û_t to û_{t+∆t} into several parallelizable subtasks in both the spatial and the temporal dimension. The meshes of the subtasks spread evenly over the original field and stagger with each other. Then we can handle each subtask with a computationally cheap neural network. The decomposition strategy is introduced as follows.

Spatial decomposition. The upper part of Figure 1 shows the 2-dimensional example with a regular mesh. We first split the grid into patches of size s_H × s_W and construct a sub-grid by selecting only one point in each patch, resulting in s_H × s_W sub-grids evenly spread over the domain. We denote the functions on each sub-grid as û^{i,j}_t and â^{i,j}, where i and j represent the relative position of the sub-grid in the horizontal and vertical directions.
Then we use s_H × s_W neural networks to learn to predict the solution at t + ∆t as follows,
$$\hat{u}^{i,j}_{t+\Delta t} = f_{\theta_{i,j}}(\hat{u}^{i,j}_t, \hat{a}^{i,j}), \qquad (5)$$
where f_{θ_{i,j}} is the neural network for the sub-grid at position (i, j). The outputs û^{i,j}_{t+∆t} compose the solution on the original grid. Then the neural networks can be jointly trained with the physics constrained loss defined on the original grid. Notice that the neural networks are independent of each other and can be fully parallelized. As the input and output shrink by a factor of s_H × s_W, each neural network can be much smaller and faster than the single network that would otherwise be used as the neural solver. The decomposition rules can be extended to higher-dimensional cases. In addition, the learning tasks on the sub-grids are quite close to each other, except for differences at the boundary of the domain, so we share the parameters of the neural networks f_{θ_{i,j}} to reduce redundancy and accelerate training. Meanwhile, because there are often tiny differences between the inputs of the subtasks, we encourage the neural network to distinguish them by adding the positional information of each grid point as additional input channels.

Temporal decomposition. We can treat the temporal dimension as a 1-dimensional grid with a fixed step ∆t. Thus we can also decompose this grid into s_T sub-grids by selecting one point out of every s_T points, where instead of predicting û_{t+∆t}, the neural network predicts û_{t+s_T ∆t},
$$\hat{u}_{t+s_T \Delta t} = f_\theta(\hat{u}_t, \hat{a}). \qquad (6)$$
Given the solution sequence from t to t + (s_T − 1)∆t, denoted by û_{t,s_T} for simplicity, we can get the next sequence of the solution, û_{t+s_T ∆t, s_T}. Then the physics constrained loss is defined on the sequence with time step ∆t, as shown in the lower part of Figure 1. Once the neural network is trained, we can generate the sequence û_{t+s_T ∆t, s_T} by running the neural network inference of Eq. 6 with s_T threads in parallel on the inputs û_{t,s_T}. This non-auto-regressive process generates the solution at s_T time steps within one inference step, which can be much faster than the original version (Figure 2) with s_T inference steps. Note that although we only need the initial condition for the coarsest-resolution test, we must prepare the first s_T states with numerical solvers for training and for the high-resolution test. However, this drawback is negligible for long-time simulations.

The spatial and temporal decompositions are orthogonal and can be used at the same time. We denote the joint decomposition operator as D_s, the transformation operator of the neural networks as F_Θ, and the reconstruction operator as E_s, where s represents all decomposition factors, including s_H, s_W and s_T, and Θ represents all parameters of the neural network group. The physics constrained loss with the spatial-temporal decomposition can be written as
$$\mathcal{L}(\Theta) = \mathbb{E}_{\hat{u}_{t,s_T}} \left\| \hat{S}\left( \mathcal{E}_s\left( \mathcal{F}_\Theta\left( \mathcal{D}_s(\hat{u}_{t,s_T}, \hat{a}) \right) \right), \hat{u}_{t,s_T}, \hat{a} \right) \right\|^2. \qquad (7)$$
In addition, as the sub-grids spread evenly over the domain of the PDE, each of them can be seen as a down-sampled version of the original problem, where a local patch is reduced to the point at a fixed relative position in the patch. Therefore, the learned neural networks are naturally coarse-resolution solvers of the PDE. Suppose (H, W, T) is the tuple of the original height, width, and time span on which the physics constrained loss is conducted. Then the coarse-resolution solvers operate at the resolution (H/s_H, W/s_W, T/s_T).
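As a concrete illustration of the operators D_s and E_s and of the shared coarse-resolution solver with positional channels, here is a minimal PyTorch sketch under simplifying assumptions: the temporal factor is omitted, the solver is a placeholder two-layer CNN rather than the FNO/LordNet/U-Net used in the experiments, and the positional encoding is a simple per-sub-grid offset channel. None of these choices are claimed to match the paper's exact implementation.

```python
import torch

def decompose(u, sH, sW):
    """Split a (B, C, H, W) field into sH*sW staggered coarse fields (pixel-unshuffle pattern).

    Returns a tensor of shape (B, sH*sW, C, H//sH, W//sW); sub-grid (i, j) holds the points
    sitting at offset (i, j) inside each sH x sW patch of the original grid."""
    B, C, H, W = u.shape
    u = u.reshape(B, C, H // sH, sH, W // sW, sW)
    u = u.permute(0, 3, 5, 1, 2, 4)                   # B, sH, sW, C, H/sH, W/sW
    return u.reshape(B, sH * sW, C, H // sH, W // sW)

def reconstruct(parts, sH, sW):
    """Inverse of decompose: rearrange the coarse predictions back to the full resolution."""
    B, S, C, h, w = parts.shape
    parts = parts.reshape(B, sH, sW, C, h, w).permute(0, 3, 4, 1, 5, 2)
    return parts.reshape(B, C, h * sH, w * sW)

class CoarseSolver(torch.nn.Module):
    """Placeholder lightweight solver; the paper uses FNO / LordNet / U-Net here."""
    def __init__(self, channels):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(channels + 2, 64, 3, padding=1), torch.nn.GELU(),
            torch.nn.Conv2d(64, channels, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def neuralstagger_step(u_t, solver, sH, sW):
    """One NeuralStagger prediction of u_{t+dt} (temporal factor s_T omitted for brevity)."""
    B, C, H, W = u_t.shape
    parts = decompose(u_t, sH, sW)                    # (B, S, C, h, w)
    S, h, w = parts.shape[1], parts.shape[3], parts.shape[4]
    # Positional channels tell the shared network which sub-grid offset it is handling.
    ii, jj = torch.meshgrid(torch.arange(sH), torch.arange(sW), indexing="ij")
    pos = torch.stack([ii.flatten(), jj.flatten()], dim=1).float()   # (S, 2) offsets
    pos = pos.view(1, S, 2, 1, 1).expand(B, S, 2, h, w)
    inp = torch.cat([parts, pos], dim=2).reshape(B * S, C + 2, h, w)
    out = solver(inp).reshape(B, S, C, h, w)          # all sub-grids in one parallel batch
    return reconstruct(out, sH, sW)                   # full-resolution prediction

# Training sketch: u_pred = neuralstagger_step(u_t, solver, sH=2, sW=2); the physics
# constrained residual S_hat(u_pred, u_t, a_hat) is then evaluated on the fine grid as in Eq. (7).
```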
Meanwhile, we can infer at multiple levels of resolution, ranging from that of the coarse-resolution solvers to the original one, all of which can reach the same speed through parallelism.

3.3 CHOICE OF THE DECOMPOSITION FACTORS

Obviously, the acceleration effect of NeuralStagger grows as we use larger s_H, s_W and s_T. However, these decomposition factors cannot be arbitrarily large. We identify two potential constraints, i.e., the increased complexity of the learning task and the information loss in the input. We use the following 2-dimensional diffusion equation with periodic boundary conditions as an example to explain the two constraints,
$$\frac{\partial u(x, y, t)}{\partial t} = \Delta u(x, y, t), \quad x, y, t \in [0, 1], \qquad (8)$$
$$u(x, y, 0) = f(x, y), \quad x, y \in [0, 1], \qquad (9)$$
where u is the density function of the diffusing material, ∆ is the Laplacian operator, and f is the initial condition. We use a regular mesh with d points in total and leverage the central difference scheme with spatial step ∆x and temporal step ∆t. Then the PDE is transformed into a matrix equation on the discretized solution at time t, denoted by û_t ∈ R^d.

Increased complexity of the learning task. For the temporal dimension, we find that a larger decomposition factor might make the mapping from the input to the prediction more complex. For the linear diffusion equation, we can explicitly calculate the transfer matrix from û_i to û_{i+∆t} based on the matrix equation. Suppose the transfer matrix is T_i ∈ R^{d×d}. By iteratively applying the transfer matrix, we can get the transformation from the initial condition û_0 to the solution at any time step k as follows,
$$\hat{u}_{k\Delta t} = \hat{u}_0 \prod_{i=0}^{k-1} T_i. \qquad (10)$$
For notational simplicity, we denote the resulting transfer matrix from û_0 to û_{k∆t} as T_k. Under a suitable arrangement, T_k is a band matrix whose non-zero values are concentrated around the diagonal. The bandwidth indicates the sparsity of the matrix as well as how locally the points in the mesh entangle with each other. We observe that the bandwidth grows linearly with k. For example, Figure 3 shows the case of d = 64^2. When k ≥ 60, the matrix is dense and every element in û_{k∆t} is a weighted summation of almost all the elements in û_t. This indicates that increasing k may make the entanglement between the grid points more complex, leading to a harder learning task for the neural network.

Information loss. With spatial decomposition, each sub-grid only retains a small part of the original grid. Obviously, this may introduce information loss if the dropped points are important for the prediction in the subtasks. Here we theoretically characterize the information loss caused by spatial decomposition under the linear model setting, i.e., f(û_t) = û_t W*. Consider the diffusion equation and the corresponding matrix equation. With some abuse of notation, the superscript i denotes the index of training samples, such as û^i_t, and the bold symbol without the superscript i denotes the matrix composed of all the samples, such as û_t. With N training samples, the physics constrained loss aims to learn the parameters W* of the linear model that satisfy:
$$W^* = \arg\min_W \frac{1}{N} \sum_{i=1}^{N} \|\hat{u}^i_t W - y^i\|^2, \qquad (11)$$
where y^i denotes the remaining parts of the matrix equation. By applying spatial decomposition, the input and output are equally partitioned into K = s_H s_W sub-grids {û^1_t, ..., û^K_t} and {û^1_{t+1}, ..., û^K_{t+1}}.
Then, according to the physics constrained loss, the optimization goal becomes:
$$W^*_1, \cdots, W^*_K = \arg\min_{W_1, \cdots, W_K} \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \|\hat{u}^{i,k}_t W_k - y^{i,k}\|^2, \qquad (12)$$
where W_k ∈ R^{m×m} and m = d/K for k = 1, ..., K. The next proposition gives a sufficient condition for Eq. (11) and Eq. (12) to make equal predictions.

Proposition 1. If rank(û_t) = rank(û^k_t), the models û_t W* and û^k_t W*_k make the same prediction on y^k.

We put the proof in the appendix. In many physical scenarios, the local patches of size s_H s_W are not distributed arbitrarily in the ambient space R^{s_H s_W}, but rather live on some low-dimensional manifold. Hence, there is much information redundancy in û_t, and with careful settings of s_H and s_W, the rank after the decomposition does not change much, indicating similar predictions on y^k. With deep learning models f_θ such as those we use in this paper, we believe that more complex local patterns can be resolved and the spatial factors can be set larger.

4 EXPERIMENTS

To evaluate the acceleration effect and accuracy of the proposed method, we test three cases of fluid dynamics simulation governed by the Navier-Stokes equation. We first target two benchmark settings, i.e., the periodic boundary condition and the lid-driven cavity boundary condition (Zienkiewicz et al., 2006). In both settings, the initial condition varies, and the neural PDE solver learns to generalize to various initial conditions. Next, we test the more challenging case called flow around obstacles, where several obstacles are placed inside the flow. The neural PDE solver is trained to generalize to different obstacles as well as initial conditions. In addition, the state of the fluid changes substantially over time; to ensure that the neural solver generalizes to various states, we must maintain a training pool that stores states newly predicted during training. Finally, we also evaluate the capability on the inverse problem, i.e., optimal control in the flow-around-obstacles setting. In general, we consider the 2-dimensional incompressible Navier-Stokes equations as follows:
$$\rho \left( \frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\vec{v} \right) = -\nabla p + \mu \Delta \vec{v} + \vec{f}, \qquad (13)$$
$$\nabla \cdot \vec{v} = 0, \qquad (14)$$
where v⃗ is the fluid velocity field, p is the pressure field, µ is the viscosity, and f⃗ is the external force. In all experiments, we trained the neural networks with the Adam optimizer and decayed learning rates. The speed test is done on Nvidia A100 GPUs under the assumption that we have sufficient computational resources for each coarse-resolution solver. See Appendix Section 6.2 for more details.

4.1 PERIODIC AND LID-DRIVEN CAVITY BOUNDARY CONDITION

We first test the Navier-Stokes equation with the periodic boundary condition and the lid-driven cavity boundary condition. In both cases, the physics constrained loss is obtained by discretizing the vorticity-stream equation with the central-difference scheme and the Crank-Nicolson method on the 64 × 64 regular mesh. The time step ∆t is 1e-2 and the viscosity ν is 1e-3. We use the popular FNO (Li et al., 2020a) to test the accuracy and speed under different settings of the decomposition factors. The ground truth is obtained by FDM. We evaluate the accuracy by auto-regressively running the inference of the neural solver across the target length along time L_T and comparing the terminal state with that of the ground truth. Note that we compare all results on the original mesh, and thus the spatially decomposed results are reconstructed to the 64 × 64 resolution for evaluation.
We measure with the relative error which is calculated by dividing the L2 norm of the error by the L2 norm of the ground truth. The measurement is denoted by Error-k where k is the number of time steps. Following the notations in Section 3.2, the decomposition factors along x dimension, z dimension and the temporal dimension are denoted by sW , sH and sT . In general, NeuralStagger achieves acceleration in both cases without losing much accuracy. As you can see in Figure 5, the coarseresolution solver is also accurate when applied alone without reconstruction. In the case of the periodic boundary condition, the target length along time LT equals 2, which is 200 time steps. The flow is driven by the external force f⃗ , which is introduced in the appendix. As you can see in Figure 4 (left), the relative errors of the learned neural solvers are lower than 0.2% in all settings of spatial and temporal decomposition factors. In terms of speed, with the most aggressive setting sT = 40, sH = sW = 2, and full parallelism, the inference time for the 200- time-steps simulation is 0.076 seconds on average. Compared to 0.36 seconds by the baseline without NeuralStagger, there is 47× speed-up. We can also observe some trends in accuracy with regard to the choice of spatial and temporal factors. Error1 grows like a linear function with the temporal factor sT in both spatial factor settings. The reason is that the learning task becomes more complex as we discuss in Section 3.3, and with the neural network unchanged, the accuracy drops. Meanwhile, the accumulated errors, i.e., Error200, almost keep at the same level. This is because the steps in the auto-regressive procedure reduce as sT grows, e.g., when sT = 40, the neural networks for subtasks only predict 200/40 = 5 steps ahead. The benefit perfectly neutralizes the detriment of the increased task complexity. In the case of the lid-driven cavity boundary condition, the fluid acts in a cavity consisting of three rigid walls with no-slip conditions and a lid moving with a steady tangential velocity 1. We set the length of time LT = 27, much larger than that with the periodic boundary, to see if the simulation converges to the right steady state. With larger LT , we try larger temporal skip factors such as sT = 108. As is shown in Figure 4 (right), the relative errors are all controlled below 0.5% even after 2700 time steps. Again, with the most aggressive setting sT = 108, sH = sW = 2 and full parallelism, the neural solver finishes the 2700-time-steps simulation within 0.038 seconds, about 119× faster than the baseline, i.e., 4.49 seconds. Different from the periodic boundary condition, the accuracy drops when we increase sT . The reason is that the increase of sT brings more detriments of task complexity than the benefits from the shorter auto-regressive sequence. 4.2 FLOW AROUND OBSTACLES In this section, we evaluate NeuralStagger in a larger and more complex setting called flow around obstacles. The setting is the same as that used in (Wandel et al., 2020), which is also our baseline. The fluid runs through a pipe, where we put different shapes of obstacles to affect the flow, including rotating cylinders and walls constructing a folded pipe. The external forces in Eq. 13 are neglected and set to 0. The neural solver is trained to generalize to different settings of the obstacles, including the shape and the velocity on the surface as well as the inflow/outflow velocities. 
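For concreteness, a short sketch of the Error-k evaluation just described, assuming a hypothetical `step_fn` that wraps the trained solver group and advances the solution by s_T time steps per inference call (names and signatures are illustrative, not from the paper's code):

```python
import torch

def relative_error(pred, truth):
    """||pred - truth||_2 / ||truth||_2, evaluated on the original 64 x 64 mesh."""
    return (torch.linalg.vector_norm(pred - truth) / torch.linalg.vector_norm(truth)).item()

@torch.no_grad()
def error_after_k_steps(step_fn, u0, truth, k, sT):
    """Error-k: auto-regressively apply the solver, which advances sT time steps per
    inference call, and compare the terminal state with the ground-truth state at step k."""
    u = u0
    for _ in range(k // sT):
        u = step_fn(u)
    return relative_error(u, truth)
```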
Then we evaluate the neural solver in 5 randomly sampled configurations in both the cylinder case and the folded pipe case. You may refer to the appendix for more details. We leverage the same configurations as those in (Wandel et al., 2020) including the discretization method, the physics constrained loss, training strategies, the input features, the predicted variables as well as the evaluation metric. Specifically, the rectangular domain is discretized into a 100 × 300 regular mesh and ∆t = 4. The physics constrained loss is used as the evaluation metric, measuring to what extent the prediction at the next time step satisfies the PDE given the current fluid state and the boundary conditions. As the fields of the fluid change much over time, we maintain a training pool initialized with a set of initial conditions and incrementally enrich it as the training goes. This is achieved because the predictions from the neural network can be seen as new data if the neural network has been well fitted in the current pool. One can refer to (Wandel et al., 2020) for more details. Wandel et al. (2020) leverages U-net as the neural solver, but to demonstrate the full potential of NeuralStagger, we also try the other two neural network architectures, i.e., FNO and LordNet (Shi et al., 2022) which also leverages the physics constrained loss to train the neural PDE solver. We directly use the trained U-net from the official open-source repository of (Wandel et al., 2020) for evaluation and train FNO and LordNet from scratch. The experiments in Table 1 show that LordNet outperforms the other two neural networks in the baseline setting without NeuralStagger. Therefore, we use LordNet for further experiments on the choice of spatial and temporal factors. We find that in this case, the information from the 100 × 100 grid (sH = 1, sW = 3) is sufficient to achieve comparable results to the U-net baseline, while larger spatial steps will introduce too much information loss. In addition, it seems increasing the temporal factors hurts the accuracy more obviously than those in the periodic boundary condition and the lid-driven boundary condition, though the accuracy is still comparable to U-net even with sT = 16. We believe this is because the dataset is incrementally explored by maintaining a training pool and enriching it with the neural network’s predictions during training. However, the predictions may not be accurate. As the physics constrained loss is defined on ût+(sT−1)∆t and ût+sT∆t, inaccurate ût+(sT−1)∆t may mislead the neural network to the wrong direction. When we increase sT , more errors will be accumulated along the sequence from ût the ût+(sT−1)∆t and the training will be harder. Designing training algorithms to better support NeuralStagger remains unexplored and we leave it for future work. In terms of speed, the choices of spatial and temporal factors lead to different levels of acceleration, as is shown in Table 1, where GMACs (multiply-accumulate Operations) per card is the average computational load of simulation for 16 timesteps. Specifically, the largest factor configuration to keep the accuracy comparable to the baseline is sT = 16, sH = 1, sW = 3, leading to the largest decrease in GMACs per card, i.e., 1/32 of the baseline U-net and 1/48 of LordNet without NeuralStagger. Specifically, when tested with A100 cards, it leads to 28× speed-up over U-net and 17× over LordNet without NeuralStagger. 
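The pool-based training strategy mentioned above can be summarized with the following schematic sketch (loosely following Wandel et al. (2020)); `physics_residual`, the batch size, and the pool-refresh interval are illustrative placeholders rather than the exact implementation, which additionally tracks the boundary conditions attached to each stored state:

```python
import random
import torch

def train_with_pool(solver, physics_residual, init_states, steps, batch_size=16, lr=1e-3):
    """Sketch of pool-based training: states predicted by the partially trained solver are
    fed back into the pool so the network explores the distribution of fluid states it will
    encounter at test time. physics_residual(u_next, u) stands for the discretized PDE residual."""
    pool = [s.clone() for s in init_states]
    opt = torch.optim.Adam(solver.parameters(), lr=lr)
    for it in range(steps):
        batch = random.sample(pool, batch_size)
        u = torch.stack(batch)
        u_next = solver(u)
        loss = physics_residual(u_next, u).pow(2).mean()   # MSR-style physics constrained loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        if it % 10 == 0:                                   # periodically enrich the pool
            pool.extend(s.detach() for s in u_next)
    return solver
```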
1. What is the focus and contribution of the paper regarding partial differential equations?
2. What are the strengths of the proposed method, particularly in terms of efficiency and parallelization?
3. What are the weaknesses of the paper, especially regarding experimental results and comparisons with traditional methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose a method to accelerate the solution of partial differential equations (PDEs). Numerical methods for solving PDEs require discretizing space and time into a fine mesh, and the higher the resolution of this discretization, the higher the computational cost. The authors propose to divide the fine mesh into several coarse-resolution systems, use a neural network to solve these coarse systems, and then combine them again to get the fine-mesh solution. Smaller neural networks applied in parallel, together with predicting multiple time-step solutions in parallel, also help accelerate the approach.

Strengths And Weaknesses
Strength: The authors tackle the important problem of solving PDEs efficiently. PDEs are ubiquitous in modeling many natural systems, so they can have a great impact on science. In this work, the proposed speed-up comes from using smaller neural networks for coarser resolutions and from parallelization.
Weaknesses:
For temporal decomposition, running k parallel threads requires the solutions of the first k time steps, not only the initial condition. Traditional numerical methods only require the initial condition.
The improvements in the results are not that compelling, although they are superior on a multiplicative scale. The authors may want to test on grids bigger than 64*64.
In Figure 5, put the ground truth as well for comparison.
An ablation study with the spatial decomposition removed would be a good addition to the experiments. In the current form, it is not clear where most of the improvements come from, whether from the spatial or the temporal decomposition.

Clarity, Quality, Novelty And Reproducibility
The authors' idea is clear and novel. No code is provided for reproducibility.
ICLR
Title Word2net: Deep Representations of Language Abstract Word embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting heldout words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. 1 Introduction Word embeddings are an important statistical tool for analyzing language, processing large datasets of text to learn meaningful vector representations of the vocabulary (Bengio et al., 2003; 2006; Mikolov et al., 2013b; Pennington et al., 2014). Word embeddings rely on the distributional hypothesis, that words used in the same contexts tend to have similar meanings (Harris, 1954). More informally (but equally accurate), a word is defined by the company it keeps (Firth, 1957). While there are many extensions and variants of embeddings, most rely on a log-bilinear model. This model posits that each term is associated with an embedding vector and a context vector. Given a corpus of text, these vectors are fit to maximize an objective function that involves the inner product of each observed word’s embedding with the sum of the context vectors of its surrounding words. With useful ways to handle large vocabularies, such as negative sampling (Mikolov et al., 2013a) or Bernoulli embeddings (Rudolph et al., 2016), the word embedding objective resembles a bank of coupled linear binary classifiers. Here we introduce word2net, a word embedding method that relaxes this linear assumption. Word2net still posits a context vector for each term, but it replaces each word vector with a term-specific neural network. This word network takes in the sum of the surrounding context vectors and outputs the occurrence probability of the word. The word2net objective involves the output of each word’s network evaluated with its surrounding words as input. The word2net objective resembles a bank of coupled non-linear binary classifiers. How does word2net build on classical word embeddings? The main difference is that the word networks can capture non-linear interaction effects between co-occurring words; this leads to a better model of language. Furthermore, the word networks enable us to share per-term parameters based on word-level meta-data, such as syntactic information. Here we study word2net models that share parameters based on part-of-speech (pos) tags, where the parameters of certain layers of each network are shared by all terms tagged with the same pos tag. Figure 1a illustrates the intuition behind word2net. Consider the term increase. 
The top of the figure shows one observation of the word, i.e., one of the places in which it appears in the data. (This excerpt is from U.S. Senate speeches.) From this observation, the word2net objective contains the probability of a binary variable w_{n,increase} conditional on its context (i.e., the sum of the context vectors of the surrounding words). This variable indicates whether increase occurred at position n. The idea behind word2net is that the conditional probability of w_{n,increase} is the output of a multi-layer network that takes the context as input. Each layer of the network transforms the context into a new hidden representation, reweighting the latent features according to their relevance for predicting the occurrence of increase.
Note that not illustrated are the 0-variables, i.e., the negative samples, which correspond to words that are not at position n. In word2net, their probabilities also come from their corresponding word networks. Now suppose we have tagged the corpus with pos. Figure 1b shows how to incorporate this syntactic information into word2net. The network is specific to increase as a noun (as opposed to a verb). The parameters of the first layer (orange) are shared among all nouns in the collection; the other layers (blue) are specific to increase. Thus, the networks for increase/noun and increase/verb differ in how the first layer promotes the latent aspects of the context, i.e., according to which context features are more relevant for each pos tag. This model further lets us consider these two pos tags separately. Figure 1c shows the most similar words to each sense of increase; the method correctly picks out tagged words related to the verb and related to the noun. Below, we develop the details of word2net and study its performance with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. We found that word2net outperforms popular embedding methods on predicting held-out words, and that sharing parameters based on pos further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. Related work. Word2net builds on word embedding methods. Though originally designed as deep neural network architectures (Bengio et al., 2003; 2006; Mnih & Hinton, 2007), most applications of word embeddings now rely on log-bilinear models (Mikolov et al., 2013a;b;c; Pennington et al., 2014; Mnih & Teh, 2012; Mnih & Kavukcuoglu, 2013; Levy & Goldberg, 2014; Vilnis & McCallum, 2015; Barkan, 2016; Bamler & Mandt, 2017). The key innovation behind word2net is that it represents words with functions, instead of vectors (Rumelhart et al., 1986) or distributions (Vilnis & McCallum, 2015). Word2net keeps context vectors, but it replaces the embedding vector with a neural network. Previous work has also used deep neural networks for word embeddings (Bengio et al., 2003; 2006; Mnih & Hinton, 2007); these methods use a single network that outputs the unnormalized log probabilities for all words in the vocabulary. Word2net takes a different strategy: it has a separate network for each vocabulary word. Unlike the previous methods, word2net's approach helps maintain the objective as a bank of binary classifiers, which allows for faster optimization of the networks. To develop word2net, we adopt the perspective of exponential family embeddings (Rudolph et al., 2016), which extend word embeddings to data beyond text. There are several extensions to exponential family embeddings (Rudolph & Blei, 2017; Rudolph et al., 2017; Liu & Blei, 2017), but they all have in common an exponential family likelihood whose natural parameter has a log-bilinear form. Word2net extends this framework to allow for non-linear relationships. Here we focus on Bernoulli embeddings, which are related to word embeddings with negative sampling, but our approach easily generalizes to other exponential family distributions (e.g., Poisson). Finally, word embeddings can capture semantic properties of the word, but they tend to neglect most of the syntactic information (Andreas & Klein, 2014). Word2net introduces a simple way to leverage the syntactic information to improve the quality of the word representations.
2 Word2Net

In this section we develop word2net as a novel extension of Bernoulli embeddings (Rudolph et al., 2016). Bernoulli embeddings are a conditional model of text, closely related to word2vec. Specifically, they are related to continuous bag-of-words (cbow) with negative sampling.1 We first review Bernoulli embeddings and then we present word2net as a deep Bernoulli embedding model.

2.1 Background: Bernoulli embeddings

Exponential family embeddings learn an embedding vector ρ_v ∈ R^K and a context vector α_v ∈ R^K for each unique term in the vocabulary, v = 1, ..., V. These vectors encode the semantic properties of words, and they are used to parameterize the conditional probability of a word given its context. Specifically, let w_n be the V-length one-hot vector indicating the word at location n, such that w_nv = 1 for one term (vocabulary word) v, and let c_n be the indices of the words in a fixed-sized window centered at location n (i.e., the indices of the context words). Exponential family embeddings parameterize the conditional probability of the target word given its context via a linear combination of the embedding vector and the context vectors,

p(w_nv | c_n) = Bernoulli( σ(ρ_v^⊤ Σ_n) ),  with  Σ_n := ∑_{v′ ∈ c_n} α_{v′}.   (1)

Here, σ(x) = 1/(1 + e^{−x}) is the sigmoid function, and we have introduced the notation Σ_n for the sum of the context vectors at location n. Note that Eq. 1 does not impose the constraint that the sum over the vocabulary words ∑_v p(w_nv = 1 | c_n) must be 1. This significantly alleviates the computational complexity (Mikolov et al., 2013b; Rudolph et al., 2016). This type of exponential family embedding is called Bernoulli embedding, named for its conditional distribution. In Bernoulli embeddings, our goal is to learn the embedding vectors ρ_v and the context vectors α_v from the text by maximizing the log probability of words given their contexts. The data contains N pairs (w_n, c_n) of words and their contexts, and thus we can form the objective function L(ρ, α) as the sum of log p(w_nv | c_n) for all instances and vocabulary words. The resulting objective can be seen as a bank of V binary classifiers, where V is the vocabulary size. To see that, we make use of Eq. 1 and express the objective L(ρ, α) as a sum over vocabulary words,

L(ρ, α) = ∑_{n=1}^{N} ∑_{v=1}^{V} log p(w_nv | c_n) = ∑_{v=1}^{V} [ ∑_{n: w_nv=1} log σ(ρ_v^⊤ Σ_n) + ∑_{n: w_nv=0} log σ(−ρ_v^⊤ Σ_n) ].   (2)

If we hold all the context vectors α_v fixed, then Eq. 2 is the objective of V independent logistic regressors, each predicting whether a word appears in a given context or it does not. The positive examples are those where word v actually appeared in a given context; the negative examples are those where v did not appear. It is the context vectors that couple the V binary classifiers together. In practice, we need to either downweight the contribution of the zeros in Eq. 2, or subsample the set of negative examples for each n (Rudolph et al., 2016). We follow the latter case here, which leads to negative sampling (Mikolov et al., 2013b). (See the connection in more detail in Appendix B.)

1 See Appendix B for more details on the connections.
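A minimal numpy sketch of the Bernoulli-embedding computation in Eqs. 1 and 2; the vocabulary size, dimensions, and random initialization are illustrative, not a fitted model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: V = vocabulary size, K = embedding dimension.
V, K = 15000, 20
rng = np.random.default_rng(0)
rho = 0.01 * rng.standard_normal((V, K))    # per-term embedding vectors rho_v
alpha = 0.01 * rng.standard_normal((V, K))  # per-term context vectors alpha_v

def bernoulli_embedding_prob(v, context_ids):
    """Eq. 1: p(w_nv = 1 | c_n) = sigmoid(rho_v . Sigma_n), Sigma_n = sum of context vectors."""
    Sigma_n = alpha[context_ids].sum(axis=0)
    return sigmoid(rho[v] @ Sigma_n)

def log_likelihood_term(v, context_ids, observed):
    """One summand of Eq. 2: log sigma(x) for positives, log sigma(-x) for negatives."""
    Sigma_n = alpha[context_ids].sum(axis=0)
    x = rho[v] @ Sigma_n
    return np.log(sigmoid(x)) if observed else np.log(sigmoid(-x))

print(bernoulli_embedding_prob(3, [10, 42, 7, 99]))
```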
2.2 Word2Net as a deep Bernoulli embedding model

Word2net replaces the linear classifiers in Eq. 2 with non-linear classifiers. In particular, we replace the linear combination ρ_v^⊤ Σ_n with a neural network that is specific to each vocabulary word v, so that

p(w_nv = 1 | c_n) = f(Σ_n; β_v),   (3)

where f(·; β_v): R^K → R is a feed-forward neural network with parameters (i.e., weights and intercepts) β_v. The number of neurons of the input layer is K, equal to the length of the context vectors α_v. Essentially, we have replaced the per-term embedding vectors ρ_v with a per-term neural network β_v. We refer to the per-term neural networks as word networks. The word2net objective is the sum of the log conditionals,

L_word2net(β, α) = ∑_{v=1}^{V} [ ∑_{n: w_nv=1} log f(Σ_n; β_v) + ∑_{n: w_nv=0} log(1 − f(Σ_n; β_v)) ],   (4)

where we choose the function f(·; β_v) to be a three-layer neural network,2

h^(1)_nv = tanh(Σ_n^⊤ β^(1)_v),   h^(2)_nv = tanh((h^(1)_nv)^⊤ β^(2)_v),   f(Σ_n; β_v) = σ((h^(2)_nv)^⊤ β^(3)_v).   (5)

Replacing vectors with neural networks has several implications. First, the bank of binary classifiers has additional model capacity to capture nonlinear relationships between the context and the co-occurrence probabilities. Specifically, each layer consecutively transforms the context to a different representation until the weight matrix at the last layer can linearly separate the real occurrences of the target word from the negative examples. Second, for a fixed dimensionality K, the resulting model has more parameters.3 This increases the model capacity, but it also increases the risk of overfitting. Indeed, we found that without extra regularization, the neural networks may easily overfit to the training data. We regularize the networks via either weight decay or parameter sharing (see below). In the empirical study of Section 3 we show that word2net fits text data better than its shallow counterparts and that it captures semantic similarities. Even for infrequent words, the learned semantic representations are meaningful. Third, we can exploit the hierarchical structure of the neural network representations via parameter sharing. Specifically, we can share the parameters of a specific layer of the networks of different words. This allows us to explicitly account for pos tags in our model (see below).
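A corresponding sketch of the per-term word network of Eqs. 3-5. Sizes follow the K = 20, H1 = H2 = 10 configuration used later in the experiments; intercepts are omitted to mirror the form of Eq. 5, and the initialization is illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

K, H1, H2 = 20, 10, 10
rng = np.random.default_rng(1)

def init_word_network():
    """Per-term parameters beta_v = (beta1, beta2, beta3), one set per vocabulary word."""
    return (0.1 * rng.standard_normal((K, H1)),
            0.1 * rng.standard_normal((H1, H2)),
            0.1 * rng.standard_normal(H2))

def word_network_prob(Sigma_n, beta_v):
    """Eq. 5: two tanh layers followed by a sigmoid output, p(w_nv = 1 | c_n) = f(Sigma_n; beta_v)."""
    beta1, beta2, beta3 = beta_v
    h1 = np.tanh(Sigma_n @ beta1)
    h2 = np.tanh(h1 @ beta2)
    return sigmoid(h2 @ beta3)

def log_conditional(Sigma_n, beta_v, observed):
    """One summand of the word2net objective in Eq. 4."""
    p = word_network_prob(Sigma_n, beta_v)
    return np.log(p) if observed else np.log(1.0 - p)

Sigma_n = 0.1 * rng.standard_normal(K)
beta_v = init_word_network()
print(word_network_prob(Sigma_n, beta_v), log_conditional(Sigma_n, beta_v, observed=True))
```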
Regularization through parameter sharing enables the use of pos tags. One way to regularize word2net is through parameter sharing. For parameter sharing, each word is assigned to one of T groups. Importantly, different occurrences of a term may be associated to different groups. We share specific layers of the word networks among words in the same group. In this paper, all neural network representations have 3 layers. We use the index ℓ ∈ {1, 2, 3} to denote the layer at which we apply the parameter sharing. Then, for each occurrence of term v in group t we set β^(ℓ)_v = β^(ℓ)_t. Consider now two extreme cases. First, for T = 1 group, we have a strong form of regularization by forcing all word networks to share the parameters of layer ℓ. The number of parameters for layer ℓ has been divided by the vocabulary size, which implies a reduction in model complexity that might help prevent overfitting. This parameter sharing structure does not require side information and hence can be applied to any text corpus. In the second extreme case, each word is in its own group and T = V. This set-up recovers the model of Eqs. 4 and 5, which does not have parameter sharing. When we have access to a corpus annotated with pos tags, parameter sharing lets us use the pos information to improve the capability of word2net by capturing the semantic structure of the data. Andreas & Klein (2014) have shown that word embeddings do not necessarily encode much syntactic information, and it is still unclear how to use syntactic information to learn better word embeddings. The main issue is that many words can appear with different tags; for example, fish can be both a noun and refer to the animal or a verb and refer to the activity of catching the animal. On the one hand, both meanings are related. On the other hand, they may have differing profiles of which contexts they appear in. Ideally, embedding models should be able to capture the difference. However, the simple approach of considering fish/noun and fish/verb as separate terms fails because there are few occurrences of each individual term/tag pair. (We show that empirically in Section 3.)

2 Three layers performed well in our experiments, allowing for parameter sharing to include pos tags. 3 For fairness, in Section 3 we also compare to shallow models with the same number of parameters.

Exploiting the hierarchical nature of the network representations of word2net, we incorporate pos information through parameter sharing as follows. Assume that for location n in the text we have a one-hot vector s_n ∈ {0, 1}^T indicating the pos tag. To model the observation at position n, we use a neural network specific to that term/tag combination,

p(w_nv = 1, s_nt = 1 | c_n) = f(Σ_n; β^(−ℓ)_v, β^(ℓ)_t).   (6)

That is, the neural network parameters are combined to form a neural network in which layer ℓ has parameters β^(ℓ)_t and the other layers have parameters β^(−ℓ)_v (the parameters of all layers except ℓ). Thus, we leverage the information about the pos tag t by replacing β^(ℓ)_v with β^(ℓ)_t in layer ℓ, resulting in pos parameter sharing at that layer. If the same term v appears at a different position n′ with a different pos tag t′, at location n′ we replace the parameters β^(ℓ)_v of layer ℓ with β^(ℓ)_{t′}. Figure 1b illustrates pos parameter sharing at ℓ = 1. Even though now we have a function f(·) for each term/tag pair, the number of parameters does not scale with the product V·T; indeed the number of parameters of the network with pos information is smaller than the number of parameters of the network without side information (Eq. 5). The reason is that the number of parameters necessary to describe one of the layers has been reduced from V to T due to parameter sharing (the other layers remain unchanged). Finally, note that we have some flexibility in choosing which layer is tag-specific and which layers are word-specific. We explore different combinations in Section 3, where we show that pos information improves the performance of word2net. The parameter sharing approach extends to side information beyond pos tags, as long as the words can be divided into groups, but we focus on parameter sharing across all words (T = 1) or across pos tags.
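A sketch of the layer swap in Eq. 6. The parameter banks beta_word and beta_tag below are hypothetical containers for per-word and per-tag layer parameters; only the shared layer reads from the tag bank, the other layers stay word-specific.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: V words, T pos tags, network sizes as in the experiments.
K, H1, H2, V, T = 20, 10, 10, 15000, 12
rng = np.random.default_rng(2)
shapes = [(K, H1), (H1, H2), (H2,)]
beta_word = [0.1 * rng.standard_normal((V,) + s) for s in shapes]  # beta_word[l][v]
beta_tag = [0.1 * rng.standard_normal((T,) + s) for s in shapes]   # beta_tag[l][t]

def shared_word_network_prob(Sigma_n, v, t, shared_layer):
    """Eq. 6: layer `shared_layer` (1, 2, or 3) takes the tag-specific parameters of tag t,
    the remaining layers keep the word-specific parameters of term v."""
    layers = [beta_tag[l][t] if (l + 1) == shared_layer else beta_word[l][v]
              for l in range(3)]
    h1 = np.tanh(Sigma_n @ layers[0])
    h2 = np.tanh(h1 @ layers[1])
    return sigmoid(h2 @ layers[2])

Sigma_n = 0.1 * rng.standard_normal(K)
print(shared_word_network_prob(Sigma_n, v=3, t=5, shared_layer=1))
```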
Semantic similarity of word networks. In standard word embeddings, the default choice to compute semantic similarities between words is by cosine distances between the word vectors. Since word2net replaces the word vectors with word networks, we can no longer apply this default choice. We next describe the procedure that we use to compute semantic similarities between word networks. After fitting word2net, each word is represented by a neural network. Given that these networks parameterize functions, we design a metric that accounts for the fact that two functions are similar if they map similar inputs to similar outputs. So the intuition behind our procedure is as follows: we consider a set of K-dimensional inputs, we evaluate the output of each neural network on this set of inputs, and then we compare the outputs across networks. For the inputs, we choose the V context vectors, which we stack together into a matrix α ∈ R^{V×K}. We evaluate each network f(·) row-wise on α (i.e., feeding each α_v as a K-dimensional input to obtain a scalar output), obtaining a V-dimensional summary of where the network f(·) maps the inputs. Finally, we use the cosine distance of the outputs to compare the outputs across networks. In summary, we obtain the similarity of two words w and v as

dist(w, v) = f(α; β_w)^⊤ f(α; β_v) / ( ||f(α; β_w)||_2 ||f(α; β_v)||_2 ).   (7)

If we are using parameter sharing, we can also compare pos-tagged words; e.g., we may ask how similar is fish/noun to fish/verb. The two combinations will have different representations under the word2net method trained with pos-tag sharing. Assuming that layer ℓ is the shared layer, we compute the semantic similarity between the word/tag pair [w, t] and the pair [v, s] as

dist([w, t], [v, s]) = f(α; β^(−ℓ)_w, β^(ℓ)_t)^⊤ f(α; β^(−ℓ)_v, β^(ℓ)_s) / ( ||f(α; β^(−ℓ)_w, β^(ℓ)_t)||_2 ||f(α; β^(−ℓ)_v, β^(ℓ)_s)||_2 ).   (8)

3 Empirical results

In this section we study the performance of word2net on two datasets, Wikipedia articles and Senate speeches. We show that word2net fits held-out data better than existing models and that the learned network representations capture semantic similarities. Our results also show that word2net is superior at incorporating syntactic information into the model, which improves both the predictions and the quality of the word representations.

Data. We use word2net to study two data sets, both with and without pos tags:
Wikipedia: The text8 corpus is a collection of Wikipedia articles, containing 17M words. We form a vocabulary with the 15K most common terms, replacing less frequent terms with the unknown token. We annotate text8 using the nltk pos tagger and the universal tagset.4 Table 7 in Appendix C shows a description of the tagset. We also form a tagged dataset in which each term/tag combination has a unique token, resulting in a vocabulary of 49K tagged terms.
Senate speeches: These are the speeches given in the U.S. Senate in the years 1916-2009. The data is a transcript of spoken language and contains 24M words. Similarly as above, we form a vocabulary of 15K terms. We annotate the text using the Stanford CoreNLP pos tagger (Manning et al., 2014), and we map the tags to the universal tagset. We form a tagged dataset with 38K tagged terms.
Table 1 summarizes the information about both corpora. We split each dataset into a training, a validation, and a test set, which respectively contain 90%, 5%, and 5% of the words. Additional details on preprocessing are in Appendix C.

Methods. We compare word2net to its shallow counterpart, the cbow model (Mikolov et al., 2013b), which is equivalent to Bernoulli embeddings (b-emb)5 (Rudolph et al., 2016). We also compare with the skip-gram model6 (Mikolov et al., 2013b). We run b-emb/cbow and skip-gram on the data and also on the augmented data of pos-tagged terms. In detail, the methods we compare are: b-emb/cbow: Learns vector representations for each word (or tagged word) by optimizing Eq. 2. Skip-gram: Learns vector representations for each word (or tagged word) by optimizing Eq. 12. Word2net: Learns a neural network representation for each word by optimizing Eq. 4. We study the following parameter sharing schemes: (1) no parameter sharing; (2) layer ℓ shared between all networks; and (3) layer ℓ shared between terms with the same part-of-speech (pos) tag.
For word2net, we experiment with the context dimensions K ∈ {20, 100}. The context dimension is also the dimension of the input layer. For K = 20, we use H1 = 10 hidden units in the first hidden layer of each word network and H2 = 10 hidden units in the second layer. For K = 100, we use H1 = H2 = 20 hidden units. Without parameter sharing, the number of parameters per word is K + K·H1 + H1·H2 + H2. The shallow models have 2K parameters per term (the entries of the context and word vectors). Since we want to compare models both in terms of context dimension K and in terms of total parameters, we fit the methods with K ∈ {20, 165, 100, 1260}. We experiment with context sizes |c_n| ∈ {2, 4, 8} and we train all methods using stochastic gradient descent (sgd) (Robbins & Monro, 1951) with |S_n| = 10 negative samples on the Wikipedia data and with |S_n| = 20 negative samples on the Senate speeches. We use l2 regularization with standard deviation 10 for the word and context vectors, as well as weight decay for the neural networks. We use Adam (Kingma & Ba, 2015) with Tensorflow's default settings (Abadi et al., 2016) to train all methods for up to 30000 iterations, using a minibatch size of 4069 or 1024. We assess convergence by monitoring the loss on a held-out validation set every 50 iterations, and we stop training when the average validation loss starts increasing. We initialize and freeze the context vectors of the word2net methods with the context vectors from a pretrained Bernoulli embedding with the same context dimension K. Network parameters are initialized according to standard initialization schemes of feed-forward neural networks (Glorot & Bengio, 2010), i.e., the weights are initialized from a uniform distribution with bounds ±√6 / √(Hin + Hout).

4 See http://nltk.org. 5 See Appendix B for the detailed relationship between b-emb and cbow with negative sampling. 6 The skip-gram objective is related to cbow/b-emb through Jensen's inequality (see Appendix B).

Table 2: Word2net outperforms existing word embedding models (skip-gram and b-emb/cbow) in terms of test log-likelihood on the Wikipedia data, both with and without pos tags. We compare models with the same context dimension K and the same total number of parameters p/V for different context sizes (cs). (Results on more configurations are in Appendix A.) For word2net, we study different parameter sharing schemes; each row indicates which layer is shared and across which group of words (all words or pos tags). Parameter sharing improves the performance of word2net, especially with pos tags.

method, vocabulary | K | p/V | cs 2 | cs 4 | cs 8
Mikolov et al. (2013b):
skip-gram, words | 20 | 40 | -1.061 | -1.062 | -1.071
skip-gram, tagged words | 20 | 240 | -2.994 | -3.042 | -3.042
Mikolov et al. (2013b); Rudolph et al. (2016):
b-emb/cbow, words | 20 | 40 | -1.023 | -0.976 | -0.941
b-emb/cbow, words | 165 | 330 | -1.432 | -1.388 | -1.381
b-emb/cbow, tagged words | 20 | 240 | -1.411 | -1.437 | -1.461
This work:
word2net (no sharing), words | 20 | 330 | -0.940 | -0.912 | -0.937
word2net (layer 1 shared, all words), words | 20 | 120 | -1.040 | -1.003 | -0.964
word2net (layer 2 shared, all words), words | 20 | 230 | -1.191 | -1.141 | -1.111
word2net (layer 3 shared, all words), words | 20 | 320 | -0.863 | -0.881 | -0.890
word2net (layer 1 shared, pos), words | 20 | 120 | -0.918 | -0.914 | -0.871
word2net (layer 2 shared, pos), words | 20 | 230 | -0.844 | -0.801 | -0.793
word2net (layer 3 shared, pos), words | 20 | 320 | -0.840 | -0.822 | -0.862
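As a quick sanity check of the p/V column, the per-word parameter counts follow directly from the formula above; a minimal worked example, assuming the K = 20, H1 = H2 = 10 configuration and no intercepts as in Eq. 5.

```python
# Per-word parameter counts used to compare model sizes.
K, H1, H2 = 20, 10, 10

context = K          # context vector alpha_v
layer1 = K * H1      # beta_v^(1): 200 weights
layer2 = H1 * H2     # beta_v^(2): 100 weights
layer3 = H2          # beta_v^(3): 10 weights

no_sharing = context + layer1 + layer2 + layer3
print(no_sharing)            # 330, matching p/V for word2net without sharing
print(no_sharing - layer2)   # 230, when layer 2 is shared (its weights are amortized over the vocabulary)
print(no_sharing - layer3)   # 320, when layer 3 is shared
print(2 * K)                 # 40, a shallow model with the same context dimension K
```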
Quantitative results: Word2net has better predictive performance. We compute the predictive log-likelihood of the words in the test set, log p(w_nv | c_n). For skip-gram, which was trained to predict the context words from the target, we average the context vectors α_v for a fair comparison.7 Table 2 shows the results for the Wikipedia dataset. We explore different model sizes: with the same number of parameters as word2net, and with the same dimensionality K of the context vectors. For word2net, we explore different parameter sharing approaches. Table 5 in Appendix A shows the results for other model sizes (including K = 100). In both tables, word2net without parameter sharing performs at least as well as the shallow models. Importantly, the performance of word2net improves with parameter sharing, and it outperforms the other methods. Tables 2 and 5 also show that b-emb/cbow and skip-gram perform poorly when we incorporate pos information by considering an augmented vocabulary of tagged words. The reason is that each term becomes less frequent, and these approaches would require more data to capture the co-occurrence patterns of tagged words. In contrast, word2net with pos parameter sharing provides the best predictions across all methods (including other versions of word2net). Finally, Table 6 in Appendix A shows the predictive performance for the U.S. Senate speeches. On this corpus, skip-gram performs better than b-emb/cbow and word2net without parameter sharing; however, word2net with pos sharing also provides the best predictions across all methods.

7 If we do not average, the held-out likelihood of skip-gram becomes worse.

Qualitative results: Word2net captures similarities and leverages syntactic information. Table 3 displays the similarity between word networks (trained on Wikipedia with parameter sharing at layer ℓ = 1), compared to the similarities captured by word embeddings (b-emb/cbow). For each query word, we list the three most similar terms, according to the learned representations. The word vectors are compared using cosine similarity, while the word networks are compared using Eq. 7. The table shows that word2net can capture latent semantics, even for less frequent words such as parrot. Table 4 shows similarities of models trained on the Senate speeches. In particular, the table compares: b-emb/cbow without pos information, b-emb/cbow trained on the augmented vocabulary of tagged words, and word2net with pos parameter sharing at the input layer (ℓ = 1). We use Eq. 8 to compute the similarity across word networks with pos sharing. We can see that word2net is superior at incorporating syntactic information into the learned representations. For example, the most similar networks to the pronoun me are other pronouns such as myself, my, and himself. Word networks are often similar to other word networks with the same pos tag, but we also see some variation. One such example is in Figure 1c, which shows that the list of the 10 most similar words to the verb increase contains the adjective half.

4 Discussion

We have presented word2net, a method for learning neural network representations of words. The word networks are used to predict the occurrence of words in small context windows and improve prediction accuracy over existing log-bilinear models. We combine the context vectors additively, but this opens the door for future research directions in which we explore other ways of combining the context information, such as accounting for the order of the context words and their pos tags.
We have also introduced parameter sharing as a way to share statistical strength across groups of words and we have shown empirically that it improves the performance of word2net. Another opportunity for future work is to explore other types of parameter sharing besides pos sharing, such as sharing layers across documents or learning a latent group structure together with the word networks.

A Additional results

For completeness, we show here some additional results that we did not include in the main text due to space constraints. In particular, Table 5 compares the test log-likelihood of word2net with the competing models, namely skip-gram and b-emb/cbow. All methods are trained with negative sampling, as described in the main text. This table shows the results for the Wikipedia dataset, similarly to Table 2, but it includes other model sizes (i.e., another value of K). In this table, word2net with no parameter sharing performs similarly to b-emb/cbow with the same number of parameters, but its performance can be further improved with part-of-speech (pos) parameter sharing. Table 6 shows the test log-likelihood for the U.S. Senate speeches. Here, skip-gram is the best method that does not use pos tags, but it is outperformed by word2net with pos parameter sharing.

Table 6: Comparison of the test log-likelihood across different models on the Senate speeches. We compare models with the same context dimension K and the same total number of parameters p/V for different context sizes (cs). For word2net, we explore different parameter sharing schemes; each row indicates which layer is shared and across which group of words (all words or pos tags).

method, vocabulary | K | p/V | cs 2 | cs 4 | cs 8
Mikolov et al. (2013b):
skip-gram, words | 20 | 40 | -1.052 | -1.080 | -1.061
skip-gram, tagged words | 20 | 240 | -1.175 | -1.199 | -1.227
Mikolov et al. (2013b); Rudolph et al. (2016):
b-emb/cbow, words | 20 | 40 | -1.274 | -1.246 | -1.222
b-emb/cbow, tagged words | 20 | 240 | -1.352 | -1.340 | -1.339
b-emb/cbow, words | 165 | 330 | -1.735 | -1.734 | -1.744
This work:
word2net (no sharing), words | 20 | 330 | -1.406 | -1.555 | -1.401
word2net (layer 1 shared, all words), words | 20 | 120 | -1.276 | -1.256 | -1.243
word2net (layer 2 shared, all words), words | 20 | 230 | -1.462 | -1.435 | -1.413
word2net (layer 1 shared, pos), words | 20 | 120 | -0.873 | -0.860 | -0.850
word2net (layer 2 shared, pos), words | 20 | 230 | -1.057 | -1.034 | -1.015

B Relation between Bernoulli embeddings and word2vec

Word2vec (Mikolov et al., 2013b) is one of the most widely used methods for learning vector representations of words. There are multiple ways to implement word2vec. First, there is a choice of the objective. Second, there are several ways to approximate the objective to obtain a scalable algorithm. In this section, we describe the two objectives, continuous bag-of-words (cbow) and skip-gram, and we focus on negative sampling as the method of choice to achieve scalability. We describe the similarities and differences between Bernoulli embeddings (Rudolph et al., 2016) and these two objectives. In summary, under certain assumptions Bernoulli embeddings are equivalent to cbow with negative sampling, and are related to skip-gram through Jensen's inequality.

b-emb and cbow (negative sampling). First we explain how Bernoulli embeddings and cbow with negative sampling are related. Consider the Bernoulli embedding full objective,

L(ρ, α) = ∑_n [ ∑_{v: w_nv=1} log σ(ρ_v^⊤ Σ_n) + ∑_{v: w_nv=0} log σ(−ρ_v^⊤ Σ_n) ].   (9)

In most cases, the summation over negative examples (w_nv = 0) is computationally expensive to compute. To address that, we form an unbiased estimate of that term by subsampling a random set S_n of terms and rescaling by (V − 1)/|S_n|,
L̂(ρ, α) = ∑_n [ ∑_{v: w_nv=1} log σ(ρ_v^⊤ Σ_n) + γ (V − 1)/|S_n| ∑_{v ∈ S_n} log σ(−ρ_v^⊤ Σ_n) ].   (10)

Here, we have introduced an auxiliary coefficient γ. The estimate is unbiased only for γ = 1; however, Rudolph et al. (2016) showed that downweighting the contribution of the zeros works better in practice.8 In particular, if we set the downweight factor as γ = |S_n| / (V − 1), we recover the objective of cbow with negative sampling,

L̂(ρ, α) = ∑_n [ ∑_{v: w_nv=1} log σ(ρ_v^⊤ Σ_n) + ∑_{v ∈ S_n} log σ(−ρ_v^⊤ Σ_n) ] ≡ L_CBOW(ρ, α).   (11)

There are two more subtle theoretical differences between the two. The first difference is that Bernoulli embeddings include a regularization term for the embedding vectors, whereas cbow does not. The second difference is that, in Bernoulli embeddings, we need to draw a new set of negative samples S_n at each iteration of the gradient ascent algorithm (because we form a noisy estimator of the downweighted objective). In contrast, in cbow with negative sampling, the samples S_n are drawn once in advance and then held fixed. In practice, for large datasets, we have not observed significant differences in the performance of both approaches. For simplicity, we draw the negative samples S_n only once.

cbow (negative sampling) and skip-gram (negative sampling). Now we show how cbow and skip-gram are related (considering negative sampling for both). Recall that the objective of cbow is to predict a target word from its context, while the skip-gram objective is to predict the context from the target word. Negative sampling breaks the multi-class constraint that the sum of the probability of each word must equal one, and instead models probabilities of the individual entries of the one-hot vectors representing the words. When we apply negative sampling, the cbow objective becomes Eq. 11. The skip-gram objective is given by

L_skip-gram(ρ, α) = ∑_{(n,v): w_nv=1} [ ∑_{v′ ∈ c_n} log σ(ρ_v^⊤ α_{v′}) + ∑_{v′ ∈ S_n} log σ(−ρ_v^⊤ α_{v′}) ].   (12)

8 This is consistent with the approaches in recommender systems (Hu et al., 2008).

That is, for each target term w_nv, the cbow objective has one term while the skip-gram objective has |c_n| terms. Consider a term (n, v) for which w_nv = 1. We take the corresponding cbow term from Eq. 11 and we apply Jensen's inequality to obtain the corresponding skip-gram term in Eq. 12:

log σ(ρ_v^⊤ Σ_n) = log σ(ρ_v^⊤ ∑_{v′ ∈ c_n} α_{v′}) ≥ ∑_{v′ ∈ c_n} log σ(ρ_v^⊤ α_{v′}).   (13)

Here, we have made use of the concavity of the log σ(·) function. In general, this is a consequence of the convexity of the log-normalizer of the (Bernoulli) exponential family distribution. This holds for the "positive" examples w_nv. As for the negative examples (w_nv = 0), the comparison is not as straightforward, because the choice of terms in Eqs. 11 and 12 is not exactly the same. In particular, Eq. 11 holds v′ fixed and draws v from the noise distribution, while Eq. 12 holds v fixed and draws v′ from the noise distribution.
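A sketch of the subsampled objective in Eqs. 10-11 for a single position; vocabulary size, dimensions, and the vectors themselves are illustrative. With the default choice gamma = |S_n|/(V - 1) the rescaling cancels and the expression reduces to cbow with negative sampling.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V, K = 15000, 20
rng = np.random.default_rng(3)
rho = 0.01 * rng.standard_normal((V, K))    # hypothetical fitted embedding vectors
alpha = 0.01 * rng.standard_normal((V, K))  # hypothetical fitted context vectors

def negative_sampling_objective(target_v, context_ids, num_neg=10, gamma=None):
    """Subsampled estimate of the Bernoulli-embedding objective at one position (Eqs. 10-11)."""
    Sigma_n = alpha[context_ids].sum(axis=0)
    negatives = rng.choice(np.setdiff1d(np.arange(V), [target_v]), size=num_neg, replace=False)
    if gamma is None:
        gamma = num_neg / (V - 1)  # the cbow choice: downweight the zeros
    positive_term = np.log(sigmoid(rho[target_v] @ Sigma_n))
    negative_term = gamma * (V - 1) / num_neg * np.sum(np.log(sigmoid(-rho[negatives] @ Sigma_n)))
    return positive_term + negative_term

print(negative_sampling_objective(target_v=3, context_ids=[10, 42, 7, 99]))
```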
C Data preprocessing

In this paper we study Wikipedia articles (text8) and a corpus of U.S. Senate speeches. On both corpora, we restrict the vocabulary to the 15K most frequent words, replacing all the remaining words with a designated token. We annotate the data using the nltk tagger9 or the Stanford CoreNLP tagger (Manning et al., 2014), using the universal tagset shown in Table 7. The Senate speeches contain a lot of boilerplate repetitive language; for this reason, we tokenize around 350 frequent phrases, such as senator from alabama or united states, considering the entire phrase an individual vocabulary term. We apply the pos tagger before this tokenization step, and then we assign the noun tag to all phrases. We split the data into training (90%), testing (5%), and validation (5%) sets. We use the validation set to assess convergence, as explained in the main text. We subsample the frequent words following Mikolov et al. (2013b); i.e., each word w_n in the training set is discarded with probability

Prob(w_n is discarded) = 1 − √( t / frequency(w_n) ),   (14)

where frequency(w_n) denotes the frequency of word w_n, and t = 10^{−5}. For each method, we use |S_n| = 10 negative samples on the Wikipedia articles and |S_n| = 20 negative samples on the Senate speeches. Following Mikolov et al. (2013b), we draw the negative samples from the unigram distribution raised to the power of 0.75.

9 See http://nltk.org.
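A sketch of the two preprocessing choices described here: the frequent-word subsampling of Eq. 14 and the unigram-to-the-0.75 negative-sampling distribution. The toy counts and token sequence are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

def keep_mask(word_counts, word_ids, t=1e-5):
    """Eq. 14: discard word w with probability 1 - sqrt(t / frequency(w))."""
    freqs = word_counts / word_counts.sum()               # empirical word frequencies
    discard_prob = np.clip(1.0 - np.sqrt(t / freqs[word_ids]), 0.0, 1.0)
    return rng.random(len(word_ids)) >= discard_prob      # True = keep this token

def negative_sampling_distribution(word_counts, power=0.75):
    """Unigram distribution raised to the 0.75 power, used to draw negative samples."""
    weights = word_counts.astype(float) ** power
    return weights / weights.sum()

# Toy corpus with a 5-word vocabulary.
counts = np.array([5000, 1200, 300, 50, 10])
tokens = np.array([0, 0, 1, 2, 3, 4, 0, 1])
print(keep_mask(counts, tokens))
print(negative_sampling_distribution(counts))
```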
1. What is the main contribution of the paper, and how does it extend the previous work on SGNS? 2. What is the purpose of incorporating POS tags in the proposed method, and how does it improve the performance? 3. Why does the reviewer have reservations about the execution of the work, particularly regarding the choice of experiments? 4. How does the use of POS tags in the proposed method compare to other approaches that capture syntactic information? 5. What is the significance of the paper's contributions, and how do they impact the field of natural language processing?
Review
Review The paper extends SGNS as follows. In SGNS, each word x is associated with vectors a_x and r_x. Given a set of context words C, the model calculates the probability that the target word is x by a dot product between a_x and the average of {r_c: c in C}. The paper generalizes this computation to an arbitrary network: now each word x is associated with some network N_x whose input is a set of context words C and the output is the aforementioned probability. This is essentially an architectural change: from a bag-of-words model to a (3-layer) feedforward model. Another contribution of the paper is a new form of regularization by tying a subset of layers between different N_x. In particular, the paper considers incorporating POS tags by tying within each POS group. For instance, the parameters of the first layer are shared across all noun words. (This assumes that POS tags are given.) While this is a natural extension to word2vec, the reviewer has some reservations about the execution of this work. Word embeddings are useful in large part because they can be used to initialize the parameters of a network. None of the chosen experiments shows this. Improvement in the log likelihood over SGNS is somewhat obvious because there are more parameters. The similarity between "words" now requires a selection of context vectors (Eq. 7), which is awkward/arbitrary. The use of POS tags is not very compelling (though harmless). It's not necessary: contrary to the claim in the paper, word embeddings capture syntactic information if the context width is small and/or context information is provided. A more sensible experiment would be to plug the entire pretrained word nets into an external model and see how much they help. EDIT: It is usually the case that, even when the number of parameters is the same, extra nonlinearity results in better data fitting (e.g., Berg-Kirkpatrick et al., 2010), so the improvement is still not unexpected. All of this is closely addressed in the following prior work: Learning to Embed Words in Context for Syntactic Tasks (Tu et al., 2017). Quality: Natural but questionable extension, see above. Clarity: Clear. Originality: Acceptable, but a very similar idea of embedding contexts is presented in Tu et al. (2017), which is not cited. Significance: Minor/moderate, see above.
Title Word2net: Deep Representations of Language Abstract Word embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting heldout words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. 1 Introduction Word embeddings are an important statistical tool for analyzing language, processing large datasets of text to learn meaningful vector representations of the vocabulary (Bengio et al., 2003; 2006; Mikolov et al., 2013b; Pennington et al., 2014). Word embeddings rely on the distributional hypothesis, that words used in the same contexts tend to have similar meanings (Harris, 1954). More informally (but equally accurate), a word is defined by the company it keeps (Firth, 1957). While there are many extensions and variants of embeddings, most rely on a log-bilinear model. This model posits that each term is associated with an embedding vector and a context vector. Given a corpus of text, these vectors are fit to maximize an objective function that involves the inner product of each observed word’s embedding with the sum of the context vectors of its surrounding words. With useful ways to handle large vocabularies, such as negative sampling (Mikolov et al., 2013a) or Bernoulli embeddings (Rudolph et al., 2016), the word embedding objective resembles a bank of coupled linear binary classifiers. Here we introduce word2net, a word embedding method that relaxes this linear assumption. Word2net still posits a context vector for each term, but it replaces each word vector with a term-specific neural network. This word network takes in the sum of the surrounding context vectors and outputs the occurrence probability of the word. The word2net objective involves the output of each word’s network evaluated with its surrounding words as input. The word2net objective resembles a bank of coupled non-linear binary classifiers. How does word2net build on classical word embeddings? The main difference is that the word networks can capture non-linear interaction effects between co-occurring words; this leads to a better model of language. Furthermore, the word networks enable us to share per-term parameters based on word-level meta-data, such as syntactic information. Here we study word2net models that share parameters based on part-of-speech (pos) tags, where the parameters of certain layers of each network are shared by all terms tagged with the same pos tag. Figure 1a illustrates the intuition behind word2net. Consider the term increase. 
The top of the figure shows one observation of the word, i.e., one of the places in which it appears in the data. (This excerpt is from U.S. Senate speeches.) From this observation, the word2net objective contains the probability of a binary variable wn;increase conditional on its context (i.e., the sum of the context vectors of the surrounding words). This variable is whether increase occurred at position n. Under review as a conference paper at ICLR 2018 neural network that outputs the probability of that word (Figure 1a). If we are given the tags of the words, we may use parameter sharing instead in order to form a per-word per-tag neural network (Figure 1b). Finally, we also propose a method for computing similarities between the neural network representations of the words and demonstrate that they capture semantic (and even syntactic) similarities (Figure 1c). In our empirical study, we show that parameter sharing in word2net performs better than applying word2vec or standard Benoulli embeddings on the augmented vocabulary of word/tag pairs. We also demonstrate that deep Bernoulli embeddings provide better predictive log-likelihood when compared to word2vec or standard Bernoulli embeddings. R fjrr: moved this to the introduction, needs rewriting here Word embedding models learn semantic features of words by exploiting the co-occurrence patterns of words in a collection of documents. There are many extensions and variants of word embeddings (Bengio et al., 2003; 2006; Mnih & Hinton, 2007; Mikolov et al., 2013a;b;c; Pennington et al., 2014; Mnih & Teh, 2012; Mnih & Kavukcuoglu, 2013; Levy & Goldberg, 2014; Vilnis & McCallum, 2015; Barkan, 2016; Bamler & Mandt, 2017). Most of these approaches rely on a log-bilinear model, in which the emission probabilities depend on a dot product of the word embedding vectors and the context vectors, as opposed to the deep neural network architectures proposed by Bengio et al. (2003; 2006) and Mnih & Hinton (2007). Our model di ers from these deep neural network architectures in two ways. First, we have a separate network for each vocabulary word, instead of a single network that outputs the logits for all words in the vocabulary. Our perspective of a bank of parallel binary classification problems allows for faster optimization of the networks. Second, our architecture enables incorporating side information (such as part of speech tags) in specific layers of the network. Recall that word embeddings (without any further structure) tend to capture semantic properties of the words, and the syntactic properties they encode are typically redundant (Andreas & Klein, 2014), so there is room for improvement with a model that allows for additional syntactic structure. We adopt the perspective of exponential family embeddings (Rudolph et al., 2016), which extend word embeddings to datasets beyond text. There are also some variants and extensions of exponential family embeddings (Rudolph & Blei, 2017; Rudolph et al., 2017; Liu & Blei, 2017; Liu et al., 2017), but they all have in common an exponential family likelihood whose natural parameter is determined 2 The idea behind word2net is hat the conditio al probability ofwn;increase is the output of a multi-layer network that takes the context as input. Each layer of the network transforms the context into a new hidden representation, reweighting t e latent feature according to their relevance for predicting the occurrence of increa e. 
Note that not illustrated are th 0-vari bles, i.e., the negative samples, which correspond to words that are not at position n. In word2net, their probabilities also come from their corresponding word networks. Now suppose we have tagged the corpus with pos. Figure 1b shows how to incorporate this syntactic information into word2net. The network is specific to increase as a noun (as opposed to a verb). The paramete s of the fir layer ( range) ar hared among all nouns i the coll ction; the other layers (blue) are specific to increase. Thus, the networks for increase/nou and increas /verb differ in how the first layer promotes the latent aspects of the context, i.e., according to which context features are more relevant for each pos tag. This model further lets us consider these two pos tags separately. Figure 1c shows the most similar words to each sense of increase; the method correctly picks out tagged words related to the ver and rela ed to the noun. Below, we develop th details of word2net a d study its performance with two datasets, a coll ction of Wikipedia articles and a corpus of U.S. Senate peech s. We found hat word2net outperforms popular embedding methods on predicting held-out words, and that sharing parameters based on pos further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. Related work. Word2net builds on word embeddings methods. Though originally designed as deep neural network rchitecture (Bengio et al., 2003; 2006; Mnih & Hinton, 2007), most applications of word embeddings now rely on log-bilinear models (Mikolov et al., 2013a;b;c; Pennington et al., 2014; Mnih & Teh, 2012; Mnih & Kavukcuoglu, 2013; Levy & Goldberg, 2014; Vilnis & McCallum, 2015; Barkan, 2016; Bamler &Mandt, 2017). The key innovation behind word2net is that it represents words with functions, instead of vectors (Rumelhart et al., 1986) or distributions (Vilnis & McCallum, 2015). Word2net keeps context vectors, but it replaces the embedding vector with a neural network. Previous work has also used deep neural networks for word embeddings (Bengio et al., 2003; 2006; Mnih & Hinton, 2007); these methods use a single network that outputs the unnormalized log probabilities for all words in the vocabulary. Word2net takes a different strategy: it has a separate network for each vocabulary word. Unlike the previous methods, word2net’s approach helps maintain the objective as a bank of binary classifiers, which allows for faster optimization of the networks. To develop word2net, we adopt the perspective of exponential family embeddings (Rudolph et al., 2016), which extend word embeddings to data beyond text. There are several extensions to exponential family embeddings (Rudolph & Blei, 2017; Rudolph et al., 2017; Liu & Blei, 2017), but they all have in common an exponential family likelihood whose natural parameter has a log-bilinear form. Word2net extends this framework to allow for non-linear relationships. Here we focus on Bernoulli embeddings, which are related to word embeddings with negative sampling, but our approach easily generalizes to other exponential family distributions (e.g., Poisson). Finally, word embeddings can capture semantic properties of the word, but they tend to neglect most of the syntactic information (Andreas & Klein, 2014). Word2net introduces a simple way to leverage the syntactic information to improve the quality of the word representations. 
2 Word2Net In this section we develop word2net as a novel extension of Bernoulli embeddings (Rudolph et al., 2016). Bernoulli embeddings are a conditional model of text, closely related to word2vec. Specifically, they are related to continuous bag-of-words (cbow) with negative sampling.1 Wefirst reviewBernoulli embeddings and then we present word2net as a deep Bernoulli embedding model. 2.1 Background: Bernoulli embeddings Exponential family embeddings learn an embedding vector v 2 RK and a context vector ˛v 2 RK for each unique term in the vocabulary, v D 1; : : : ; V . These vectors encode the semantic properties of words, and they are used to parameterize the conditional probability of a word given its context. Specifically, let wn be the V -length one-hot vector indicating the word at location n, such that wnv D 1 for one term (vocabulary word) v, and let cn be the indices of the words in a fixed-sized window centered at location n (i.e., the indices of the context words). Exponential family embeddings parameterize the conditional probability of the target word given its context via a linear combination of the embedding vector and the context vectors, p.wnv j cn/ D Bernoulli . >v ˙n/ ; with ˙n , X v02cn ˛v0 : (1) Here, .x/ D 1 1Ce x is the sigmoid function, and we have introduced the notation ˙n for the sum of the context vectors at location n. Note that Eq. 1 does not impose the constraint that the sum over the vocabulary words P v p.wnv D 1 j cn/ must be 1. This significantly alleviates the computational complexity (Mikolov et al., 2013b; Rudolph et al., 2016). This type of exponential family embedding is called Bernoulli embedding, named for its conditional distribution. In Bernoulli embeddings, our goal is to learn the embedding vectors v and the context vectors ˛v from the text by maximizing the log probability of words given their contexts. The data contains N pairs .wn; cn/ of words and their contexts, and thus we can form the objective function L. ; ˛/ as the sum of logp.wnv j cn/ for all instances and vocabulary words. The resulting objective can be seen as a bank of V binary classifiers, where V is the vocabulary size. To see that, we make use of Eq. 1 and express the objective L. ; ˛/ as a sum over vocabulary words, L. ; ˛/ D NX nD1 VX vD1 logp.wnv j cn/ D VX vD1 0@ X nW wnvD1 log . >v ˙n/C X nW wnvD0 log . >v ˙n/ 1A : (2) If we hold all the context vectors ˛v fixed, then Eq. 2 is the objective of V independent logistic regressors, each predicting whether a word appears in a given context or it does not. The positive examples are those where word v actually appeared in a given context; the negative examples are those where v did not appear. It is the context vectors that couple the V binary classifiers together. In practice, we need to either downweight the contribution of the zeros in Eq. 2, or subsample the set of negative examples for each n (Rudolph et al., 2016). We follow the latter case here, which leads to negative sampling (Mikolov et al., 2013b). (See the connection in more detail in Appendix B.) 1See Appendix B for more details on the connections. 2.2 Word2Net as a deep Bernoulli embedding model Word2net replaces the linear classifiers in Eq. 2 with non-linear classifiers. In particular, we replace the linear combination >v ˙n with a neural network that is specific to each vocabulary word v, so that p.wnv D 1 j cn/ D f .˙nI ˇv/ ; (3) where f . I ˇv/ W RK ! R is a feed-forward neural network with parameters (i.e., weights and intercepts) ˇv . 
The number of neurons of the input layer is K, equal to the length of the context vectors ˛v . Essentially, we have replaced the per-term embedding vectors v with a per-term neural network ˇv . We refer to the per-term neural networks as word networks. The word2net objective is the sum of the log conditionals, Lword2net. ; ˛/ D VX vD1 0@ X nW wnvD1 log f .˙nI ˇv/ C X nW wnvD0 log f .˙nI ˇv/ 1A ; (4) where we choose the function f . I ˇv/ to be a three-layer neural network,2 h.1/nv D tanh ˙>n ˇ .1/ v ; h.2/nv D tanh .h.1/nv / >ˇ.2/v ; f .˙nI ˇv/ D .h .2/ nv / >ˇ.3/v : (5) Replacing vectors with neural networks has several implications. First, the bank of binary classifiers has additional model capacity to capture nonlinear relationships between the context and the cooccurrence probabilities. Specifically, each layer consecutively transforms the context to a different representation until the weight matrix at the last layer can linearly separate the real occurrences of the target word from the negative examples. Second, for a fixed dimensionality K, the resulting model has more parameters.3 This increases the model capacity, but it also increases the risk of overfitting. Indeed, we found that without extra regularization, the neural networks may easily overfit to the training data. We regularize the networks via either weight decay or parameter sharing (see below). In the empirical study of Section 3 we show that word2net fits text data better than its shallow counterparts and that it captures semantic similarities. Even for infrequent words, the learned semantic representations are meaningful. Third, we can exploit the hierarchical structure of the neural network representations via parameter sharing. Specifically, we can share the parameters of a specific layer of the networks of different words. This allows us to explicitly account for pos tags in our model (see below). Regularization through parameter sharing enables the use of pos tags. One way to regularize word2net is through parameter sharing. For parameter sharing, each word is assigned to one of T groups. Importantly, different occurrences of a term may be associated to different groups. We share specific layers of the word networks among words in the same group. In this paper, all neural network representations have 3 layers. We use index ` 2 f1; 2; 3g to denote the layer at which we apply the parameter sharing. Then, for each occurrence of term v in group t we set ˇ.`/v D ˇ.`/t . Consider now two extreme cases. First, for T D 1 group, we have a strong form of regularization by forcing all word networks to share the parameters of layer `. The number of parameters for layer ` has been divided by the vocabulary size, which implies a reduction in model complexity that might help prevent overfitting. This parameter sharing structure does not require side information and hence can be applied to any text corpus. In the second extreme case, each word is in its own group and T D V . This set-up recovers the model of Eqs. 4 and 5, which does not have parameter sharing. When we have access to a corpus annotated with pos tags, parameter sharing lets us use the pos information to improve the capability of word2net by capturing the semantic structure of the data. Andreas & Klein (2014) have shown that word embeddings do not necessarily encode much syntactic information, and it is still unclear how to use syntactic information to learn better word embeddings. 
The main issue is that many words can appear with different tags; for example, fish can be both a noun and refer to the animal or a verb and refer to the activity of catching the animal. On the one hand, both meanings are related. On the other hand, they may have differing profiles of which 2Three layers performed well in our experiments, allowing for parameter sharing to include pos tags. 3For fairness, in Section 3 we also compare to shallow models with the same number of parameters. contexts they appear in. Ideally, embedding models should be able to capture the difference. However, the simple approach of considering fish/noun and fish/verb as separate terms fails because there are few occurrences of each individual term/tag pair. (We show that empirically in Section 3.) Exploiting the hierarchical nature of the network representations of word2net, we incorporate pos information through parameter sharing as follows. Assume that for location n in the text we have a one-hot vector sn 2 f0; 1gT indicating the pos tag. To model the observation at position n, we use a neural network specific to that term/tag combination, p.wnv D 1; snt D 1 j cn/ D f ˙nI ˇ .:`/ v ; ˇ .`/ t : (6) That is, the neural network parameters are combined to form a neural network in which layer ` has parameters ˇ.`/t and the other layers have parameters ˇ .:`/ v . Thus, we leverage the information about the pos tag t by replacing ˇ.`/v with ˇ.`/t in layer `, resulting in pos parameter sharing at that layer. If the same term v appears at a different position n0 with a different pos tag t 0, at location n0 we replace the parameters ˇ.`/v of layer ` with ˇ.`/t 0 . Figure 1b illustrates pos parameter sharing at ` D 1. Even though now we have a function f . / for each term/tag pair, the number of parameters does not scale with the product V T ; indeed the number of parameters of the network with pos information is smaller than the number of parameters of the network without side information (Eq. 5). The reason is that the number of parameters necessary to describe one of the layers has been reduced from V to T due to parameter sharing (the other layers remain unchanged). Finally, note that we have some flexibility in choosing which layer is tag-specific and which layers are word-specific. We explore different combinations in Section 3, where we show that word2net with pos information improves the performance of word2net. The parameter sharing approach extends to side information beyond pos tags, as long as the words can be divided into groups, but we focus on parameter sharing across all words (T D 1) or across pos tags. Semantic similarity of word networks. In standard word embeddings, the default choice to compute semantic similarities between words is by cosine distances between the word vectors. Since word2net replaces the word vectors with word networks, we can no longer apply this default choice. We next describe the procedure that we use to compute semantic similarities between word networks. After fitting word2net, each word is represented by a neural network. Given that these networks parameterize functions, we design a metric that accounts for the fact that two functions are similar if they map similar inputs to similar outputs. So the intuition behind our procedure is as follows: we consider a set of K-dimensional inputs, we evaluate the output of each neural network on this set of inputs, and then we compare the outputs across networks. 
For the inputs, we choose the V context vectors, which we stack together into a matrix ˛ 2 RV K . We evaluate each network f . / row-wise on ˛ (i.e., feeding each ˛v as a K-dimensional input to obtain a scalar output), obtaining a V -dimensional summary of where the network f . / maps the inputs. Finally, we use the cosine distance of the outputs to compare the outputs across networks. In summary, we obtain the similarity of two words w and v as dist .w; v/ D f .˛I ˇw/ >f .˛I ˇv/ jjf .˛I ˇw/jj2 jjf .˛I ˇv/jj2 : (7) If we are using parameter sharing, we can also compare pos-tagged words; e.g., we may ask how similar is fish/noun to fish/verb. The two combinations will have different representations under the word2net method trained with pos-tag sharing. Assuming that layer ` is the shared layer, we compute the semantic similarity between the word/tag pair Œw; t and the pair Œv; s as dist.Œw; t ; Œv; s / D f .˛I ˇ .:`/ w ; ˇ .`/ t / >f .˛I ˇ .:`/ v ; ˇ .`/ s / jjf .˛I ˇ .:`/ w ; ˇ .`/ t /jj2 jjf .˛I ˇ .:`/ v ; ˇ .`/ s /jj2 : (8) 3 Empirical results In this section we study the performance of word2net on two datasets, Wikipedia articles and Senate speeches. We show that word2net fits held-out data better than existing models and that the learned network representations capture semantic similarities. Our results also show that word2net is superior at incorporating syntactic information into the model, which improves both the predictions and the quality of the word representations. Data. We use word2net to study two data sets, both with and without pos tags: Wikipedia: The text8 corpus is a collection of Wikipedia articles, containing 17M words. We form a vocabulary with the 15K most common terms, replacing less frequent terms with the unknown token. We annotate text8 using the nltk pos tagger and the universal tagset.4 Table 7 in Appendix C shows a description of the tagset. We also form a tagged dataset in which each term/tag combination has a unique token, resulting in a vocabulary of 49K tagged terms. Senate speeches: These are the speeches given in the U.S. Senate in the years 1916-2009. The data is a transcript of spoken language and contains 24M words. Similarly as above, we form a vocabulary of 15K terms. We annotate the text using the Stanford CoreNLP pos tagger (Manning et al., 2014), and we map the tags to the universal tagset. We form a tagged dataset with 38K tagged terms. Table 1 summarizes the information about both corpora. We split each dataset into a training, a validation, and a test set, which respectively contain 90%, 5%, and 5% of the words. Additional details on preprocessing are in Appendix C. Methods. We compare word2net to its shallow counterpart, the cbow model (Mikolov et al., 2013b), which is equivalent to Bernoulli embeddings (b-emb)5 (Rudolph et al., 2016). We also compare with the skip-gram model.6 (Mikolov et al., 2013b) We run b-emb/cbow and skip-gram on the data and also on the augmented data of pos-tagged terms. In detail, the methods we compare are: b-emb/cbow: Learns vector representations for each word (or tagged word) by optimizing Eq. 2. Skip-gram: Learns vector representations for each word (or tagged word) by optimizing Eq. 12. Word2net: Learns a neural network representation for each word by optimizing Eq. 4. We study the following parameter sharing schemes: 1. pos pos pos all all all all all all : no parameter sharing. 2. pos pos pos all all all all all all : layer ` shared between all networks. 3. 
s pos all all all all all all : layer ` shared between terms with the same part-of-speech (pos) tag. For word2net, we experiment with the context dimensions K 2 f20; 100g. The context dimension is also the dimension of the input layer. For K D 20, we useH1 D 10 hidden units in the first hidden layer of each word network and H2 D 10 hidden units in the second layer. For K D 100, we use H1 D H2 D 20 hidden units. Without parameter sharing, the number of parameters per word is K C KH1 CH1H2 CH2. The shallow models have 2K parameters per term (the entries of the context nd word vectors). Since we want to compare models both in terms of context dimension K and in terms of total parameters, we fit the methods with K 2 f20; 165; 100; 1260g. We experiment with context sizes jcnj 2 f2; 4; 8g and we train all methods using stochastic gradient descent (sgd) (Robbins & Monro, 1951) with jSnj D 10 negative samples on the Wikipedia data and with jSnj D 20 negative samples on the Senate speeches. We use l2 regularization with standard deviation 10 for the word and context vectors, as well as weight decay for the neural networks. We use Adam (Kingma & Ba, 2015) with Tensorflow’s default settings (Abadi et al., 2016) to train all methods for up to 30000 iterations, using a minibatch size of 4069 or 1024. We assess convergence by monitoring the loss on a held-out validation set every 50 iterations, and we stop training when the average validation loss starts increasing. We initialize and freeze the context vectors of the word2net methods with the context vectors from a pretrained Bernoulli embedding with the same context dimension K. Network parameters are initialized according to standard initialization schemes of 4See http://nltk.org. 5See Appendix B for the detailed relationship between b-emb and cbow with negative sampling. 6The skip-gram objective is related to cbow/b-emb through Jensen’s inequality (see Appendix B). Table 2: Word2net outperforms existing word embedding models (skip-gram and b-emb/cbow) in terms of test log-likelihood on the Wikipedia data, both with and without pos tags. We compare models with the same context dimensionK and the same total number of parameters p=V for different context sizes (cs). (Results on more configurations are in Appendix A.) For word2net, we study different parameter sharing schemes, and the color coding indicates which layer is shared and how, as in Figure 1. Parameter sharing improves the performance of word2net, especially with pos tags. vocabulary K p=V cs 2 cs 4 cs 8 Mikolov et al. (2013b): skip-gram words 20 40 1:061 1:062 1:071 skip-gram tagged words 20 240 2:994 3:042 3:042 Mikolov et al. (2013b); Rudolph et al. (2016): b-emb/cbow words 20 40 1:023 0:976 0:941 b-emb/cbow words 165 330 1:432 1:388 1:381 b-emb/cbow tagged words 20 240 1:411 1:437 1:461 this work: sharing word2net pos pos pos all all all all all all 20 330 0:940 0:912 0:937 word2net pos pos pos all all all 20 120 1:040 1:003 0:964 word2net p s p s p s ll ll ll all all all 20 230 1:191 1:141 1:111 word2net pos pos pos all all all ll ll ll 20 320 0:863 0:881 0:890 word2net pos pos pos all all all all all all 20 120 0:918 0:914 0:871 word2net s s s ll ll ll ll 20 230 0:844 0:801 0:793 word2net pos pos pos all all all all all all 20 320 0:840 0:822 0:862 feed-forward neural networks (Glorot & Bengio, 2010), i.e., the weights are initialized from a uniform distribution with bounds˙ p 6= p Hin CHout. Quantitative results: Word2net has better predictive performance. 
We compute the predictive log-likelihood of the words in the test set, logp.wnv j cn/. For skip-gram, which was trained to predict the context words from the target, we average the context vectors ˛v for a fair comparison.7 Table 2 shows the results for the Wikipedia dataset. We explore different model sizes: with the same number of parameters as word2net, and with the same dimensionality K of the context vectors. For word2net, we explore different parameter sharing approaches. Table 5 in Appendix A shows the results for other model sizes (includingK D 100). In both tables, word2net without parameter sharing performs at least as good as the shallow models. Importantly, the performance of word2net improves with parameters sharing, and it outperforms the other methods. Tables 2 and 5 also show that b-emb/cbow and skip-gram perform poorly when we incorporate pos information by considering an augmented vocabulary of tagged words. The reason is that each term becomes less frequent, and these approaches would require more data to capture the cooccurrence patterns of tagged words. In contrast, word2net with pos parameter sharing provides the best predictions across all methods (including other versions of word2net). Finally, Table 6 in Appendix A shows the predictive performance for the U.S. Senate speeches. On this corpus, skip-gram performs better than b-emb/cbow and word2net without parameter sharing; however, word2net with pos sharing also provides the best predictions across all methods. Qualitative results: Word2net captures similarities and leverages syntactic information. Table 3 displays the similarity between word networks (trained on Wikipedia with parameter sharing at layer ` D 1), compared to the similarities captured by word embeddings (b-emb/cbow). For each query word, we list the three most similar terms, according to the learned representations. The word vectors are compared using cosine similarity, while the word networks are compared using Eq. 7. The table shows that word2net can capture latent semantics, even for less frequent words such as parrot. Table 4 shows similarities of models trained on the Senate speeches. In particular, the table compares: b-emb/cbow without pos information, b-emb/cbow trained on the augmented vocabulary of tagged words, and word2net with pos parameter sharing at the input layer (` D 1). We use Eq. 8 to compute the similarity across word networks with pos sharing. We can see that word2net is superior at incorporating syntactic information into the learned representations. For example, the most similar 7If we do not average, the held-out likelihood of skip-gram becomes worse. networks to the pronoun me are other pronouns such as myself, my, and himself. Word networks are often similar to other word networks with the same pos tag, but we also see some variation. One such example is in Figure 1c, which shows that the list of the 10 most similar words to the verb increase contains the adjective half. 4 Discussion We have presented word2net, a method for learning neural network representations of words. The word networks are used to predict the occurrence of words in small context windows and improve prediction accuracy over existing log-bilinear models. We combine the context vectors additively, but this opens the door for future research directions in which we explore other ways of combining the context information, such as accounting for the order of the context words and their pos tags. 
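For reference, the held-out predictive log-likelihood reported in Tables 2, 5 and 6 can be computed as in the sketch below, which also averages (rather than sums) the context vectors when scoring the skip-gram baseline, as described above. The `prob_target` callable is a placeholder standing in for each fitted model; this is an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def heldout_log_likelihood(test_positions, alpha, prob_target, average_context=False):
    """Average of log p(w_nv = 1 | c_n) over held-out positions.

    test_positions: list of (target_id, context_ids) pairs.
    alpha: V x K matrix of context vectors.
    prob_target(v, ctx): model-specific probability of term v given the combined context.
    average_context: True for the skip-gram baseline, where context vectors are averaged.
    """
    total = 0.0
    for target_id, context_ids in test_positions:
        ctx = alpha[context_ids].mean(axis=0) if average_context else alpha[context_ids].sum(axis=0)
        total += np.log(prob_target(target_id, ctx))
    return total / len(test_positions)

# Example: for b-emb/cbow, prob_target(v, ctx) would be sigmoid(rho[v] @ ctx); for word2net,
# it would be the output of word v's network evaluated on ctx.
```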
We have also introduced parameter sharing as a way to share statistical strength across groups of words and we have shown empirically that it improves the performance of word2net. Another opportunity for future work is to explore other types of parameter sharing besides pos sharing, such as sharing layers across documents or learning a latent group structure together with the word networks. A Additional results For completeness, we show here some additional results that we did not include in the main text for space constraints. In particular, Table 5 compares the test log-likelihood of word2net with the competing models— namely, skip-gram and b-emb/cbow. All methods are trained with negative sampling, as described in the main text. This table shows the results for the Wikipedia dataset, similarly to Table 2, but it includes other model sizes (i.e., another value of K). In this table, word2net with no parameter sharing performs similarly to b-emb/cbow with the same number of parameters, but its performance can be further improved with part-of-speech (pos) parameter sharing. Table 6 shows the test log-likelihood for the U.S. Senate speeches. Here, skip-gram is the best method that does not use pos tags, but it is outperformed by word2net with pos parameter sharing. all all all all ll ll B Relation between Bernoulli embeddings and word2vec Word2vec (Mikolov et al., 2013b) is one of the most widely used method for learning vector representations of words. There are multiple ways to implement word2vec. First, there is a choice of the objective. Second, there are several ways of how to approximate the objective to get a scalable algorithm. In this section, we describe the two objectives, continuous bag-of-words (cbow) and skip-gram, and we focus on negative sampling as the method of choice to achieve scalability. We describe the similarities and differences between Bernoulli embeddings (Rudolph et al., 2016) and these two objectives. In summary, under certain assumptions Bernoulli embeddings are equivalent to cbow with negative sampling, and are related to skip-gram through Jensen’s inequality. b-emb cbow (negative sampling) First we explain how Bernoulli embeddings and cbow with negative sampling are related. Consider the Bernoulli embedding full objective, L. ; ˛/ D X n 0@ X vW wnvD1 log . >v ˙n/C X vW wnvD0 log . >v ˙n/ 1A : (9) In most cases, the summation over negative examples (wnv D 0) is computationally expensive to compute. To address that, we form an unbiased estimate of that term by subsampling a random set Sn Table 6: Comparison of the test log-likelihood across different models on the Senate speeches. We compare models with the same context dimension K and the same total number of parameters p=V for different context sizes (“cs”). For word2net, we explore different parameter sharing schemes. The color coding of the parameter sharing (same as Figure 1) indicates which layer is shared and how. vocabulary K p=V cs 2 cs 4 cs 8 Mikolov et al. (2013b): skip-gram words 20 40 1:052 1:080 1:061 skip-gram tagged words 20 240 1:175 1:199 1:227 Mikolov et al. (2013b); Rudolph et al. 
(2016): b-emb/cbow words 20 40 1:274 1:246 1:222 b-emb/cbow tagged words 20 240 1:352 1:340 1:339 b-emb/cbow words 165 330 1:735 1:734 1:744 this work: sharing word2net pos pos pos all all all all all all 20 330 1:406 1:555 1:401 word2net pos pos pos all all all 20 120 1:276 1:256 1:243 word2net p s p s p s ll ll ll all all all 20 230 1:462 1:435 1:413 word2net pos pos pos all all all all all all 20 120 0:873 0:860 0:850 word2net s s s ll ll ll ll 20 230 1:057 1:034 1:015 of terms and rescaling by V 1 jSnj , bL. ; ˛/ DX n 0@ X vW wnvD1 log . >v ˙n/C V 1 jSnj X v2Sn log . >v ˙n/ 1A : (10) Here, we have introduced an auxiliary coefficient . The estimate is unbiased only for D 1; however, Rudolph et al. (2016) showed that downweighting the contribution of the zeros works better in practice.8 In particular, if we set the downweight factor as D jSnj V 1 , we recover the objective of cbow with negative sampling, bL. ; ˛/ DX n 0@ X vW wnvD1 log . >v ˙n/C X v2Sn log . >v ˙n/ 1A LCBOW. ; ˛/ (11) There are two more subtle theoretical differences between both. The first difference is that Bernoulli embeddings include a regularization term for the embedding vectors, whereas cbow does not. The second difference is that, in Bernoulli embeddings, we need to draw a new set of negative samples Sn at each iteration of the gradient ascent algorithm (because we form a noisy estimator of the downweighted objective). In contrast, in cbow with negative sampling, the samples Sn are drawn once in advance and then hold fixed. In practice, for large datasets, we have not observed significant differences in the performance of both approaches. For simplicity, we draw the negative samples Sn only once. cbow (negative sampling) skip-gram (negative sampling) Now we show how cbow and skip-gram are related (considering negative sampling for both). Recall that the objective of cbow is to predict a target word from its context, while the skip-gram objective is to predict the context from the target word. Negative sampling breaks the multi-class constraint that the sum of the probability of each word must equal one, and instead models probabilities of the individual entries of the one-hot vectors representing the words. When we apply negative sampling, the cbow objective becomes Eq. 11. The skip-gram objective is given by Lskip-gram. ; ˛/ D X .n;v/W wnvD1 0@X v02cn log >v ˛v0 C X v02Sn log >v ˛v0 1A ; (12) 8This is consistent with the approaches in recommender systems (Hu et al., 2008). That is, for each target term wnv , the cbow objective has one term while the skip-gram objective has jcnj terms. Consider a term .n; v/ for which wnv D 1. We take the corresponding cbow term from Eq. 11 and we apply Jensen’s inequality to obtain the corresponding skip-gram term in Eq. 12: log . >v ˙n/ D log 0@ >v X v02cn ˛v0 1A X v02cn log >v ˛v0 : (13) Here, we have made use of the concavity of the log . / function. In general, this is a consequence of the convexity of the log-normalizer of the (Bernoulli) exponential family distribution. This holds for the “positive” examples wnv . As for the negative examples (wnv D 0), the comparison is not as straightforward, because the choice of terms in Eqs. 11 and 12 is not exactly the same. In particular, Eq. 11 holds v0 fixed and draws v from the noise distribution, while Eq. 12 holds v fixed and draws v0 from the noise distribution. C Data preprocessing In this paper we study Wikipedia articles (text8) and a corpus of U.S. Senate speeches. 
On both corpora, we restrict the vocabulary to the 15K most frequent words, replacing all the remaining words with a designated token. We annotate the data using nltk tagger9 or the Stanford CoreNLP tagger (Manning et al., 2014), using the universal tagset shown in Table 7. The Senate speeches contain a lot of boilerplate repetitive language; for this reason, we tokenize around 350 frequent phrases, such as senator from alabama or united states, considering the entire phrase an individual vocabulary term. We apply the pos tagger before this tokenization step, and then we assign the noun tag to all phrases. We split the data into training (90%), testing (5%), and validation (5%) sets. We use the validation set to assess convergence, as explained in the main text. We subsample the frequent words following Mikolov et al. (2013b); i.e., each word wn in the training set is discarded with probability Prob.wn is discarded/ D 1 s t frequency.wn/ ; (14) where frequency.wn/ denotes the frequency of word wn, and t D 10 5. For each method, we use jSnj D 10 negative samples on the Wikipedia articles and jSnj D 20 negative samples on the Senate speeches. Following Mikolov et al. (2013b), we draw the negative samples from the unigram distribution raised to the power of 0:75. 9See http://nltk.org
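The preprocessing steps above (frequent-word subsampling in Eq. 14 and negative samples drawn from the unigram distribution raised to the 0.75 power) are compact to implement. The sketch below is a generic reimplementation of these standard steps with hypothetical variable names, not the code used in the paper.

```python
import numpy as np

def keep_probability(freq, t=1e-5):
    """Complement of Eq. 14: a token with relative frequency `freq` is kept w.p. sqrt(t / freq)."""
    return np.minimum(1.0, np.sqrt(t / freq))

def subsample_tokens(tokens, rel_freq, rng, t=1e-5):
    """Discard frequent tokens at random; rel_freq maps token id -> relative corpus frequency."""
    return [w for w in tokens if rng.random() < keep_probability(rel_freq[w], t)]

def make_negative_sampler(unigram_counts, rng, power=0.75):
    """Return a function drawing negative samples from the unigram distribution ** power."""
    probs = np.asarray(unigram_counts, dtype=float) ** power
    probs /= probs.sum()
    return lambda n: rng.choice(len(probs), size=n, p=probs)

rng = np.random.default_rng(0)
counts = [5000, 1200, 300, 40]           # toy vocabulary of four terms
draw_negatives = make_negative_sampler(counts, rng)
negatives = draw_negatives(10)            # |S_n| = 10 negatives, as used on the Wikipedia data
```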
1. What is the novel approach presented in the paper for learning vector representations of words? 2. What is the advantage of using non-linear combinations of context vectors in the proposed method? 3. How does the proposed method incorporate additional context information, such as POS tags, into word vector learning? 4. Why does the reviewer find the experimental section weak? 5. What are some common evaluation tasks for word vectors, and why does the reviewer believe they are important?
Review
Review The paper presents a method that uses non-linear combinations of context vectors to learn vector representations of words. The main idea is to replace each word embedding with a neural network that scores how likely the current word is given the context words. This also allows them to use other context information (such as POS tags) for word vector learning. I like the approach, although, not being an expert in the area, I cannot comment on whether there are existing approaches with similar objectives. I think the experimental section is weak. Most work on word vectors is evaluated on several word similarity and analogy tasks (see the GloVe paper). However, this paper only reports numbers on the task of predicting the next word. Response to rebuttal: I am still not confident about the evaluation. I feel word vectors should definitely be tested on similarity tasks (if not analogy). As a result, I am keeping my score the same.
ICLR
Title Word2net: Deep Representations of Language Abstract Word embeddings extract semantic features of words from large datasets of text. Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words. Here we propose word2net, a method that replaces their linear parametrization with neural networks. For each term in the vocabulary, word2net posits a neural network that takes the context as input and outputs a probability of occurrence. Further, word2net can use the hierarchical organization of its word networks to incorporate additional meta-data, such as syntactic features, into the embedding model. For example, we show how to share parameters across word networks to develop an embedding model that includes part-of-speech information. We study word2net with two datasets, a collection of Wikipedia articles and a corpus of U.S. Senate speeches. Quantitatively, we found that word2net outperforms popular embedding methods on predicting heldout words and that sharing parameters based on part of speech further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. 1 Introduction Word embeddings are an important statistical tool for analyzing language, processing large datasets of text to learn meaningful vector representations of the vocabulary (Bengio et al., 2003; 2006; Mikolov et al., 2013b; Pennington et al., 2014). Word embeddings rely on the distributional hypothesis, that words used in the same contexts tend to have similar meanings (Harris, 1954). More informally (but equally accurate), a word is defined by the company it keeps (Firth, 1957). While there are many extensions and variants of embeddings, most rely on a log-bilinear model. This model posits that each term is associated with an embedding vector and a context vector. Given a corpus of text, these vectors are fit to maximize an objective function that involves the inner product of each observed word’s embedding with the sum of the context vectors of its surrounding words. With useful ways to handle large vocabularies, such as negative sampling (Mikolov et al., 2013a) or Bernoulli embeddings (Rudolph et al., 2016), the word embedding objective resembles a bank of coupled linear binary classifiers. Here we introduce word2net, a word embedding method that relaxes this linear assumption. Word2net still posits a context vector for each term, but it replaces each word vector with a term-specific neural network. This word network takes in the sum of the surrounding context vectors and outputs the occurrence probability of the word. The word2net objective involves the output of each word’s network evaluated with its surrounding words as input. The word2net objective resembles a bank of coupled non-linear binary classifiers. How does word2net build on classical word embeddings? The main difference is that the word networks can capture non-linear interaction effects between co-occurring words; this leads to a better model of language. Furthermore, the word networks enable us to share per-term parameters based on word-level meta-data, such as syntactic information. Here we study word2net models that share parameters based on part-of-speech (pos) tags, where the parameters of certain layers of each network are shared by all terms tagged with the same pos tag. Figure 1a illustrates the intuition behind word2net. Consider the term increase. 
The top of the figure shows one observation of the word, i.e., one of the places in which it appears in the data. (This excerpt is from U.S. Senate speeches.) From this observation, the word2net objective contains the probability of a binary variable wn;increase conditional on its context (i.e., the sum of the context vectors of the surrounding words). This variable is whether increase occurred at position n. Under review as a conference paper at ICLR 2018 neural network that outputs the probability of that word (Figure 1a). If we are given the tags of the words, we may use parameter sharing instead in order to form a per-word per-tag neural network (Figure 1b). Finally, we also propose a method for computing similarities between the neural network representations of the words and demonstrate that they capture semantic (and even syntactic) similarities (Figure 1c). In our empirical study, we show that parameter sharing in word2net performs better than applying word2vec or standard Benoulli embeddings on the augmented vocabulary of word/tag pairs. We also demonstrate that deep Bernoulli embeddings provide better predictive log-likelihood when compared to word2vec or standard Bernoulli embeddings. R fjrr: moved this to the introduction, needs rewriting here Word embedding models learn semantic features of words by exploiting the co-occurrence patterns of words in a collection of documents. There are many extensions and variants of word embeddings (Bengio et al., 2003; 2006; Mnih & Hinton, 2007; Mikolov et al., 2013a;b;c; Pennington et al., 2014; Mnih & Teh, 2012; Mnih & Kavukcuoglu, 2013; Levy & Goldberg, 2014; Vilnis & McCallum, 2015; Barkan, 2016; Bamler & Mandt, 2017). Most of these approaches rely on a log-bilinear model, in which the emission probabilities depend on a dot product of the word embedding vectors and the context vectors, as opposed to the deep neural network architectures proposed by Bengio et al. (2003; 2006) and Mnih & Hinton (2007). Our model di ers from these deep neural network architectures in two ways. First, we have a separate network for each vocabulary word, instead of a single network that outputs the logits for all words in the vocabulary. Our perspective of a bank of parallel binary classification problems allows for faster optimization of the networks. Second, our architecture enables incorporating side information (such as part of speech tags) in specific layers of the network. Recall that word embeddings (without any further structure) tend to capture semantic properties of the words, and the syntactic properties they encode are typically redundant (Andreas & Klein, 2014), so there is room for improvement with a model that allows for additional syntactic structure. We adopt the perspective of exponential family embeddings (Rudolph et al., 2016), which extend word embeddings to datasets beyond text. There are also some variants and extensions of exponential family embeddings (Rudolph & Blei, 2017; Rudolph et al., 2017; Liu & Blei, 2017; Liu et al., 2017), but they all have in common an exponential family likelihood whose natural parameter is determined 2 The idea behind word2net is hat the conditio al probability ofwn;increase is the output of a multi-layer network that takes the context as input. Each layer of the network transforms the context into a new hidden representation, reweighting t e latent feature according to their relevance for predicting the occurrence of increa e. 
Note that not illustrated are th 0-vari bles, i.e., the negative samples, which correspond to words that are not at position n. In word2net, their probabilities also come from their corresponding word networks. Now suppose we have tagged the corpus with pos. Figure 1b shows how to incorporate this syntactic information into word2net. The network is specific to increase as a noun (as opposed to a verb). The paramete s of the fir layer ( range) ar hared among all nouns i the coll ction; the other layers (blue) are specific to increase. Thus, the networks for increase/nou and increas /verb differ in how the first layer promotes the latent aspects of the context, i.e., according to which context features are more relevant for each pos tag. This model further lets us consider these two pos tags separately. Figure 1c shows the most similar words to each sense of increase; the method correctly picks out tagged words related to the ver and rela ed to the noun. Below, we develop th details of word2net a d study its performance with two datasets, a coll ction of Wikipedia articles and a corpus of U.S. Senate peech s. We found hat word2net outperforms popular embedding methods on predicting held-out words, and that sharing parameters based on pos further boosts performance. Qualitatively, word2net learns interpretable semantic representations and, compared to vector-based methods, better incorporates syntactic information. Related work. Word2net builds on word embeddings methods. Though originally designed as deep neural network rchitecture (Bengio et al., 2003; 2006; Mnih & Hinton, 2007), most applications of word embeddings now rely on log-bilinear models (Mikolov et al., 2013a;b;c; Pennington et al., 2014; Mnih & Teh, 2012; Mnih & Kavukcuoglu, 2013; Levy & Goldberg, 2014; Vilnis & McCallum, 2015; Barkan, 2016; Bamler &Mandt, 2017). The key innovation behind word2net is that it represents words with functions, instead of vectors (Rumelhart et al., 1986) or distributions (Vilnis & McCallum, 2015). Word2net keeps context vectors, but it replaces the embedding vector with a neural network. Previous work has also used deep neural networks for word embeddings (Bengio et al., 2003; 2006; Mnih & Hinton, 2007); these methods use a single network that outputs the unnormalized log probabilities for all words in the vocabulary. Word2net takes a different strategy: it has a separate network for each vocabulary word. Unlike the previous methods, word2net’s approach helps maintain the objective as a bank of binary classifiers, which allows for faster optimization of the networks. To develop word2net, we adopt the perspective of exponential family embeddings (Rudolph et al., 2016), which extend word embeddings to data beyond text. There are several extensions to exponential family embeddings (Rudolph & Blei, 2017; Rudolph et al., 2017; Liu & Blei, 2017), but they all have in common an exponential family likelihood whose natural parameter has a log-bilinear form. Word2net extends this framework to allow for non-linear relationships. Here we focus on Bernoulli embeddings, which are related to word embeddings with negative sampling, but our approach easily generalizes to other exponential family distributions (e.g., Poisson). Finally, word embeddings can capture semantic properties of the word, but they tend to neglect most of the syntactic information (Andreas & Klein, 2014). Word2net introduces a simple way to leverage the syntactic information to improve the quality of the word representations. 
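The contrast drawn above between a single network that outputs scores for the whole vocabulary and word2net's bank of per-word classifiers can be summarized in a few lines. Everything below (the shapes, the two-layer toy network, the choice of which words to score) is purely illustrative and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, H = 15000, 20, 10
context = rng.normal(size=K)

# (a) Single-network language model: one output layer produces a score for every word at once,
#     so every vocabulary word is touched at each training step.
W_out = rng.normal(size=(K, V))
all_scores = context @ W_out                       # shape (V,): one logit per vocabulary word

# (b) Word2net-style bank of binary classifiers: each word owns its own small network and is
#     scored independently, so only the observed word and its negative samples are evaluated.
def tiny_word_net(x, params):
    w1, w2 = params
    return float(1.0 / (1.0 + np.exp(-np.tanh(x @ w1) @ w2)))

needed_words = [3, 17, 42]                          # target word plus sampled negatives
params = {v: (rng.normal(size=(K, H)), rng.normal(size=H)) for v in needed_words}
probs = {v: tiny_word_net(context, params[v]) for v in needed_words}
```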
2 Word2Net In this section we develop word2net as a novel extension of Bernoulli embeddings (Rudolph et al., 2016). Bernoulli embeddings are a conditional model of text, closely related to word2vec. Specifically, they are related to continuous bag-of-words (cbow) with negative sampling.1 Wefirst reviewBernoulli embeddings and then we present word2net as a deep Bernoulli embedding model. 2.1 Background: Bernoulli embeddings Exponential family embeddings learn an embedding vector v 2 RK and a context vector ˛v 2 RK for each unique term in the vocabulary, v D 1; : : : ; V . These vectors encode the semantic properties of words, and they are used to parameterize the conditional probability of a word given its context. Specifically, let wn be the V -length one-hot vector indicating the word at location n, such that wnv D 1 for one term (vocabulary word) v, and let cn be the indices of the words in a fixed-sized window centered at location n (i.e., the indices of the context words). Exponential family embeddings parameterize the conditional probability of the target word given its context via a linear combination of the embedding vector and the context vectors, p.wnv j cn/ D Bernoulli . >v ˙n/ ; with ˙n , X v02cn ˛v0 : (1) Here, .x/ D 1 1Ce x is the sigmoid function, and we have introduced the notation ˙n for the sum of the context vectors at location n. Note that Eq. 1 does not impose the constraint that the sum over the vocabulary words P v p.wnv D 1 j cn/ must be 1. This significantly alleviates the computational complexity (Mikolov et al., 2013b; Rudolph et al., 2016). This type of exponential family embedding is called Bernoulli embedding, named for its conditional distribution. In Bernoulli embeddings, our goal is to learn the embedding vectors v and the context vectors ˛v from the text by maximizing the log probability of words given their contexts. The data contains N pairs .wn; cn/ of words and their contexts, and thus we can form the objective function L. ; ˛/ as the sum of logp.wnv j cn/ for all instances and vocabulary words. The resulting objective can be seen as a bank of V binary classifiers, where V is the vocabulary size. To see that, we make use of Eq. 1 and express the objective L. ; ˛/ as a sum over vocabulary words, L. ; ˛/ D NX nD1 VX vD1 logp.wnv j cn/ D VX vD1 0@ X nW wnvD1 log . >v ˙n/C X nW wnvD0 log . >v ˙n/ 1A : (2) If we hold all the context vectors ˛v fixed, then Eq. 2 is the objective of V independent logistic regressors, each predicting whether a word appears in a given context or it does not. The positive examples are those where word v actually appeared in a given context; the negative examples are those where v did not appear. It is the context vectors that couple the V binary classifiers together. In practice, we need to either downweight the contribution of the zeros in Eq. 2, or subsample the set of negative examples for each n (Rudolph et al., 2016). We follow the latter case here, which leads to negative sampling (Mikolov et al., 2013b). (See the connection in more detail in Appendix B.) 1See Appendix B for more details on the connections. 2.2 Word2Net as a deep Bernoulli embedding model Word2net replaces the linear classifiers in Eq. 2 with non-linear classifiers. In particular, we replace the linear combination >v ˙n with a neural network that is specific to each vocabulary word v, so that p.wnv D 1 j cn/ D f .˙nI ˇv/ ; (3) where f . I ˇv/ W RK ! R is a feed-forward neural network with parameters (i.e., weights and intercepts) ˇv . 
The number of neurons of the input layer is K, equal to the length of the context vectors ˛v . Essentially, we have replaced the per-term embedding vectors v with a per-term neural network ˇv . We refer to the per-term neural networks as word networks. The word2net objective is the sum of the log conditionals, Lword2net. ; ˛/ D VX vD1 0@ X nW wnvD1 log f .˙nI ˇv/ C X nW wnvD0 log f .˙nI ˇv/ 1A ; (4) where we choose the function f . I ˇv/ to be a three-layer neural network,2 h.1/nv D tanh ˙>n ˇ .1/ v ; h.2/nv D tanh .h.1/nv / >ˇ.2/v ; f .˙nI ˇv/ D .h .2/ nv / >ˇ.3/v : (5) Replacing vectors with neural networks has several implications. First, the bank of binary classifiers has additional model capacity to capture nonlinear relationships between the context and the cooccurrence probabilities. Specifically, each layer consecutively transforms the context to a different representation until the weight matrix at the last layer can linearly separate the real occurrences of the target word from the negative examples. Second, for a fixed dimensionality K, the resulting model has more parameters.3 This increases the model capacity, but it also increases the risk of overfitting. Indeed, we found that without extra regularization, the neural networks may easily overfit to the training data. We regularize the networks via either weight decay or parameter sharing (see below). In the empirical study of Section 3 we show that word2net fits text data better than its shallow counterparts and that it captures semantic similarities. Even for infrequent words, the learned semantic representations are meaningful. Third, we can exploit the hierarchical structure of the neural network representations via parameter sharing. Specifically, we can share the parameters of a specific layer of the networks of different words. This allows us to explicitly account for pos tags in our model (see below). Regularization through parameter sharing enables the use of pos tags. One way to regularize word2net is through parameter sharing. For parameter sharing, each word is assigned to one of T groups. Importantly, different occurrences of a term may be associated to different groups. We share specific layers of the word networks among words in the same group. In this paper, all neural network representations have 3 layers. We use index ` 2 f1; 2; 3g to denote the layer at which we apply the parameter sharing. Then, for each occurrence of term v in group t we set ˇ.`/v D ˇ.`/t . Consider now two extreme cases. First, for T D 1 group, we have a strong form of regularization by forcing all word networks to share the parameters of layer `. The number of parameters for layer ` has been divided by the vocabulary size, which implies a reduction in model complexity that might help prevent overfitting. This parameter sharing structure does not require side information and hence can be applied to any text corpus. In the second extreme case, each word is in its own group and T D V . This set-up recovers the model of Eqs. 4 and 5, which does not have parameter sharing. When we have access to a corpus annotated with pos tags, parameter sharing lets us use the pos information to improve the capability of word2net by capturing the semantic structure of the data. Andreas & Klein (2014) have shown that word embeddings do not necessarily encode much syntactic information, and it is still unclear how to use syntactic information to learn better word embeddings. 
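To make Eqs. 4 and 5 concrete, the sketch below implements one word network (two tanh hidden layers followed by a sigmoid output) and the contribution of a single text position to the negative-sampled objective, reading the negative-example term as log(1 − f), i.e., the log-probability of non-occurrence. The shapes, variable names, and toy data are assumptions made for the example, not the reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_network(sigma_n, layers):
    """Eq. 5: f(Sigma_n; beta_v) with two tanh hidden layers and a sigmoid output."""
    h1 = np.tanh(sigma_n @ layers[0])      # (K,) @ (K, H1) -> (H1,)
    h2 = np.tanh(h1 @ layers[1])           # (H1,) @ (H1, H2) -> (H2,)
    return float(sigmoid(h2 @ layers[2]))  # (H2,) @ (H2,) -> scalar probability

def position_objective(target_id, context_ids, negative_ids, alpha, beta):
    """One position's contribution to the word2net objective (Eq. 4) with negative sampling."""
    sigma_n = alpha[context_ids].sum(axis=0)                  # sum of the context vectors
    obj = np.log(word_network(sigma_n, beta[target_id]))      # observed word: positive example
    for v in negative_ids:                                     # sampled words: negative examples
        obj += np.log(1.0 - word_network(sigma_n, beta[v]))
    return obj

# Toy setup matching the K=20, H1=H2=10 configuration described in Section 3.
rng = np.random.default_rng(0)
V, K, H1, H2 = 50, 20, 10, 10
alpha = rng.normal(scale=0.1, size=(V, K))
beta = {v: [rng.normal(scale=0.1, size=(K, H1)),
            rng.normal(scale=0.1, size=(H1, H2)),
            rng.normal(scale=0.1, size=H2)] for v in range(V)}
value = position_objective(3, [1, 2, 4, 5], [7, 9], alpha, beta)
```

With pos parameter sharing, the first-layer matrix beta[v][0] would simply be swapped for the matrix tied to the tag observed at that position, as discussed next.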
The main issue is that many words can appear with different tags; for example, fish can be both a noun and refer to the animal or a verb and refer to the activity of catching the animal. On the one hand, both meanings are related. On the other hand, they may have differing profiles of which 2Three layers performed well in our experiments, allowing for parameter sharing to include pos tags. 3For fairness, in Section 3 we also compare to shallow models with the same number of parameters. contexts they appear in. Ideally, embedding models should be able to capture the difference. However, the simple approach of considering fish/noun and fish/verb as separate terms fails because there are few occurrences of each individual term/tag pair. (We show that empirically in Section 3.) Exploiting the hierarchical nature of the network representations of word2net, we incorporate pos information through parameter sharing as follows. Assume that for location n in the text we have a one-hot vector sn 2 f0; 1gT indicating the pos tag. To model the observation at position n, we use a neural network specific to that term/tag combination, p.wnv D 1; snt D 1 j cn/ D f ˙nI ˇ .:`/ v ; ˇ .`/ t : (6) That is, the neural network parameters are combined to form a neural network in which layer ` has parameters ˇ.`/t and the other layers have parameters ˇ .:`/ v . Thus, we leverage the information about the pos tag t by replacing ˇ.`/v with ˇ.`/t in layer `, resulting in pos parameter sharing at that layer. If the same term v appears at a different position n0 with a different pos tag t 0, at location n0 we replace the parameters ˇ.`/v of layer ` with ˇ.`/t 0 . Figure 1b illustrates pos parameter sharing at ` D 1. Even though now we have a function f . / for each term/tag pair, the number of parameters does not scale with the product V T ; indeed the number of parameters of the network with pos information is smaller than the number of parameters of the network without side information (Eq. 5). The reason is that the number of parameters necessary to describe one of the layers has been reduced from V to T due to parameter sharing (the other layers remain unchanged). Finally, note that we have some flexibility in choosing which layer is tag-specific and which layers are word-specific. We explore different combinations in Section 3, where we show that word2net with pos information improves the performance of word2net. The parameter sharing approach extends to side information beyond pos tags, as long as the words can be divided into groups, but we focus on parameter sharing across all words (T D 1) or across pos tags. Semantic similarity of word networks. In standard word embeddings, the default choice to compute semantic similarities between words is by cosine distances between the word vectors. Since word2net replaces the word vectors with word networks, we can no longer apply this default choice. We next describe the procedure that we use to compute semantic similarities between word networks. After fitting word2net, each word is represented by a neural network. Given that these networks parameterize functions, we design a metric that accounts for the fact that two functions are similar if they map similar inputs to similar outputs. So the intuition behind our procedure is as follows: we consider a set of K-dimensional inputs, we evaluate the output of each neural network on this set of inputs, and then we compare the outputs across networks. 
For the inputs, we choose the V context vectors, which we stack together into a matrix ˛ 2 RV K . We evaluate each network f . / row-wise on ˛ (i.e., feeding each ˛v as a K-dimensional input to obtain a scalar output), obtaining a V -dimensional summary of where the network f . / maps the inputs. Finally, we use the cosine distance of the outputs to compare the outputs across networks. In summary, we obtain the similarity of two words w and v as dist .w; v/ D f .˛I ˇw/ >f .˛I ˇv/ jjf .˛I ˇw/jj2 jjf .˛I ˇv/jj2 : (7) If we are using parameter sharing, we can also compare pos-tagged words; e.g., we may ask how similar is fish/noun to fish/verb. The two combinations will have different representations under the word2net method trained with pos-tag sharing. Assuming that layer ` is the shared layer, we compute the semantic similarity between the word/tag pair Œw; t and the pair Œv; s as dist.Œw; t ; Œv; s / D f .˛I ˇ .:`/ w ; ˇ .`/ t / >f .˛I ˇ .:`/ v ; ˇ .`/ s / jjf .˛I ˇ .:`/ w ; ˇ .`/ t /jj2 jjf .˛I ˇ .:`/ v ; ˇ .`/ s /jj2 : (8) 3 Empirical results In this section we study the performance of word2net on two datasets, Wikipedia articles and Senate speeches. We show that word2net fits held-out data better than existing models and that the learned network representations capture semantic similarities. Our results also show that word2net is superior at incorporating syntactic information into the model, which improves both the predictions and the quality of the word representations. Data. We use word2net to study two data sets, both with and without pos tags: Wikipedia: The text8 corpus is a collection of Wikipedia articles, containing 17M words. We form a vocabulary with the 15K most common terms, replacing less frequent terms with the unknown token. We annotate text8 using the nltk pos tagger and the universal tagset.4 Table 7 in Appendix C shows a description of the tagset. We also form a tagged dataset in which each term/tag combination has a unique token, resulting in a vocabulary of 49K tagged terms. Senate speeches: These are the speeches given in the U.S. Senate in the years 1916-2009. The data is a transcript of spoken language and contains 24M words. Similarly as above, we form a vocabulary of 15K terms. We annotate the text using the Stanford CoreNLP pos tagger (Manning et al., 2014), and we map the tags to the universal tagset. We form a tagged dataset with 38K tagged terms. Table 1 summarizes the information about both corpora. We split each dataset into a training, a validation, and a test set, which respectively contain 90%, 5%, and 5% of the words. Additional details on preprocessing are in Appendix C. Methods. We compare word2net to its shallow counterpart, the cbow model (Mikolov et al., 2013b), which is equivalent to Bernoulli embeddings (b-emb)5 (Rudolph et al., 2016). We also compare with the skip-gram model.6 (Mikolov et al., 2013b) We run b-emb/cbow and skip-gram on the data and also on the augmented data of pos-tagged terms. In detail, the methods we compare are: b-emb/cbow: Learns vector representations for each word (or tagged word) by optimizing Eq. 2. Skip-gram: Learns vector representations for each word (or tagged word) by optimizing Eq. 12. Word2net: Learns a neural network representation for each word by optimizing Eq. 4. We study the following parameter sharing schemes: 1. pos pos pos all all all all all all : no parameter sharing. 2. pos pos pos all all all all all all : layer ` shared between all networks. 3. 
s pos all all all all all all : layer ` shared between terms with the same part-of-speech (pos) tag. For word2net, we experiment with the context dimensions K 2 f20; 100g. The context dimension is also the dimension of the input layer. For K D 20, we useH1 D 10 hidden units in the first hidden layer of each word network and H2 D 10 hidden units in the second layer. For K D 100, we use H1 D H2 D 20 hidden units. Without parameter sharing, the number of parameters per word is K C KH1 CH1H2 CH2. The shallow models have 2K parameters per term (the entries of the context nd word vectors). Since we want to compare models both in terms of context dimension K and in terms of total parameters, we fit the methods with K 2 f20; 165; 100; 1260g. We experiment with context sizes jcnj 2 f2; 4; 8g and we train all methods using stochastic gradient descent (sgd) (Robbins & Monro, 1951) with jSnj D 10 negative samples on the Wikipedia data and with jSnj D 20 negative samples on the Senate speeches. We use l2 regularization with standard deviation 10 for the word and context vectors, as well as weight decay for the neural networks. We use Adam (Kingma & Ba, 2015) with Tensorflow’s default settings (Abadi et al., 2016) to train all methods for up to 30000 iterations, using a minibatch size of 4069 or 1024. We assess convergence by monitoring the loss on a held-out validation set every 50 iterations, and we stop training when the average validation loss starts increasing. We initialize and freeze the context vectors of the word2net methods with the context vectors from a pretrained Bernoulli embedding with the same context dimension K. Network parameters are initialized according to standard initialization schemes of 4See http://nltk.org. 5See Appendix B for the detailed relationship between b-emb and cbow with negative sampling. 6The skip-gram objective is related to cbow/b-emb through Jensen’s inequality (see Appendix B). Table 2: Word2net outperforms existing word embedding models (skip-gram and b-emb/cbow) in terms of test log-likelihood on the Wikipedia data, both with and without pos tags. We compare models with the same context dimensionK and the same total number of parameters p=V for different context sizes (cs). (Results on more configurations are in Appendix A.) For word2net, we study different parameter sharing schemes, and the color coding indicates which layer is shared and how, as in Figure 1. Parameter sharing improves the performance of word2net, especially with pos tags. vocabulary K p=V cs 2 cs 4 cs 8 Mikolov et al. (2013b): skip-gram words 20 40 1:061 1:062 1:071 skip-gram tagged words 20 240 2:994 3:042 3:042 Mikolov et al. (2013b); Rudolph et al. (2016): b-emb/cbow words 20 40 1:023 0:976 0:941 b-emb/cbow words 165 330 1:432 1:388 1:381 b-emb/cbow tagged words 20 240 1:411 1:437 1:461 this work: sharing word2net pos pos pos all all all all all all 20 330 0:940 0:912 0:937 word2net pos pos pos all all all 20 120 1:040 1:003 0:964 word2net p s p s p s ll ll ll all all all 20 230 1:191 1:141 1:111 word2net pos pos pos all all all ll ll ll 20 320 0:863 0:881 0:890 word2net pos pos pos all all all all all all 20 120 0:918 0:914 0:871 word2net s s s ll ll ll ll 20 230 0:844 0:801 0:793 word2net pos pos pos all all all all all all 20 320 0:840 0:822 0:862 feed-forward neural networks (Glorot & Bengio, 2010), i.e., the weights are initialized from a uniform distribution with bounds˙ p 6= p Hin CHout. Quantitative results: Word2net has better predictive performance. 
We compute the predictive log-likelihood of the words in the test set, logp.wnv j cn/. For skip-gram, which was trained to predict the context words from the target, we average the context vectors ˛v for a fair comparison.7 Table 2 shows the results for the Wikipedia dataset. We explore different model sizes: with the same number of parameters as word2net, and with the same dimensionality K of the context vectors. For word2net, we explore different parameter sharing approaches. Table 5 in Appendix A shows the results for other model sizes (includingK D 100). In both tables, word2net without parameter sharing performs at least as good as the shallow models. Importantly, the performance of word2net improves with parameters sharing, and it outperforms the other methods. Tables 2 and 5 also show that b-emb/cbow and skip-gram perform poorly when we incorporate pos information by considering an augmented vocabulary of tagged words. The reason is that each term becomes less frequent, and these approaches would require more data to capture the cooccurrence patterns of tagged words. In contrast, word2net with pos parameter sharing provides the best predictions across all methods (including other versions of word2net). Finally, Table 6 in Appendix A shows the predictive performance for the U.S. Senate speeches. On this corpus, skip-gram performs better than b-emb/cbow and word2net without parameter sharing; however, word2net with pos sharing also provides the best predictions across all methods. Qualitative results: Word2net captures similarities and leverages syntactic information. Table 3 displays the similarity between word networks (trained on Wikipedia with parameter sharing at layer ` D 1), compared to the similarities captured by word embeddings (b-emb/cbow). For each query word, we list the three most similar terms, according to the learned representations. The word vectors are compared using cosine similarity, while the word networks are compared using Eq. 7. The table shows that word2net can capture latent semantics, even for less frequent words such as parrot. Table 4 shows similarities of models trained on the Senate speeches. In particular, the table compares: b-emb/cbow without pos information, b-emb/cbow trained on the augmented vocabulary of tagged words, and word2net with pos parameter sharing at the input layer (` D 1). We use Eq. 8 to compute the similarity across word networks with pos sharing. We can see that word2net is superior at incorporating syntactic information into the learned representations. For example, the most similar 7If we do not average, the held-out likelihood of skip-gram becomes worse. networks to the pronoun me are other pronouns such as myself, my, and himself. Word networks are often similar to other word networks with the same pos tag, but we also see some variation. One such example is in Figure 1c, which shows that the list of the 10 most similar words to the verb increase contains the adjective half. 4 Discussion We have presented word2net, a method for learning neural network representations of words. The word networks are used to predict the occurrence of words in small context windows and improve prediction accuracy over existing log-bilinear models. We combine the context vectors additively, but this opens the door for future research directions in which we explore other ways of combining the context information, such as accounting for the order of the context words and their pos tags. 
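The similarity rankings shown in Tables 3 and 4 come from Eq. 7 (and Eq. 8 in the pos-shared case): each word network is evaluated on all V context vectors and the resulting output profiles are compared by cosine similarity. The sketch below is a generic reimplementation; the `network` callable, the linear toy scorer, and the parameter layout are assumptions made for the example.

```python
import numpy as np

def output_profile(network, params, alpha):
    """Evaluate one word network row-wise on the V x K matrix of context vectors (Eq. 7)."""
    return np.array([network(alpha[v], params) for v in range(alpha.shape[0])])

def network_similarity(network, params_w, params_v, alpha):
    """Cosine similarity between the output profiles of two word networks (Eq. 7)."""
    out_w = output_profile(network, params_w, alpha)
    out_v = output_profile(network, params_v, alpha)
    return float(out_w @ out_v / (np.linalg.norm(out_w) * np.linalg.norm(out_v)))

def shared_params(word_layers, tag_layer, shared=0):
    """Assemble the parameters of a tagged word: layer `shared` is taken from the tag (Eq. 8)."""
    layers = list(word_layers)
    layers[shared] = tag_layer
    return layers

# Minimal example with a linear 'network'; word2net would plug in the per-word networks instead.
rng = np.random.default_rng(0)
V, K = 100, 20
alpha = rng.normal(size=(V, K))
linear = lambda x, p: float(x @ p)
sim = network_similarity(linear, rng.normal(size=K), rng.normal(size=K), alpha)
```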
We have also introduced parameter sharing as a way to share statistical strength across groups of words and we have shown empirically that it improves the performance of word2net. Another opportunity for future work is to explore other types of parameter sharing besides pos sharing, such as sharing layers across documents or learning a latent group structure together with the word networks. A Additional results For completeness, we show here some additional results that we did not include in the main text for space constraints. In particular, Table 5 compares the test log-likelihood of word2net with the competing models— namely, skip-gram and b-emb/cbow. All methods are trained with negative sampling, as described in the main text. This table shows the results for the Wikipedia dataset, similarly to Table 2, but it includes other model sizes (i.e., another value of K). In this table, word2net with no parameter sharing performs similarly to b-emb/cbow with the same number of parameters, but its performance can be further improved with part-of-speech (pos) parameter sharing. Table 6 shows the test log-likelihood for the U.S. Senate speeches. Here, skip-gram is the best method that does not use pos tags, but it is outperformed by word2net with pos parameter sharing. all all all all ll ll B Relation between Bernoulli embeddings and word2vec Word2vec (Mikolov et al., 2013b) is one of the most widely used method for learning vector representations of words. There are multiple ways to implement word2vec. First, there is a choice of the objective. Second, there are several ways of how to approximate the objective to get a scalable algorithm. In this section, we describe the two objectives, continuous bag-of-words (cbow) and skip-gram, and we focus on negative sampling as the method of choice to achieve scalability. We describe the similarities and differences between Bernoulli embeddings (Rudolph et al., 2016) and these two objectives. In summary, under certain assumptions Bernoulli embeddings are equivalent to cbow with negative sampling, and are related to skip-gram through Jensen’s inequality. b-emb cbow (negative sampling) First we explain how Bernoulli embeddings and cbow with negative sampling are related. Consider the Bernoulli embedding full objective, L. ; ˛/ D X n 0@ X vW wnvD1 log . >v ˙n/C X vW wnvD0 log . >v ˙n/ 1A : (9) In most cases, the summation over negative examples (wnv D 0) is computationally expensive to compute. To address that, we form an unbiased estimate of that term by subsampling a random set Sn Table 6: Comparison of the test log-likelihood across different models on the Senate speeches. We compare models with the same context dimension K and the same total number of parameters p=V for different context sizes (“cs”). For word2net, we explore different parameter sharing schemes. The color coding of the parameter sharing (same as Figure 1) indicates which layer is shared and how. vocabulary K p=V cs 2 cs 4 cs 8 Mikolov et al. (2013b): skip-gram words 20 40 1:052 1:080 1:061 skip-gram tagged words 20 240 1:175 1:199 1:227 Mikolov et al. (2013b); Rudolph et al. 
(2016): b-emb/cbow words 20 40 1:274 1:246 1:222 b-emb/cbow tagged words 20 240 1:352 1:340 1:339 b-emb/cbow words 165 330 1:735 1:734 1:744 this work: sharing word2net pos pos pos all all all all all all 20 330 1:406 1:555 1:401 word2net pos pos pos all all all 20 120 1:276 1:256 1:243 word2net p s p s p s ll ll ll all all all 20 230 1:462 1:435 1:413 word2net pos pos pos all all all all all all 20 120 0:873 0:860 0:850 word2net s s s ll ll ll ll 20 230 1:057 1:034 1:015 of terms and rescaling by V 1 jSnj , bL. ; ˛/ DX n 0@ X vW wnvD1 log . >v ˙n/C V 1 jSnj X v2Sn log . >v ˙n/ 1A : (10) Here, we have introduced an auxiliary coefficient . The estimate is unbiased only for D 1; however, Rudolph et al. (2016) showed that downweighting the contribution of the zeros works better in practice.8 In particular, if we set the downweight factor as D jSnj V 1 , we recover the objective of cbow with negative sampling, bL. ; ˛/ DX n 0@ X vW wnvD1 log . >v ˙n/C X v2Sn log . >v ˙n/ 1A LCBOW. ; ˛/ (11) There are two more subtle theoretical differences between both. The first difference is that Bernoulli embeddings include a regularization term for the embedding vectors, whereas cbow does not. The second difference is that, in Bernoulli embeddings, we need to draw a new set of negative samples Sn at each iteration of the gradient ascent algorithm (because we form a noisy estimator of the downweighted objective). In contrast, in cbow with negative sampling, the samples Sn are drawn once in advance and then hold fixed. In practice, for large datasets, we have not observed significant differences in the performance of both approaches. For simplicity, we draw the negative samples Sn only once. cbow (negative sampling) skip-gram (negative sampling) Now we show how cbow and skip-gram are related (considering negative sampling for both). Recall that the objective of cbow is to predict a target word from its context, while the skip-gram objective is to predict the context from the target word. Negative sampling breaks the multi-class constraint that the sum of the probability of each word must equal one, and instead models probabilities of the individual entries of the one-hot vectors representing the words. When we apply negative sampling, the cbow objective becomes Eq. 11. The skip-gram objective is given by Lskip-gram. ; ˛/ D X .n;v/W wnvD1 0@X v02cn log >v ˛v0 C X v02Sn log >v ˛v0 1A ; (12) 8This is consistent with the approaches in recommender systems (Hu et al., 2008). That is, for each target term wnv , the cbow objective has one term while the skip-gram objective has jcnj terms. Consider a term .n; v/ for which wnv D 1. We take the corresponding cbow term from Eq. 11 and we apply Jensen’s inequality to obtain the corresponding skip-gram term in Eq. 12: log . >v ˙n/ D log 0@ >v X v02cn ˛v0 1A X v02cn log >v ˛v0 : (13) Here, we have made use of the concavity of the log . / function. In general, this is a consequence of the convexity of the log-normalizer of the (Bernoulli) exponential family distribution. This holds for the “positive” examples wnv . As for the negative examples (wnv D 0), the comparison is not as straightforward, because the choice of terms in Eqs. 11 and 12 is not exactly the same. In particular, Eq. 11 holds v0 fixed and draws v from the noise distribution, while Eq. 12 holds v fixed and draws v0 from the noise distribution. C Data preprocessing In this paper we study Wikipedia articles (text8) and a corpus of U.S. Senate speeches. 
On both corpora, we restrict the vocabulary to the 15K most frequent words, replacing all the remaining words with a designated token. We annotate the data using nltk tagger9 or the Stanford CoreNLP tagger (Manning et al., 2014), using the universal tagset shown in Table 7. The Senate speeches contain a lot of boilerplate repetitive language; for this reason, we tokenize around 350 frequent phrases, such as senator from alabama or united states, considering the entire phrase an individual vocabulary term. We apply the pos tagger before this tokenization step, and then we assign the noun tag to all phrases. We split the data into training (90%), testing (5%), and validation (5%) sets. We use the validation set to assess convergence, as explained in the main text. We subsample the frequent words following Mikolov et al. (2013b); i.e., each word wn in the training set is discarded with probability Prob.wn is discarded/ D 1 s t frequency.wn/ ; (14) where frequency.wn/ denotes the frequency of word wn, and t D 10 5. For each method, we use jSnj D 10 negative samples on the Wikipedia articles and jSnj D 20 negative samples on the Senate speeches. Following Mikolov et al. (2013b), we draw the negative samples from the unigram distribution raised to the power of 0:75. 9See http://nltk.org
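The Jensen's-inequality relation in Appendix B (Eq. 13, whose inequality sign was lost in extraction) states that the cbow term log σ(ρ_v^T Σ_n) is never smaller than the corresponding sum of skip-gram terms over the context words. A quick numerical check of this bound on randomly drawn vectors is shown below; the vectors and sizes are arbitrary.

```python
import numpy as np

def log_sigmoid(x):
    # numerically stable log(sigmoid(x)) = -log(1 + exp(-x))
    return -np.logaddexp(0.0, -x)

rng = np.random.default_rng(0)
K, context_size = 20, 4

for _ in range(1000):
    rho = rng.normal(size=K)                      # embedding vector of the target word
    alphas = rng.normal(size=(context_size, K))   # context vectors of the surrounding words
    cbow_term = log_sigmoid(rho @ alphas.sum(axis=0))
    skipgram_terms = log_sigmoid(alphas @ rho).sum()
    assert cbow_term >= skipgram_terms - 1e-12    # Eq. 13: the cbow term upper-bounds the skip-gram sum

print("Eq. 13 holds on all random draws.")
```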
1. What is the focus of the paper regarding neural language models? 2. What are the strengths of the proposed approach, particularly in utilizing side information? 3. What are the weaknesses of the paper, especially in its evaluation methodology? 4. How does the reviewer assess the significance of the proposed method? 5. Are there any concerns regarding the comparison with other works in the field?
Review
Review This paper presents another variant of neural language models used to learn word embeddings. In keeping with the formulation of Mikolov et al., the model learned is a set of independent binary classifiers, one per word. As opposed to other work, each classifier is not based on the dot product between an embedding vector and a context vector; instead it is a per-word neural network that takes the context as input and produces a score for each term. An interesting consequence of using networks instead of vectors to parametrize the embeddings is that it is easy to see many ways to let the model use side information such as part-of-speech tags. The paper explores one such way, by sharing parameters across the networks of all words that have the same POS tag (effectively giving different parameterizations to words that occur with multiple POS tags). The idea is interesting, but the evaluation leaves doubts. Here are my main problems: 1. The quantitative likelihood-based evaluation can easily be gamed by making all classifiers output numbers close to 1. This is because the model is not normalized, and no attempt at normalization is claimed to be made during the likelihood evaluation. This means that hyperparameter tuning (of, say, how many negative examples to use per positive example) is likely to bias this evaluation to look more positive than it should. 2. The qualitative similarity-based evaluation notes, correctly, that the standard metric of dot product / cosine between word embeddings does not work in the case of networks, and instead measures similarity by looking at the similarity of the predictions of the networks. All networks are then ranked by similarity to a query network to produce the now-standard similar-word lists. While this approach is interesting, the baseline models were evaluated using the plain dot product. It is unclear whether this new evaluation methodology would also have produced nicer word lists for the baseline methods. In light of these two issues with the evaluation, I do not recommend accepting this paper.
ICLR
Title Laplacian Networks: Bounding Indicator Function Smoothness for Neural Networks Robustness Abstract For the past few years, Deep Neural Network (DNN) robustness has become a question of paramount importance. As a matter of fact, in sensitive settings misclassification can lead to dramatic consequences. Such misclassifications are likely to occur when facing adversarial attacks, hardware failures or limitations, and imperfect signal acquisition. To address this question, authors have proposed different approaches aiming at increasing the robustness of DNNs, such as adding regularizers or training using noisy examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DNN architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. Since it is agnostic to the type of deformations that are expected when predicting with the DNN, the proposed regularizer can be combined with existing ad-hoc methods. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving the robustness of DNNs on classical supervised learning vision datasets. 1 Introduction Deep Neural Networks (DNNs) provide state-of-the-art performance in many challenges in machine learning (He et al., 2016; Wu et al., 2016). Their ability to achieve good generalization is often explained by the fact that they use very few priors about data (LeCun et al., 2015). On the other hand, their strong dependency on data may lead them to focus on biased features of the training dataset, resulting in nonrobust classification performance. In the literature, authors have been interested in studying the robustness of DNNs in various conditions. These conditions include:
• Robustness to isotropic noise, i.e., small isotropic variations of the input (Mallat, 2016), typically meaning that the network function has a small Lipschitz constant.
• Robustness to adversarial attacks, which can exploit knowledge about the network parameters or the training dataset (Szegedy et al., 2013; Goodfellow et al., 2014).
• Robustness to implementation defects, which can result in only approximately correct computations (Hubara et al., 2017).
To improve DNN robustness, three main families of solutions have been proposed in the literature. The first one involves enforcing smoothness, as measured by a Lipschitz constant, in the operators and having a minimum separation margin (Mallat, 2016). A similar approach has been proposed in (Cisse et al., 2017), where the authors restrict the function of the network to be contractive. A second class of methods uses intermediate representations obtained at various layers to perform the prediction (Papernot and McDaniel, 2018). Finally, in (Kurakin et al., 2016; Pezeshki et al., 2016; Madry et al., 2018), the authors propose to train the network using noisy inputs so that it better generalizes to this type of noise. This has been shown to improve the robustness of the network to the specific type of noise used during training, but it is not guaranteed that this robustness extends to other types of deformations. In this work, we introduce a new regularizer that does not focus on a specific type of deformation, but aims at increasing robustness in general. 
As such, the proposed regularizer can be combined with other existing methods. It is inspired by recent developments in Graph Signal Processing (GSP) (Shuman et al., 2013). GSP is a mathematical framework that extends classical Fourier analysis to complex topologies described by graphs, by introducing notions of frequency for signals defined on graphs. Thus, signals that are smooth on the graph (i.e., change slowly from one node to its neighbors) will have most of their energy concentrated in the low frequencies. The proposed regularizer is based on constructing a series of graphs, one for each layer of the DNN architecture, where each graph captures the similarity between all training examples given their intermediate representation at that layer. Our proposed regularizer penalizes large changes in the smoothness of class indicator vectors (viewed here as graph signals) from one layer to the next. As a consequence, the distances between pairs of examples in different classes are only allowed to change slowly from one layer to the next. Note that because we use deep architectures, the regularizer does not prevent the smoothness from achieving its maximum value, but constraining the size of changes from layer to layer increases the robustness of the network function by controlling the distance to the boundary region, as supported by experiments in Section 4. The outline of the paper is as follows. In Section 2 we present related work. In Section 3 we introduce the proposed regularizer. In Section 4 we evaluate the performance of our proposed method in various conditions and on vision benchmarks. Section 5 summarizes our conclusions. 2 Related work DNN robustness may refer to many different problems. In this work we are mostly interested in stability to deformations (Mallat, 2016), or noise, which can be due to the multiple factors mentioned in the introduction. Stability to deformations has been studied most extensively in the context of adversarial attacks. It has been shown that very small imperceptible changes to the input of a trained DNN can result in misclassification of the input (Szegedy et al., 2013; Goodfellow et al., 2014). These works have been pivotal in showing that DNNs may not be as robust to deformations as the test accuracy benchmarks would have led one to believe. Other works, such as (Recht et al., 2018), have shown that DNNs may also suffer from drops in performance when facing deformations that do not originate from adversarial attacks, but simply from re-sampling the test images. Multiple ways to improve robustness have been proposed in the literature. They range from the use of a model ensemble composed of k-nearest neighbors classifiers for each layer (Papernot and McDaniel, 2018), to the use of distillation as a means to protect the network (Papernot et al., 2016a). Other methods introduce regularizers (Gu and Rigazio, 2014), control the Lipschitz constant of the network function (Cisse et al., 2017) or implement multiple strategies revolving around using deformations as a data augmentation procedure during the training phase (Goodfellow et al., 2014; Kurakin et al., 2016; Moosavi Dezfooli et al., 2016). Compared to these works, our proposed method can be viewed as a regularizer that penalizes large deformations of the class boundaries throughout the network architecture, instead of focusing on a specific deformation of the input. As such, it can be combined with other mentioned strategies. 
Indeed, we demonstrate that the proposed method can be implemented in combination with (Cisse et al., 2017), resulting in a network function such that small variations to the input lead to small variations in the decision, as in (Cisse et al., 2017), while limiting the amount of change to the class boundaries. Note that our approach does not require using training data affected by a specific deformation, and our results could be further improved if such data were available for training, as shown in the Appendix. As for combining GSP and machine learning, this area has sparked interest recently. For example, the authors of (Gripon et al., 2018) show that it is possible to detect overfitting by tracking the evolution of the smoothness of a graph containing only training set examples. Another example is in (Anirudh et al., 2017), where the authors introduce different quantities related to GSP that can be used to extract interpretable results from DNNs. In (Svoboda et al., 2018) the authors exploit graph convolutional layers (Bronstein et al., 2017) to increase the robustness of the network. To the best of our knowledge, this is the first use of graph signal smoothness as a regularizer for deep neural network design. 3 Methodology 3.1 Similarity preset and postset graphs Consider a deep neural network architecture. Such a network is obtained by assembling layers of various types. Of particular interest are layers of the form x` ↦ x`+1 = h`(W`x` + b`), where h` is a nonlinear function, typically a ReLU, W` is the weight tensor at layer `, x` is the intermediate representation of the input at layer ` and b` is the corresponding bias tensor. Note that strides or pooling may be used. Assembling can be achieved in various ways: composition, concatenation, sums, etc., so that we obtain a global function f that associates an input tensor x0 to an output tensor y = f(x0). When computing the output y associated with the input x0, each layer ` of the architecture processes some input x` and computes the corresponding output y` = h`(W`x` + b`). For a given layer ` and a batch of b inputs X = {x1, . . . , xb}, we can obtain two sets X` = {x`1, . . . , x`b}, called the preset, and Y` = {y`1, . . . , y`b}, called the postset. Given a similarity measure s on tensors, from a preset we can build the similarity preset matrix: M`pre[i, j] = s(x`i, x`j), ∀1 ≤ i, j ≤ b, where M[i, j] denotes the element at row i and column j in M. The postset matrix is defined similarly. Consider a similarity (either preset or postset) matrix M`. This matrix can be used to build a k-nearest neighbor similarity weighted graph G` = 〈V, A`〉, where V = {1, . . . , b} is the set of vertices and A` is the weighted adjacency matrix defined as: A`[i, j] = M`[i, j] if M`[i, j] ∈ arg max_{i′≠j}(M`[i′, j], k) ∪ arg max_{j′≠i}(M`[i, j′], k), and A`[i, j] = 0 otherwise, ∀i, j ∈ V, (1) where arg max_i(a_i, k) denotes the indices of the k largest elements in {a1, . . . , ab}. Note that by construction A` is symmetric. 3.2 Smoothness of label signals Given a weighted graph G` = 〈V, A`〉, we call Laplacian of G` the matrix L` = D` − A`, where D` is the diagonal matrix such that: D`[i, i] = Σ_j A`[i, j], ∀i ∈ V. Because L` is symmetric and real-valued, it can be written: L` = F`Λ`F`>, (2) where F is orthonormal and contains eigenvectors of L` as columns, F> denotes the transpose of F, and Λ is diagonal and contains the eigenvalues of L` in ascending order. Note that the constant vector 1 ∈ Rb is an eigenvector of L` corresponding to eigenvalue 0. 
Moreover, all eigenvalues of L` are nonnegative. Consequently, 1/√b can be chosen as the first column in F. Consider a vector s ∈ Rb; we define ŝ, the Graph Fourier Transform (GFT) of s on G`, as (Shuman et al., 2013): ŝ = F>s. (3) Because the order of the eigenvectors is chosen so that the corresponding eigenvalues are in ascending order, if only the first few entries of ŝ are nonzero that indicates that s is low frequency (smooth). In the extreme case where only the first entry of ŝ is nonzero, s is constant (maximum smoothness). More generally, the smoothness σ`(s) of a signal s can be measured using the quadratic form of the Laplacian: σ`(s) = s>L`s = Σ_{i,j=1}^{b} A`[i, j](s[i] − s[j])² = Σ_{i=1}^{b} Λ`[i, i] ŝ[i]², (4) where we note that s is smoother when σ`(s) is smaller. In this paper we are particularly interested in the smoothness of the label signals. We call label signal sc associated with class c a binary ({0, 1}) vector whose nonzero coordinates are the ones corresponding to input vectors of class c. In other words, sc[i] = 1 ⇔ (xi is in class c), ∀1 ≤ i ≤ b. Using Equation (4), we obtain that the smoothness of the label signal sc is the sum of similarities between examples in distinct classes. Thus a smoothness of 0 means that examples in distinct classes have 0 similarity. Denote by u the last layer of the architecture: yui = yi, ∀i. Note that in typical settings, where outputs of the networks are one-hot encoded and no regularizer is used, at the end of the learning process it is expected that y>i yj ≈ 1 if i and j belong to the same class, and y>i yj ≈ 0 otherwise. Thus, assuming that cosine similarity is used to build the graph, the last layer smoothness for all c would be σupost(sc) ≈ 0, since edge weights between nodes having different labels will be close to zero given Equation (4). More generally, the smoothness of sc at the preset or postset of a given layer measures the average similarity between examples in class c and examples in other classes (σ(sc) decreases as the weights of edges connecting nodes in different classes decrease). Because the last layer can achieve σ(sc) ≈ 0, we expect the smoothness metric σ at each layer to decrease as we go deeper in the network. Next we introduce a regularization strategy that limits how much σ can decrease from one layer to the next and can even prevent the last layer from achieving σ(sc) = 0. This will be shown to improve generalization and robustness. The theoretical motivation for this choice is discussed in Section 3.4. 3.3 Proposed regularizer 3.3.1 Definition We propose to measure the deformation induced by a given layer ` in the relative positions of examples by computing the difference between label signal smoothness before and after the layer, averaged over all labels: δ`σ = |Σ_c [σ`post(sc) − σ`pre(sc)]|. (5) These quantities are used to regularize modifications made to each of the layers during the learning process. Remark 1: Since we only consider label signals, we solely depend on the similarities between examples that belong to distinct classes. In other words, the regularizer only focuses on the boundary region, and does not vary if the distance between examples of the same label grows or shrinks. This is because forcing similarities between examples of a same class to evolve slowly could prevent the network from training appropriately. Remark 2: Compared with (Cisse et al., 2017), there are three key differences that characterize the proposed regularizer:
1. Not all pairwise distances are taken into account in the regularization; only distances between examples corresponding to different classes play a role in the regularization.
2. We allow a limited amount of both contraction and dilation of the metric space. Experimental work (e.g. (Gripon et al., 2018; Papernot and McDaniel, 2018)) has shown that the evolution of metric spaces across DNN layers is complex, and thus restricting ourselves to contractions only could lead to lower overall performance.
3. The proposed criterion is an average (sum) over all distances, rather than a stricter criterion (e.g. Lipschitz), which would force each pair of vectors (xi, xj) to obey the constraint.
Illustrative example: In Figure 1 we depict a toy illustrative example to motivate the proposed regularizer. We consider here a one-dimensional two-class problem. To linearly separate circles and crosses, it is necessary to group all circles. Without regularization (setting i)), the resulting embedding is likely to increase considerably the distance between examples and the size of the boundary region between classes. In contrast, by penalizing large variations of the smoothness of label signals (setting ii)), the average distance between circles and crosses must be preserved in the embedding domain, resulting in a more precise control of distances within the boundary region. 3.4 Motivation: label signal bandwidth and powers of the Laplacian Recent work (Anis et al., 2017) develops an asymptotic analysis of the bandwidth of label signals, BW(s), where bandwidth is defined as the highest non-zero graph frequency of s, i.e., the nonzero entry of ŝ with the highest index. An estimate of the bandwidth can be obtained by computing: BWm(s) = (s>L^m s / s>s)^(1/m) (6) for large m. This can be viewed as a generalization of the smoothness metric of (4). (Anis et al., 2017) shows that, as the number of labeled points x (assumed drawn from a distribution p(x)) grows asymptotically, the bandwidth of the label signal converges in probability to the supremum of p(x) in the region of overlap between classes. This motivates our work in three ways. First, it provides theoretical justification to use σ`(s) for regularization, since lower values of σ`(s) are indicative of better separation between classes. Second, the asymptotic analysis suggests that using higher powers of the Laplacian would lead to better regularization, since estimating bandwidth using BWm(s) becomes increasingly accurate as m increases. Finally, this regularization can be seen to be protective against over-specialization by preventing σ`(s) from decreasing “too fast”. For most problems of interest, given a sufficiently large amount of labeled data, it would be reasonable to expect the bandwidth of s not to be arbitrarily small, because the classes cannot be exactly separated, and thus a network that reduces the bandwidth too much can end up biased by the training set. 3.5 Analysis of the Laplacian powers In Figure 2 we depict the Laplacian and squared Laplacian of similarity graphs obtained at different layers in a trained vanilla architecture. On the deep layers, we can clearly see blocks corresponding to the classes, while the situation in the middle layer is not as clear. This figure illustrates how using the squared Laplacian helps modify the distances to improve separation. Note that we normalize the squared Laplacian values by dividing them by the highest absolute value. 
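To make the constructions of Sections 3.1 and 3.2 concrete, a minimal NumPy sketch of the cosine-similarity k-nearest-neighbor graph of Equation (1), its Laplacian, and the label-signal smoothness s>(L`)^m s of Equations (4) and (6) could look as follows (the batch size, feature dimension and value of k are illustrative assumptions, not the settings used in our experiments):

```python
import numpy as np

def knn_similarity_graph(feats, k):
    # Cosine-similarity matrix M of the batch (Section 3.1).
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    M = normed @ normed.T
    np.fill_diagonal(M, -np.inf)                # exclude self-similarity
    A = np.zeros_like(M)
    idx = np.argsort(-M, axis=1)[:, :k]         # k most similar neighbours per node
    rows = np.repeat(np.arange(M.shape[0]), k)
    A[rows, idx.ravel()] = M[rows, idx.ravel()]
    # Keep an edge if either endpoint selected the other (union in Equation (1));
    # this simple symmetrization assumes nonnegative similarities.
    return np.maximum(A, A.T)

def label_smoothness(A, s, m=1):
    # sigma(s) = s^T L^m s with L = D - A (Equations (4) and (6)).
    L = np.diag(A.sum(axis=1)) - A
    return s @ np.linalg.matrix_power(L, m) @ s

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))              # a batch of b = 100 intermediate features
labels = rng.integers(0, 10, size=100)
A = knn_similarity_graph(feats, k=10)
s_c = (labels == 3).astype(float)               # binary label signal for class c = 3
print(label_smoothness(A, s_c, m=2))
```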
In Figure 3, we plot the average evolution of the smoothness of label signals over 100 batches, as a function of layer depth in the architecture, and for different choices of the regularizer. In the left part, we look at smoothness measures using the Laplacian. In the right part, we use the squared Laplacian. We can clearly see the effectiveness of the regularizer in enforcing small variations of smoothness across the architecture. Note that for the model regularized with L2, changes in smoothness measured by L are not easy to see. This seems to suggest that some of the gains achieved via L2 regularization come from making changes that would be “invisible” when looking at the layers from the perspective of L smoothness. The same normalization from Figure 2 is used for L2. 4 Experiments In the following paragraphs we evaluate the proposed method using various tests. We use the well known CIFAR-10 (Krizhevsky and Hinton, 2009) dataset made of tiny images. As far as the DNN is concerned, we use the same PreActResNet (He et al., 2016) architecture for all tests, with 18 layers. All inputs, including those on the test set, are normalized based on the mean and standard deviation of the images of the training set. In all figures, P are Parseval trained networks, R are networks trained with the proposed regularizer and V are vanilla networks. More details and experiments can be found in the Appendix. We depict the obtained results using box plots where data is aggregated from 10 different networks corresponding to different random seeds and batch orders. In the first experiment (left-most plot) in Figure 4, we plot the baseline accuracy of the models on the clean test set (no deformation is added at this point). These experiments agree with the claim from (Cisse et al., 2017), where the authors show that they are able to increase the performance of the network on the clean test set. We observe that our proposed method leads to a minor decrease of performance on this test. However, we see in the following experiments that this is offset by increased robustness to deformations. Such a trade-off between robustness and accuracy has already been discussed in the literature (Fawzi et al., 2018). 4.1 Isotropic deformation In this scenario we evaluate the robustness of the network function to small isotropic variations of the input. We generate 40 different deformations using random variables N(0, 0.25) which are added to the test set inputs. Note that they are scaled so that SNR ≈ 15 and SNR ≈ 20. The middle and right-most plots from Figure 4 show that the proposed method increases the robustness of the network to isotropic deformations. Note that in both scenarios the best results are achieved by combining Parseval training and our proposed method (lower-most box on both figures). 4.2 Adversarial Robustness We next evaluate robustness to adversarial inputs, which are specifically built to fool the network function. Such adversarial inputs can be generated and evaluated in multiple ways. Here we implement two approaches: first a mean case of adversarial noise, where the adversary can only use one forward and one backward pass to generate the deformations, and second a worst case scenario, where the adversary can use multiple forward and backward passes to try to find the smallest deformation that will fool the network. For the first approach, we add the scaled gradient sign (FGSM attack) to the input (Kurakin et al., 2016), so that we obtain a target SNR = 33. 
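Before turning to the results, here is a minimal sketch of one plausible way a deformation can be rescaled to a target SNR (the dB convention and the helper name are assumptions on our part, not necessarily the exact procedure used in our experiments):

```python
import numpy as np

def scale_to_snr(x, noise, target_snr_db):
    # Rescale `noise` so that 10 * log10(mean(x^2) / mean(scaled_noise^2)) = target_snr_db.
    signal_power = np.mean(x ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = signal_power / (10 ** (target_snr_db / 10.0))
    return noise * np.sqrt(target_noise_power / noise_power)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 32, 32))               # stands in for a normalized CIFAR-10 image
noise = rng.normal(0.0, 0.5, size=x.shape)     # an isotropic Gaussian deformation
x_noisy = x + scale_to_snr(x, noise, target_snr_db=15)
```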
Results are depicted in the left and center plots of Figure 5. In the left plot the noise is added after normalizing the input, whereas in the middle plot it is added before normalizing. As in the isotropic noise case, a combination of the Parseval method and our proposed approach achieves maximum robustness. Regarding the second approach, where a worst case scenario is considered, we use the Foolbox toolbox (Rauber et al., 2017) implementation of DeepFool (Moosavi Dezfooli et al., 2016). Due to time constraints we sample only 1/10 of the test set images for this test. The conclusions are similar (right plot of Figure 5) to those obtained for the first adversarial attack approach. 4.3 Implementation robustness Finally, in a third series of experiments we evaluate the robustness of the network functions to faulty implementations. As a result, approximate computations are made during the test phase that consist of random erasures of the memory (dropout) or quantization of the weights (Hubara et al., 2017). In the dropout case, we compute the test set accuracy when the network has a probability of either 25% or 40% of dropping a neuron’s value after each block. We run each experiment 40 times. The results are depicted in the left and center plots of Figure 6. It is interesting to note that the Parseval trained functions seem to drop in performance as soon as we reach 40% probability of dropout, providing an average accuracy lower than the vanilla networks. In contrast, the proposed method is the most robust to these perturbations. For the quantization of the weights, we consider a scenario where the network size in memory has to be shrunk by a factor of 6. We therefore quantize the weights of the networks to 5 bits (instead of 32) and re-evaluate the test set accuracy. The right plot of Figure 6 shows that the proposed method provides better robustness to this kind of deformation than the tested counterparts. 5 Conclusion In this paper we have introduced a new regularizer that enforces small variations of the smoothness of label signals on similarity graphs obtained at intermediate layers of a deep neural network architecture. We have empirically shown with our tests that it can lead to improved robustness in various conditions compared to existing counterparts. We also demonstrated that combining the proposed regularizer with existing methods can result in even better robustness for some conditions. Future work includes a more systematic study of the effectiveness of the method with regards to other datasets, models and deformations. Recent works have shown that adversarial noise is partially transferable between models and datasets (Moosavi-Dezfooli et al., 2017; Papernot et al., 2016b), and therefore we are confident about the generality of the method in terms of models and datasets. One possible extension of the proposed method is to use it in a fine-tuning stage, combined with different techniques already established in the literature. An extension using a combination of input barycenter and class barycenter signals instead of the class signal could be interesting, as that would be comparable to (Zhang et al., 2017). In the same vein, using random signals could be beneficial for semi-supervised or unsupervised learning challenges. A Parseval Training and implementation We compare our results with those obtained using the method described in (Cisse et al., 2017). 
There are three modifications to the normal training procedure: orthogonality constraint, convolutional renormalization and convexity constraint. For the orthogonality constraint we enforce Parseval tightness (Kovačević and Chebira, 2008) as a layer-wise regularizer: Rβ(W`) = (β/2) ‖W`>W` − I‖²₂, (7) where W` is the weight tensor at layer `. This function can be approximately optimized with gradient descent by performing the operation: W` ← (1 + β)W` − βW`W`>W`. (8) Given that our network is smaller, we can apply the optimization to the entirety of W instead of 30% as in the original paper; this increases the strength of the Parseval tightness. For the convolutional renormalization, each matrix W` is reparametrized before being applied to the convolution as W`/√(2ks + 1), where ks is the kernel size. For our architecture the inputs to a layer come from either one or two different layers. In the case where the inputs come from only one layer, α, the convexity constraint parameter, is set to 1. When the inputs come from the sum of two layers we use α = 0.5 as the value for both of them, which constrains our Lipschitz constant; this is softer than the convexity constraint from the original paper. B Hyperparameters We train our networks using classical stochastic gradient descent with momentum (0.9), with a batch size of b = 100 images and using an L2-norm weight decay with a coefficient of λ = 0.0005. We train for 100 epochs. Our learning rate starts at 0.1. After half of the training (50 epochs) the learning rate decreases to 0.001. We use the mean of the difference of smoothness between successive layers in our loss function. Therefore our loss function is: L = CategoricalCrossEntropy + λ WeightDecay + γ∆, (9) where ∆ = (1/(d − 1)) Σ_{`=1}^{d} |δ`σ|. We perform experiments using various powers of the Laplacian m = 1, 2, 3, in which case the scaling coefficient γ is raised to the same power as the Laplacian. We tested multiple values of β, the Parseval tightness parameter, γ, the weight for the smoothness difference cost, and m, the power of the Laplacian. We found that the best values for this specific architecture, dataset and training scheme were: β = 0.01, γ = 0.01, m = 2, k = b. C Depiction of the network Figure 7 depicts the network used in all experiments of Sections 3 and 4. f = 64 is the filter size of the first layer of the network. Conv layers are 3x3 layers and are always preceded by batch normalization and ReLU (except for the first layer, which receives just the input). The smoothness gaps are calculated after each ReLU. D Additional experiments Given suggestions from the reviewers, we performed additional experiments to further demonstrate the capabilities of the proposed regularizer. Due to the lack of space they could not be added to the main paper. We consider the effects of the regularizer when applied to other datasets. We also consider the effects of adding adversarial data augmentation methods while minimizing the amount of other influencing factors. We first look at the results when using the same architecture as for the CIFAR-10 dataset, which inevitably results in far from state-of-the-art accuracy on CIFAR-100. Then, we perform experiments using a different architecture (namely WideResNet 28-10, with dropout) for CIFAR-100. D.1 CIFAR-10 We add two types of tests for the CIFAR-10 dataset: adversarial data augmentation during training and black-box FGSM. 
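As a side note on the Parseval training of Appendix A, a minimal NumPy sketch of the tightness update of Equation (8) might look as follows (the weight shape, number of steps and value of β are illustrative and differ from our training settings):

```python
import numpy as np

def parseval_tightness_step(W, beta):
    # One step of Equation (8): W <- (1 + beta) W - beta W W^T W.
    # This approximately descends (beta/2) * ||W^T W - I||^2 (Equation (7)),
    # pushing the columns of W towards orthonormality (a Parseval tight frame).
    return (1.0 + beta) * W - beta * W @ W.T @ W

def tightness_deviation(W):
    return np.linalg.norm(W.T @ W - np.eye(W.shape[1]))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 27))     # e.g. a flattened 3x3 conv: 64 filters, 3 input channels
print(tightness_deviation(W))                # large before the retraction steps
for _ in range(500):
    W = parseval_tightness_step(W, beta=0.05)
print(tightness_deviation(W))                # close to zero afterwards
```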
D.1.1 Tests with FGSM adversarial data augmentation In this section we consider tests adding adversarial data augmentation as suggested in (Kurakin et al., 2016). To be more precise, we use the method they advise, called "step1.1", using ε = 8/255. The results presented in the figures below are obtained by running 10 experiments with random initializations. We first perform the same tests as in Section 4. As expected, we observe in Figure 8 that training with adversarial examples helps in the case of Gaussian noise, as it adds more variation to the training set, while reducing the accuracy on the clean set. Note that combining our method with adversarial training results in the best median accuracy. Combining the three methods is less successful than expected, which could indicate that a better hyperparameter search would be needed. Considering adversarial robustness, the obtained results are depicted in Figure 9. We observe that adding FGSM adversarial training does not generalize well to other types of attack (as is readily seen in the literature, e.g. Madry et al. (2018)). Overall, the models using the proposed regularizer are the most robust. Finally, when considering implementation related perturbations, the results depicted in Figure 10 are consistent with the ones from Section 4.3, which showed that the proposed regularizer helps improve robustness. In summary, even when adding adversarial training, the proposed regularizer is either the most robust in median, or capable of improving the robustness when combined with the other methods. D.1.2 Tests with black-box FGSM To further verify that the obtained results are not only due to gradient masking, we perform tests with black-box FGSM, where the target attacked network is not the same as the source of the adversarial noise. For this test we set the SNR of FGSM to 33. We chose the network with the best performance for each of the tested methods. The results are depicted in Table 1. In our experiments, we found that the combination of our method with Parseval is the most robust to noise coming from other sources, while the noise created by both Parseval and our method did not generalize as well as the one created by Vanilla. This demonstrates that the improvements are not caused by gradient masking, but by the increased robustness of the proposed method and Parseval’s. D.1.3 Tests with PGD adversarial data augmentation Most of our adversarial tests are performed with FGSM because of its simplicity and speed, even though it has already been shown (e.g. Madry et al. (2018)) that FGSM is weak both as an attack and as a defense mechanism. Although adversarial defense is not our only target, we further stress the ability of the proposed regularizer to improve it and to combine with other methods. To this end we perform experiments against the PGD (Projected Gradient Descent) attack. PGD is an iterative version of FGSM, which runs for a maximum number of iterations it or until convergence. At each iteration it moves by a distance step in the direction of the gradient, provided it does not go further than a distance ε from the original image. Our experiments show that the proposed regularizer increases robustness against a weak PGD attack (similar epsilon as our FGSM with SNR = 33), but it is almost completely defeated by PGD with the parameters from (Madry et al., 2018). The results are depicted in Table 2. 
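For reference, a minimal PyTorch-style sketch of the PGD attack just described (iterated FGSM steps projected onto an L∞ ball of radius ε) could look as follows; the function signature, the use of cross-entropy, and the assumption of inputs in [0, 1] are ours, not necessarily those of the tooling used in our experiments:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=20):
    # Projected Gradient Descent: iterated FGSM steps, each followed by a
    # projection back onto the L-infinity ball of radius eps around the clean input x.
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # stay within the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)           # assumes images in [0, 1] (before normalization)
    return x_adv.detach()
```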
We also show that, as expected, FGSM training does not add significant robustness against the stronger PGD attack. As the proposed regularizer can be combined with FGSM defense, it is natural to also test it alongside PGD training. We use the parameters advised in (Madry et al., 2018): 7 iterations with step = 2/255 and ε = 8/255. The results depicted in Table 3 show that using our regularizer increases the robustness of networks trained with PGD. Note that Dropout and Gaussian Noise were applied ten times to each of the networks and the results are displayed as the mean test set accuracy under these perturbations. A rate of 40% was used for dropout. The PGD attack uses the following parameters: it = 20, step = 2/255, ε = 8/255. We test the generality of the method using the CIFAR-100 dataset. Results are shown in Table 4 as the mean over three different initializations. Dropout and Gaussian Noise are applied ten times to each of the networks for a total of 30 different runs. An SNR of 33 is used for FGSM, and a rate of 25% is used for dropout. Images are normalized in the same way as in the experiments with CIFAR-10. Due to time constraints we sample only 1/10 of the images from the test set for the DeepFool test. The proposed regularizer is the most robust on all categories, while Parseval has problems with the perturbations, despite yielding the best accuracy on the clean test set. The combination of the proposed regularizer and the Parseval training method is not able to reproduce the good results from the CIFAR-10 dataset. The results shown in Table 4 are obtained using an architecture that does not perform very well on the clean test set of the CIFAR-100 dataset. We thus performed additional experiments using the WideResNet 28-10 (Zagoruyko and Komodakis, 2016) architecture, and we added standard data augmentation (random crops and random horizontal flipping) and dropout with probability of 30% after the first convolution of each residual block. We train for 200 epochs, starting with a learning rate of 0.1, and divide the learning rate by 5 at epochs 60, 120 and 160. A momentum of 0.9 is used and a weight decay of 5e-4. We use the value from the Parseval paper (β = 0.0003) as in this case it provided better results than the one described in Section B. Results on the WideResNet 28-10 architecture using data augmentation are shown in Table 5. We observe that the proposed method (sometimes in combination with other methods) is still the most robust. E Impact of the proposed regularizer on the boundary We look at the impact of the proposed regularizer on the boundary region. To this end, we choose 10 pairs of points in distinct classes that are the most similar (i.e. their distance is minimal) in the input space and we look at the decision of the network function along the segment between them. The average is depicted in Figure 11. Note that the point to the left is always chosen to be the one corresponding to the decision of the network at the middle of the segment, so that the average curve is asymmetric. Interestingly, we observe that the proposed regularizer is the one for which the boundary is closest to the middle of the segments, thus supporting our claim that the proposed regularizer controls the boundary region. F Regularizer pseudo-code Below, in Algorithm 1, we describe as pseudo-code how we use the proposed regularizer to compute the loss. This function receives five inputs:
1. listactivations: the list of intermediate features right after each call of the ReLU activation function of the network. We call these intermediate features activations`, where ` represents the depth in the network;
2. y: the output of the network;
3. s: the label signal of the batch, i.e., the ground truth labels of the examples of the batch;
4. m: the power of the Laplacian for which we wish to compute the smoothness;
5. γ: the scaling coefficient of the regularizer loss.
Algorithm 1: Loss function of the regularized network
1: procedure Smoothness(activations`, s, m)
2:   A` ← pairwise cosine similarity of activations`
3:   D` ← diagonal degree matrix of A`
4:   L` ← D` − A`
5:   σ` ← Trace(sᵀ(L`)^m s)
6:   return σ`
7: procedure Loss(listactivations, y, s, m, γ)
8:   for activations` ∈ listactivations do σ` ← Smoothness(activations`, s, m)
9:   ∆ ← (1/(`max − 1)) Σ_{i=1}^{`max} |σ^i − σ^{i−1}|
10:  return CategoricalCrossEntropy(s, y) + γ^m ∆
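A runnable PyTorch transcription of Algorithm 1 might look as follows (using the full pairwise cosine-similarity matrix rather than the k-nearest-neighbor graph, and assuming s is given as a one-hot matrix; these are simplifying assumptions, not an exact reproduction of our implementation):

```python
import torch
import torch.nn.functional as F

def smoothness(activations, s, m):
    # sigma = Trace(s^T (L)^m s) for the Laplacian of the cosine-similarity graph
    # built on one batch of intermediate features (Algorithm 1, lines 1-6).
    feats = activations.flatten(1)                 # (batch, features)
    normed = F.normalize(feats, dim=1)
    A = normed @ normed.t()                        # pairwise cosine similarities
    L = torch.diag(A.sum(dim=1)) - A
    Lm = torch.linalg.matrix_power(L, m)
    return torch.trace(s.t() @ Lm @ s)

def regularized_loss(list_activations, y, s, m=2, gamma=0.01):
    # s is a (batch, num_classes) one-hot matrix whose columns are the label signals s_c.
    sigmas = [smoothness(a, s, m) for a in list_activations]
    gaps = [torch.abs(sigmas[i] - sigmas[i - 1]) for i in range(1, len(sigmas))]
    delta = torch.stack(gaps).sum() / (len(sigmas) - 1)
    return F.cross_entropy(y, s.argmax(dim=1)) + (gamma ** m) * delta
```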
1. What are the strengths and weaknesses of the paper regarding its contributions to improving neural network robustness?
2. How does the reviewer assess the clarity and quality of the paper's content, particularly in figure presentation and experimental design?
3. Are there any concerns or suggestions regarding the choice of Laplacian power and the tradeoff between robustness and performance?
4. Would it be beneficial to explore the application of the regularizer to both original and data-augmented networks?
5. What are the implications of transferring perturbations, and how might this aspect be further explored?
Review
This paper proposes the interesting addition of a graph-based regulariser to NN architectures, for improving their robustness to different perturbations or noise. The regularisation enforces smoothness on a graph built on the features at different layers of the NN system. The proposed ideas are quite interesting, and integrate nicely into NN architectures. A few paths for improvements:
- the 'optimal' choice of the power of the Laplacian, in 3.5, is not really addressed
- the figures are not presented ideally, nor in a very readable form - for example, they are rotated 90 degrees compared to classical presentations, and the plots are hardly readable
- there might exist a tradeoff between robustness and performance (accuracy) that seems to explain the proposed results (see Fawzi - Machine Learning 2018, for example)
- in 4.2, what is a mean case of adversarial noise? Also, it would be good to see the effect of the regularizer on both the 'original' network and the network trained with data augmentation. It is not clear which one is considered here, but it would be interesting to study both, actually.
- the second paragraph of the conclusion (transfer of perturbations) opens an interesting perspective, but the problem might not be as trivial as the authors seem to hint in the text.
Overall, very interesting and nice work, which might be better positioned (especially in terms of experiments) w.r.t. other recent methods that propose to improve robustness in NNs.
ICLR
Title Laplacian Networks: Bounding Indicator Function Smoothness for Neural Networks Robustness Abstract For the past few years, Deep Neural Network (DNN) robustness has become a question of paramount importance. As a matter of fact, in sensitive settings misclassification can lead to dramatic consequences. Such misclassifications are likely to occur when facing adversarial attacks, hardware failures or limitations, and imperfect signal acquisition. To address this question, authors have proposed different approaches aiming at increasing the robustness of DNNs, such as adding regularizers or training using noisy examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DNN architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. Since it is agnostic to the type of deformations that are expected when predicting with the DNN, the proposed regularizer can be combined with existing ad-hoc methods. We provide theoretical justification for this regularizer and demonstrate its effectiveness to improve robustness of DNNs on classical supervised learning vision datasets. 1 Introduction Deep Neural Networks (DNNs) provide state-of-the-art performance in many challenges in machine learning (He et al., 2016; Wu et al., 2016). Their ability to achieve good generalization is often explained by the fact they use very few priors about data (LeCun et al., 2015). On the other hand, their strong dependency on data may lead to focus on biased features of the training dataset, resulting in a nonrobust classification performance. In the literature, authors have been interested in studying the robustness of DNNs in various conditions. These conditions include: • Robustness to isotropic noise, i.e., small isotropic variations of the input (Mallat, 2016), typically meaning that the network function leads to a small Lipschitz constant. • Robustness to adversarial attacks, which can exploit knowledge about the network parameters or the training dataset (Szegedy et al., 2013; Goodfellow et al., 2014). • Robustness to implementation defects, which can result in only approximately correct computations (Hubara et al., 2017). To improve DNN robustness, three main families of solutions have been proposed in the literature. The first one involves enforcing smoothness, as measured by a Lipschitz constant, in the operators and having a minimum separation margin (Mallat, 2016). A similar approach has been proposed in (Cisse et al., 2017), where the authors restrict the function of the network to be contractive. A second class of methods use intermediate representations obtained at various layers to perform the prediction (Papernot and McDaniel, 2018). Finally, in (Kurakin et al., 2016; Pezeshki et al., 2016; Madry et al., 2018), the authors propose to train the network using noisy inputs so that it better generalizes to this type of noise. This has been shown to improve the robustness of the network to the specific type of noise used during training, but it is not guaranteed that this robustness would be extended to other types of deformations. In this work, we introduce a new regularizer that does not focus on a specific type of deformation, but aims at increasing robustness in general. 
As such, the proposed regularizer can be combined with other existing methods. It is inspired by recent developments in Graph Signal Processing (GSP) (Shuman et al., 2013). GSP is a mathematical framework that extends classical Fourier analysis to complex topologies described by graphs, by introducing notions of frequency for signals defined on graphs. Thus, signals that are smooth on the graph (i.e., change slowly from one node to its neighbors) will have most of their energy concentrated in the low frequencies. The proposed regularizer is based on constructing a series of graphs, one for each layer of the DNN architecture, where each graph captures the similarity between all training examples given their intermediate representation at that layer. Our proposed regularizer penalizes large changes in the smoothness of class indicator vectors (viewed here as graph signals) from one layer to the next. As a consequence, the distances between pairs of examples in different classes are only allowed to change slowly from one layer to the next. Note that because we use deep architectures, the regularizer does not prevent the smoothness from achieving its maximum value, but constraining the size of changes from layer to layer increases the robustness of the network function by controlling the distance to the boundary region, as supported by experiments in Section 4. The outline of the paper is as follows. In Section 2 we present related work. In Section 3 we introduce the proposed regularizer. In Section 4 we evaluate the performance of our proposed method in various conditions and on vision benchmarks. Section 5 summarizes our conclusions. 2 Related work DNN robustness may refer to many different problems. In this work we are mostly interested in the stability to deformations (Mallat, 2016), or noise, which can be due to multiple factors mentioned in the introduction. The most studied stability to deformations is in the context of adversarial attacks. It has been shown that very small imperceptible changes on the input of a trained DNN can result in missclassification of the input (Szegedy et al., 2013; Goodfellow et al., 2014). These works have been primordial to show that DNNs may not be as robust to deformations as the test accuracy benchmarks would have lead one to believe. Other works, such as (Recht et al., 2018), have shown that DNNs may also suffer from drops in performance when facing deformations that are not originated from adversarial attacks, but simply by re-sampling the test images. Multiple ways to improve robustness have been proposed in the literature. They range from the use of a model ensemble composed of k-nearest neighbors classifiers for each layer (Papernot and McDaniel, 2018), to the use of distillation as a mean to protect the network (Papernot et al., 2016a). Other methods introduce regularizers (Gu and Rigazio, 2014), control the Lipschitz constant of the network function (Cisse et al., 2017) or implement multiple strategies revolving around using deformations as a data augmentation procedure during the training phase (Goodfellow et al., 2014; Kurakin et al., 2016; Moosavi Dezfooli et al., 2016). Compared to these works, our proposed method can be viewed as a regularizer that penalizes large deformations of the class boundaries throughout the network architecture, instead of focusing on a specific deformation of the input. As such, it can be combined with other mentioned strategies. 
Indeed, we demonstrate that the proposed method can be implemented in combination with (Cisse et al., 2017), resulting in a network function such that small variations to the input lead to small variations in the decision, as in (Cisse et al., 2017), while limiting the amount of change to the class boundaries. Note that our approach does not require using training data affected by a specific deformation, and our results could be further improved if such data were available for training, as shown in the Appendix. As for combining GSP and machine learning, this area has sparked interest recently. For example, the authors of (Gripon et al., 2018) show that it is possible to detect overfitting by tracking the evolution of the smoothness of a graph containing only training set examples. Another example is in (Anirudh et al., 2017) where the authors introduce different quantities related to GSP that can be used to extract interpretable results from DNNs. In (Svoboda et al., 2018) the authors exploit graph convolutional layers (Bronstein et al., 2017) to increase the robustness of the network. To the best of our knowledge, this is the first use of graph signal smoothness as a regularizer for deep neural network design. 3 Methodology 3.1 Similarity preset and postset graphs Consider a deep neural network architecture. Such a network is obtained by assembling layers of various types. Of particular interest are layers of the form x` 7→ x`+1 = h`(W`x` + b`), where h` is a nonlinear function, typically a ReLU, W` is the weight tensor at layer `, x` is the intermediate representation of the input at layer ` and b` is the corresponding bias tensor. Note that strides or pooling may be used. Assembling can be achieved in various ways: composition, concatenation, sums. . . so that we obtain a global function f that associates an input tensor x0 to an output tensor y = f(x0). When computing the output y associated with the input x0, each layer ` of the architecture processes some input x` and computes the corresponding output y` = h`(W`x` + b`). For a given layer ` and a batch of b inputs X = {x1, . . . ,xb}, we can obtain two sets X ` = {x`1, . . . ,x`b}, called the preset, and Y` = {y`1, . . . ,y`b}, called the postset. Given a similarity measure s on tensors, from a preset we can build the similarity preset matrix: M`pre[i, j] = s(x`i ,x`j),∀1 ≤ i, j ≤ b, where M[i, j] denotes the element at line i and column j in M. The postset matrix is defined similarly. Consider a similarity (either preset or postset) matrix M`. This matrix can be used to build a k-nearest neighbor similarity weighted graph G` = 〈V,A`〉, where V = {1, . . . , b} is the set of vertices and A` is the weighted adjacency matrix defined as: A`[i, j] = M `[i, j] if M`[i, j] ∈ arg maxi′ 6=j (M`[i′, j], k)⋃ arg maxj′ 6=i (M `[i, j′], k) 0 otherwise ,∀i, j ∈ V, (1) where arg maxi(ai, k) denotes the indices of the k largest elements in {a1, . . . , ab}. Note that by construction A` is symmetric. 3.2 Smoothness of label signals Given a weighted graph G` = 〈V,A`〉, we call Laplacian of G` the matrix L` = D` −A`, where D` is the diagonal matrix such that: D`[i, i] = ∑ j A `[i, j],∀i ∈ V . Because L` is symmetric and real-valued, it can be written: L` = F`Λ`F`>, (2) where F is orthonormal and contains eigenvectors of L` as columns, F> denotes the transpose of F, and Λ is diagonal and contains eigenvalues of L` is ascending order. Note that the constant vector 1 ∈ Rb is an eigenvector of L` corresponding to eigenvalue 0. 
Moreover, all eigenvalues of L` are nonnegative. Consequently, 1/ √ n can be chosen as the first column in F. Consider a vector s ∈ Rb, we define ŝ the Graph Fourier Transform (GFT) of s on G` as (Shuman et al., 2013): ŝ = F>s. (3) Because the order of the eigenvectors is chosen so that the corresponding eigenvalues are in ascending order, if only the first few entries of ŝ are nonzero that indicates that s is low frequency (smooth). In the extreme case where only the first entry of ŝ is nonzero we have that s is constant (maximum smoothness). More generally, smoothness σ`(s) of a signal s can be measured using the quadratic form of the Laplacian: σ`(s) = s>L`s = b∑ i,j=1 A`[i, j](s[i]− s[j])2 = b∑ i=1 Λ`[i, i]ŝ[i]2, (4) where we note that s is smoother when σ`(s) is smaller. In this paper we are particularly interested in smoothness of the label signals. We call label signal sc associated with class c a binary ({0, 1}) vector whose nonzero coordinates are the ones corresponding to input vectors of class c. In other words, sc[i] = 1 ⇔ (xi is in class c),∀1 ≤ i ≤ b. Using Equation (4), we obtain that the smoothness of the label signal sc is the sum of similarities between examples in distinct classes. Thus a smoothness of 0 means that examples in distinct classes have 0 similarity. Denote u the last layer of the architecture: yui = yi,∀i. Note that in typical settings, where outputs of the networks are one-hot-bit encoded and no regularizer is used, at the end of the learning process it is expected that y>i yj ≈ 1 if i and j belong to the same class, and y>i yj ≈ 0 otherwise. Thus, assuming that cosine similarity is used to build the graph, the last layer smoothness for all c would be σupost(sc) ≈ 0, since edge weights between nodes having different labels will be close to zero given Equation (4). More generally, smoothness of sc at the preset or postset of a given layer measures the average similarity between examples in class c and examples in other classes (σ(sc) decreases as the weights of edges connecting nodes in different classes decrease). Because the last layer can achieve σ(sc) ≈ 0, we expect the smoothness metric σ at each layer to decrease as we go deeper in the network. Next we introduce a regularization strategy that limits how much σ can decrease from one layer to the next and can even prevent the last layer from achieving σ(sc) = 0. This will be shown to improve generalization and robustness. The theoretical motivation for this choice is discussed in Section 3.4. 3.3 Proposed regularizer 3.3.1 Definition We propose to measure the deformation induced by a given layer ` in the relative positions of examples by computing the difference between label signal smoothness before and after the layer, averaged over all labels: δ`σ = ∣∣∣∣∣∑ c [ σ`post(sc)− σ`pre(sc) ]∣∣∣∣∣ . (5) These quantities are used to regularize modifications made to each of the layers during the learning process. Remark 1: Since we only consider label signals, we solely depend on the similarities between examples that belong to distinct classes. In other words, the regularizer only focuses on the boundary region, and does not vary if the distance between examples of the same label grows or shrinks. This is because forcing similarities between examples of a same class to evolve slowly could prevent the network to train appropriately. Remark 2: Compared with (Cisse et al., 2017), there are three key differences that characterize the proposed regularizer: 1. 
Not all pairwise distances are taken into account in the regularization; only distances between examples corresponding to different classes play a role in the regularization. 2. We allow a limited amount of both contraction and dilation of the metric space. Experimental work (e.g. (Gripon et al., 2018; Papernot and McDaniel, 2018)) has Initial problem: Class domains boundary shown that the evolution of metric spaces across DNN layers is complex, and thus restricting ourselves to contractions only could lead to lower overall performance. 3. The proposed criterion is an average (sum) over all distances, rather than a stricter criterion (e.g. Lipschitz), which would force each pair of vectors (xi,xj) to obey the constraint. Illustrative example: In Figure 1 we depict a toy illustrative example to motivate the proposed regularizer. We consider here a one-dimensional two-class problem. To linearly separate circles and crosses, it is necessary to group all circles. Without regularization (setting i)), the resulting embedding is likely to increase considerably the distance between examples and the size of the boundary region between classes. In contrast, by penalizing large variations of the smoothness of label signals (setting ii)), the average distance between circles and crosses must be preserved in the embedding domain, resulting in a more precise control of distances within the boundary region. 3.4 Motivation: label signal bandwidth and powers of the Laplacian Recent work (Anis et al., 2017) develops an asymptotic analysis of the bandwidth of label signals, BW (s), where bandwidth is defined as the highest non-zero graph frequency of s, i.e., the nonzero entry of ŝ with the highest index. An estimate of the bandwidth can be obtained by computing: BWm(s) = ( s>Lms s>s )(1/m) (6) for large m. This can be viewed as a generalization of the smoothness metric of (4). (Anis et al., 2017) shows that, as the number of labeled points x (assumed drawn from a distribution p(x)) grows asymptotically, the bandwidth of the label signal converges in probability to the supremum of p(x) in the region of overlap between classes. This motivates our work in three ways. First, it provides theoretical justification to use σ`(s) for regularization, since lower values of σ`(s) are indicative of better separation between classes. Second, the asymptotic analysis suggests that using higher powers of the Laplacian would lead to better regularization, since estimating bandwidth using BWm(s) becomes increasingly accurate as m increases. Finally, this regularization can be seen to be protective against specializing by preventing σ`(s) from decreasing “too fast”. For most problems of interest, given a sufficiently large amount of labeled data available, it would be reasonable to expect the bandwidth of s not to be arbitrarily small, because the classes cannot be exactly separated, and thus a network that reduces the bandwidth too much can result in being biased by the training set. 3.5 Analysis of the Laplacian powers In Figure 2 we depict the Laplacian and squared Laplacian of similarity graphs obtained at different layers in a trained vanilla architecture. On the deep layers, we can clearly see blocks corresponding to the classes, while the situation in the middle layer is not as clear. This figure illustrates how using the squared Laplacian helps modifying the distances to improve separation. Note that we normalize the squared Laplacian values by dividing them by the highest absolute value. 
In Figure 3, we plot the average evolution of smoothness of label signals over 100 batches, as a function of layer depth in the architecture, and for different choices of the regularizer. In the left part, we look at smoothness measures using the Laplacian. In the right part, we use the squared Laplacian. We can clearly see the effectiveness of the regularizer in enforcing small variations of smoothness across the architecture. Note that for model regularized with L2, changes in smoothness measured by L are not easy to see. This seems to suggest that some of the gains achieved via L2 regularization come in making changes that would be “invisible” when looking at the layers from the perspective of L smoothness. The same normalization from Figure 2 is used for L2. 4 Experiments In the following paragraphs we evaluate the proposed method using various tests. We use the well known CIFAR-10 (Krizhevsky and Hinton, 2009) dataset made of tiny images. As far as the DNN is concerned, we use the same PreActResNet (He et al., 2016) architecture for all tests, with 18 layers. All inputs, including those on the test set, are normalized based on the mean and standard deviation of the images of the training set. In all figures, P are SNR ≈ ∞ SNR ≈ 20 SNR ≈ 15 Parseval trained networks, R are networks trained with the proposed regularizer and V are vanilla networks. More details and experiments can be found at the Appendix. We depict the obtained results using box plots where data is aggregated from 10 different networks corresponding to different random seeds and batch orders. In the first experiment (left most plot) in Figure 4, we plot the baseline accuracy of the models on the clean test set (no deformation is added at this point). These experiments agree with the claim from (Cisse et al., 2017) where the authors show that they are able to increase the performance of the network on the clean test set. We observe that our proposed method leads to a minor decrease of performance on this test. However, we see in the following experiments that this is mitigated with increased robustness to deformations. Such a trade-off between robustness and accuracy has already been discussed in the literature (Fawzi et al., 2018). 4.1 Isotropic deformation In this scenario we evaluate the robustness of the network function to small isotropic variations of the input. We generate 40 different deformations using random variables N (0, 0.25) which are added to the test set inputs. Note that they are scaled so that SNR ≈ 15 and SNR ≈ 20. The middle and right-most plots from Figure 4 show that the proposed method increases the robustness of the network to isotropic deformations. Note that in both scenarios the best results are achieved by combining Parseval training and our proposed method (lower-most box on both figures). 4.2 Adversarial Robustness We next evaluate robustness to adversarial inputs, which are specifically built to fool the network function. Such adversarial inputs can be generated and evaluated in multiple ways. Here we implement two approaches: first a mean case of adversarial noise, where the adversary can only use one forward and one backward pass to generate the deformations, and second a worst case scenario, where the adversary can use multiple forward and backward passes to try to find the smallest deformation that will fool the network. For the first approach, we add the scaled gradient sign (FGSM attack) on the input (Kurakin et al., 2016), so that we obtain a target SNR = 33. 
Results are depicted in the left and center plots of Figure 5. In the left plot the noise is added after normalizing the input, whereas in the middle plot it is added before normalizing. As in the isotropic noise case, a combination of the Parseval method and our proposed approach achieves maximum robustness. Regarding the second approach, where a worst-case scenario is considered, we use the Foolbox toolbox (Rauber et al., 2017) implementation of DeepFool (Moosavi Dezfooli et al., 2016). Due to time constraints we sample only 1/10 of the test set images for this test. The conclusions are similar (right plot of Figure 5) to those obtained for the first adversarial attack approach.

4.3 Implementation robustness

Finally, in a third series of experiments we evaluate the robustness of the network functions to faulty implementations. In this case, approximate computations are made during the test phase that consist of random erasures of the memory (dropout) or quantization of the weights (Hubara et al., 2017). In the dropout case, we compute the test set accuracy when the network has a probability of either 25% or 40% of dropping a neuron's value after each block. We run each experiment 40 times. The results are depicted in the left and center plots of Figure 6. It is interesting to note that the Parseval-trained functions seem to drop in performance as soon as we reach a 40% probability of dropout, yielding an average accuracy lower than that of the vanilla networks. In contrast, the proposed method is the most robust to these perturbations. For the quantization of the weights, we consider a scenario where the network size in memory has to be shrunk by a factor of 6. We therefore quantize the weights of the networks to 5 bits (instead of 32) and re-evaluate the test set accuracy. The right plot of Figure 6 shows that the proposed method provides better robustness to this kind of deformation than the tested counterparts.

5 Conclusion

In this paper we have introduced a new regularizer that enforces small variations of the smoothness of label signals on similarity graphs obtained at intermediate layers of a deep neural network architecture. We have empirically shown with our tests that it can lead to improved robustness in various conditions compared to existing counterparts. We also demonstrated that combining the proposed regularizer with existing methods can result in even better robustness for some conditions. Future work includes a more systematic study of the effectiveness of the method with regard to other datasets, models and deformations. Recent works have shown that adversarial noise is partially transferable between models and datasets (Moosavi-Dezfooli et al., 2017; Papernot et al., 2016b), and therefore we are confident about the generality of the method in terms of models and datasets. One possible extension of the proposed method is to use it in a fine-tuning stage, combined with different techniques already established in the literature. An extension using a combination of input barycenter and class barycenter signals instead of the class signal could be interesting, as that would be comparable to (Zhang et al., 2017). In the same vein, using random signals could be beneficial for semi-supervised or unsupervised learning challenges.

A Parseval Training and implementation

We compare our results with those obtained using the method described in (Cisse et al., 2017).
There are three modifications to the normal training procedure: an orthogonality constraint, convolutional renormalization, and a convexity constraint. For the orthogonality constraint we enforce Parseval tightness (Kovačević and Chebira, 2008) as a layer-wise regularizer:

$R_\beta(W^\ell) = \frac{\beta}{2} \left\| W^{\ell\top} W^\ell - I \right\|_2^2$, (7)

where W^ℓ is the weight tensor at layer ℓ. This function can be approximately optimized with gradient descent by applying the update:

$W^\ell \leftarrow (1 + \beta) W^\ell - \beta W^\ell W^{\ell\top} W^\ell$. (8)

Given that our network is smaller, we can apply the optimization to the entirety of W instead of 30% of it as in the original paper; this increases the strength of the Parseval tightness. For the convolutional renormalization, each matrix W^ℓ is reparametrized before being applied to the convolution as $W^\ell / \sqrt{2 k_s + 1}$, where k_s is the kernel size. For our architecture the inputs to a layer come from either one or two different layers. In the case where the inputs come from only one layer, the convexity constraint parameter α is set to 1. When the inputs come from the sum of two layers, we use α = 0.5 for both of them, which constrains our Lipschitz constant; this is softer than the convexity constraint from the original paper.

B Hyperparameters

We train our networks using classical stochastic gradient descent with momentum (0.9), with a batch size of b = 100 images and using an L2-norm weight decay with a coefficient of λ = 0.0005. We train for 100 epochs. Our learning rate starts at 0.1. After half of the training (50 epochs), the learning rate decreases to 0.001. We use the mean of the difference of smoothness between successive layers in our loss function. Therefore our loss function is:

$\mathcal{L} = \mathrm{CategoricalCrossEntropy} + \lambda\,\mathrm{WeightDecay} + \gamma \Delta$, (9)

where $\Delta = \frac{1}{d-1} \sum_{\ell=1}^{d} |\delta^\ell_\sigma|$. We perform experiments using various powers of the Laplacian, m = 1, 2, 3, in which case the scaling coefficient γ is raised to the same power as the Laplacian. We tested multiple values of β, the Parseval tightness parameter, γ, the weight for the smoothness difference cost, and m, the power of the Laplacian. We found that the best values for this specific architecture, dataset and training scheme were: β = 0.01, γ = 0.01, m = 2, k = b.

C Depiction of the network

Figure 7 depicts the network used in all experiments of Sections 3 and 4. f = 64 is the number of filters in the first layer of the network. Conv layers are 3x3 layers and are always preceded by batch normalization and ReLU (except for the first layer, which receives just the input). The smoothness gaps are calculated after each ReLU.

D Additional experiments

Given suggestions from the reviewers, we performed additional experiments to further demonstrate the capabilities of the proposed regularizer. Due to the lack of space they could not be added to the main paper. We consider the effects of the regularizer when applied to another dataset. We also consider the effects of adding adversarial data augmentation methods while minimizing the amount of other influencing factors. We first look at the results when using the same architecture as for the CIFAR-10 dataset, which inevitably results in far from state-of-the-art accuracy on CIFAR-100. Then, we perform experiments using a different architecture (namely WideResNet 28-10, with dropout) for CIFAR-100.

D.1 CIFAR-10

We add two types of tests for the CIFAR-10 dataset: adversarial data augmentation during training and black-box FGSM.
D.1.1 Tests with FGSM adversarial data augmentation

In this section we consider tests adding adversarial data augmentation as suggested in (Kurakin et al., 2016). To be more precise, we use the method they advise, which is called "step1.1", using ε = 8/255. The results presented in the figures below are obtained by running 10 experiments with random initializations. We first perform the same tests as in Section 4. As expected, we observe in Figure 8 that training with adversarial examples helps in the case of Gaussian noise, as it adds more variation to the training set, while reducing the accuracy on the clean set. Note that combining our method with adversarial training results in the best median accuracy. Combining the three methods is less successful than expected, which could indicate that a better hyperparameter search would be needed. Considering adversarial robustness, the obtained results are depicted in Figure 9. We observe that adding FGSM adversarial training does not generalize well to other types of attack (which is consistent with the literature, e.g., Madry et al. (2018)). Overall, the models using the proposed regularizer are the most robust. Finally, when considering implementation-related perturbations, the results depicted in Figure 10 are consistent with the ones from Section 4.3, which show that the proposed regularizer helps improve robustness. In summary, even when adding adversarial training, the proposed regularizer is either the most robust in median, or capable of improving the robustness when combined with the other methods.

D.1.2 Tests with black-box FGSM

To further verify that the obtained results are not only due to gradient masking, we perform tests with black-box FGSM, where the attacked target network is not the same as the source of the adversarial noise. For this test we set the SNR of FGSM to 33. We chose the network with the best performance for each of the tested methods. The results are depicted in Table 1. In our experiments, we found that the combination of our method with Parseval is the most robust to noise coming from other sources, while the noise created by both Parseval and our method did not generalize as well as the one created by Vanilla. This demonstrates that the improvements are not caused by gradient masking, but by the increased robustness of the proposed method and Parseval's.

D.1.3 Tests with PGD adversarial data augmentation

Most of our adversarial tests are performed with FGSM because of its simplicity and speed, even though it has already been shown (e.g., Madry et al. (2018)) that FGSM is weak both as an attack and as a defense mechanism. Although we do not only target adversarial defense, we further stress the ability of the proposed regularizer to improve it and to combine with other methods. To this end we perform experiments against the PGD (Projected Gradient Descent) attack. PGD is an iterative version of FGSM, which runs for a maximum number of iterations it or until convergence. At each iteration it moves by a distance of step in the direction of the gradient, provided it does not go farther than ε from the original image. Our experiments show that the proposed regularizer increases robustness against a weak PGD attack (similar epsilon as our FGSM with SNR = 33), but it is almost completely defeated by PGD with the parameters from (Madry et al., 2018). The results are depicted in Table 2.
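To make the attack description concrete, a minimal PyTorch-style sketch of the L∞ PGD loop described above might look as follows. The [0, 1] clamping and the exact projection details are our assumptions (the paper's inputs are mean/std-normalized, so the valid-range step would need adapting), and the function names are ours.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=20):
    """L-infinity PGD: iterated FGSM steps projected back onto the eps-ball around x."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()       # gradient-sign step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # assumes inputs in [0, 1]
    return x_adv.detach()
```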
We also show that, as expected, FGSM training does not add significant robustness against the stronger PGD attack. As the proposed regularizer can be combined with the FGSM defense, it is natural to also test it alongside PGD training. We use the parameters advised in (Madry et al., 2018): 7 iterations with step = 2/255 and ε = 8/255. The results depicted in Table 3 show that using our regularizer increases the robustness of networks trained with PGD. Note that Dropout and Gaussian Noise were applied ten times to each of the networks and the results are displayed as the mean test set accuracy under these perturbations. A rate of 40% was used for dropout. The PGD attack uses the following parameters: it = 20, step = 2/255, ε = 8/255.

We test the generality of the method using the CIFAR-100 dataset. Results are shown in Table 4 as the mean over three different initializations. Dropout and Gaussian Noise are applied ten times to each of the networks for a total of 30 different runs. An SNR of 33 is used for FGSM, and a rate of 25% is used for dropout. Images are normalized in the same way as in the experiments with CIFAR-10. Due to time constraints we sample only 1/10 of the images from the test set for the DeepFool test. The proposed regularizer is the most robust in all categories, while Parseval has problems with the perturbations, despite yielding the best accuracy on the clean test set. The combination of the proposed regularizer and the Parseval training method is not able to reproduce the good results from the CIFAR-10 dataset. The results shown in Table 4 are obtained using an architecture that does not perform very well on the clean test set for the CIFAR-100 dataset. We thus performed additional experiments using the WideResNet 28-10 (Zagoruyko and Komodakis, 2016) architecture, and we added standard data augmentation (random crops and random horizontal flipping) and dropout with a probability of 30% after the first convolution of each residual block. We train for 200 epochs, starting with a learning rate of 0.1, and divide the learning rate by 5 at epochs 60, 120 and 160. A momentum of 0.9 is used and a weight decay of 5e-4. We use the value from the Parseval paper (β = 0.0003), as in this case it provided better results than the one described in Section B. Results on the WideResNet 28-10 architecture using data augmentation are shown in Table 5. We observe that the proposed method (sometimes in combination with other methods) is still the most robust.

E Impact of the proposed regularizer on the boundary

We look at the impact of the proposed regularizer on the boundary region. To this end, we choose 10 pairs of points in distinct classes that are the most similar (i.e. their distance is minimal) in the input space, and we look at the decision of the network function along the segment between them. The average is depicted in Figure 11. Note that the point to the left is always chosen to be the one corresponding to the decision of the network at the middle of the segment, so that the average curve is asymmetric. Interestingly, we observe that the proposed regularizer is the one for which the boundary is closest to the middle of the segments, thus proving our claim that the proposed regularizer controls the boundary region.

F Regularizer pseudo-code

Below, in Algorithm 1, we describe as pseudo-code how we use the proposed regularizer to compute the loss. This function receives five inputs:
1. listactivations: the list of the intermediate features right after each call of the ReLU activation function of the network. We call these intermediate features activations^ℓ, where ℓ represents the depth in the network;
2. y: the output of the network;
3. s: the label signal of the batch, i.e., the ground-truth labels of the examples of the batch;
4. m: the power of the Laplacian for which we wish to compute the smoothness;
5. γ: the scaling coefficient of the regularizer loss.

Algorithm 1: Loss function of the regularized network
1: procedure Smoothness(activations^ℓ, s, m)
2:     A^ℓ ← pairwise cosine similarity of activations^ℓ
3:     D^ℓ ← diagonal degree matrix of A^ℓ
4:     L^ℓ ← D^ℓ − A^ℓ
5:     σ^ℓ ← Trace(sᵀ (L^ℓ)^m s)
6:     return σ^ℓ
7: procedure Loss(listactivations, y, s, m, γ)
8:     for activations^ℓ ∈ listactivations do σ^ℓ ← Smoothness(activations^ℓ, s, m)
9:     ∆ ← (∑_{i=1}^{ℓ_max} |σ^i − σ^{i−1}|) / (ℓ_max − 1)
10:    return CategoricalCrossEntropy(s, y) + γ^m ∆
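A possible PyTorch rendering of Algorithm 1 is sketched below. It uses a dense cosine-similarity graph (the k-NN sparsification of Section 3.1 is omitted for brevity), assumes each element of listactivations is a tensor of shape (batch, ...) and that s is a one-hot label matrix of shape (batch, classes); the reading of line 9 as differences between consecutive layers is ours. Function names are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def smoothness(acts, s, m):
    """sigma^l = Trace(s^T (L^l)^m s) for one layer's activations."""
    feats = F.normalize(acts.flatten(1), dim=1)
    A = feats @ feats.t()                       # pairwise cosine similarities
    A.fill_diagonal_(0.0)                       # no self-loops
    L = torch.diag(A.sum(dim=1)) - A            # graph Laplacian D - A
    Lm = torch.matrix_power(L, m)
    return torch.trace(s.t() @ Lm @ s)

def regularized_loss(list_activations, y, s, m=2, gamma=0.01):
    """CategoricalCrossEntropy + gamma^m * mean |sigma^l - sigma^{l-1}|."""
    sigmas = [smoothness(a, s, m) for a in list_activations]
    gaps = [torch.abs(sigmas[i] - sigmas[i - 1]) for i in range(1, len(sigmas))]
    delta = torch.stack(gaps).sum() / (len(sigmas) - 1)
    ce = F.cross_entropy(y, s.argmax(dim=1))    # y: logits, s: one-hot labels
    return ce + (gamma ** m) * delta
```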
1. What is the main contribution of the paper regarding the use of graph regularization for neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its practicality and algorithmic perspective? 3. What are the weaknesses of the paper, especially regarding the experimental results and the relationship between the proposed regularization and robustness to adversarial examples? 4. How does the reviewer assess the significance of the results, particularly for "implementation robustness"? 5. What are some additional questions or comments regarding the paper's content, such as the potential impact of constructing subgraphs on minibatches only or the need for further experiments to support the method's effectiveness in preventing overfitting?
Review
Review
The paper proposes to use a regularization which preserves nearest-neighbor smoothness from layer to layer. The approach is based on controlling the extent to which examples from different classes are separated from one layer to the next, in deep neural networks. The criterion computes the smoothness of the label vectors (one-hot encodings of class labels) along the nearest-neighbor graph constructed from the Euclidean distances on a given layer's activations. From an algorithmic perspective, the regularization is applied by considering distance graphs on minibatches. Experiments on CIFAR-10 show that the method improves the robustness of the neural networks to different types of perturbations (perturbations of the input, aka adversarial examples, and quantization of the network weights / dropout). The main contribution of the article is to apply concepts of graph regularization to the robustness of neural networks. The experimental evaluation is solid but the significance is unclear (error bars have rather large intersections), and there is a single dataset. While the overall concept of graph regularization is appealing, the exact relationship between the proposed regularization and robustness to adversarial examples is unclear. There does not seem to be any proof that adversarial examples are supposed to be classified better by keeping the smoothness of class indicators similar from layer to layer. Section 3.4 seems to motivate the use of the smoothness from the perspective of preventing overfitting. However, I'm not sure how adversarial examples and the other forms of perturbations considered in the experiments (e.g., weight quantization) are related to overfitting.
Strengths:
- practical proposal to use graph regularization for neural network regularization
- the proposal to construct graphs based on the current batch makes sense from an algorithmic point of view
Cons:
- experimental results are a bit weak -- the most significant results seem to be obtained for "implementation robustness", but it is unclear why the proposed approach should be particularly good for this setting since the theoretical motivation is to prevent overfitting. The results vs Parseval regularization and the indications that the method works well with Parseval regularization are a plus, but the differences on adversarial examples are tiny.
Other questions/comments:
- how much is lost by constructing subgraphs on minibatches only?
- are there experiments (e.g., on smaller datasets) that would show that the proposed method indeed regularizes and prevents overfitting as motivated in Section 3.4?
ICLR
Title Laplacian Networks: Bounding Indicator Function Smoothness for Neural Networks Robustness Abstract For the past few years, Deep Neural Network (DNN) robustness has become a question of paramount importance. As a matter of fact, in sensitive settings misclassification can lead to dramatic consequences. Such misclassifications are likely to occur when facing adversarial attacks, hardware failures or limitations, and imperfect signal acquisition. To address this question, authors have proposed different approaches aiming at increasing the robustness of DNNs, such as adding regularizers or training using noisy examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DNN architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. Since it is agnostic to the type of deformations that are expected when predicting with the DNN, the proposed regularizer can be combined with existing ad-hoc methods. We provide theoretical justification for this regularizer and demonstrate its effectiveness to improve robustness of DNNs on classical supervised learning vision datasets. 1 Introduction Deep Neural Networks (DNNs) provide state-of-the-art performance in many challenges in machine learning (He et al., 2016; Wu et al., 2016). Their ability to achieve good generalization is often explained by the fact they use very few priors about data (LeCun et al., 2015). On the other hand, their strong dependency on data may lead to focus on biased features of the training dataset, resulting in a nonrobust classification performance. In the literature, authors have been interested in studying the robustness of DNNs in various conditions. These conditions include: • Robustness to isotropic noise, i.e., small isotropic variations of the input (Mallat, 2016), typically meaning that the network function leads to a small Lipschitz constant. • Robustness to adversarial attacks, which can exploit knowledge about the network parameters or the training dataset (Szegedy et al., 2013; Goodfellow et al., 2014). • Robustness to implementation defects, which can result in only approximately correct computations (Hubara et al., 2017). To improve DNN robustness, three main families of solutions have been proposed in the literature. The first one involves enforcing smoothness, as measured by a Lipschitz constant, in the operators and having a minimum separation margin (Mallat, 2016). A similar approach has been proposed in (Cisse et al., 2017), where the authors restrict the function of the network to be contractive. A second class of methods use intermediate representations obtained at various layers to perform the prediction (Papernot and McDaniel, 2018). Finally, in (Kurakin et al., 2016; Pezeshki et al., 2016; Madry et al., 2018), the authors propose to train the network using noisy inputs so that it better generalizes to this type of noise. This has been shown to improve the robustness of the network to the specific type of noise used during training, but it is not guaranteed that this robustness would be extended to other types of deformations. In this work, we introduce a new regularizer that does not focus on a specific type of deformation, but aims at increasing robustness in general. 
As such, the proposed regularizer can be combined with other existing methods. It is inspired by recent developments in Graph Signal Processing (GSP) (Shuman et al., 2013). GSP is a mathematical framework that extends classical Fourier analysis to complex topologies described by graphs, by introducing notions of frequency for signals defined on graphs. Thus, signals that are smooth on the graph (i.e., change slowly from one node to its neighbors) will have most of their energy concentrated in the low frequencies. The proposed regularizer is based on constructing a series of graphs, one for each layer of the DNN architecture, where each graph captures the similarity between all training examples given their intermediate representation at that layer. Our proposed regularizer penalizes large changes in the smoothness of class indicator vectors (viewed here as graph signals) from one layer to the next. As a consequence, the distances between pairs of examples in different classes are only allowed to change slowly from one layer to the next. Note that because we use deep architectures, the regularizer does not prevent the smoothness from achieving its maximum value, but constraining the size of changes from layer to layer increases the robustness of the network function by controlling the distance to the boundary region, as supported by experiments in Section 4. The outline of the paper is as follows. In Section 2 we present related work. In Section 3 we introduce the proposed regularizer. In Section 4 we evaluate the performance of our proposed method in various conditions and on vision benchmarks. Section 5 summarizes our conclusions. 2 Related work DNN robustness may refer to many different problems. In this work we are mostly interested in the stability to deformations (Mallat, 2016), or noise, which can be due to multiple factors mentioned in the introduction. The most studied stability to deformations is in the context of adversarial attacks. It has been shown that very small imperceptible changes on the input of a trained DNN can result in missclassification of the input (Szegedy et al., 2013; Goodfellow et al., 2014). These works have been primordial to show that DNNs may not be as robust to deformations as the test accuracy benchmarks would have lead one to believe. Other works, such as (Recht et al., 2018), have shown that DNNs may also suffer from drops in performance when facing deformations that are not originated from adversarial attacks, but simply by re-sampling the test images. Multiple ways to improve robustness have been proposed in the literature. They range from the use of a model ensemble composed of k-nearest neighbors classifiers for each layer (Papernot and McDaniel, 2018), to the use of distillation as a mean to protect the network (Papernot et al., 2016a). Other methods introduce regularizers (Gu and Rigazio, 2014), control the Lipschitz constant of the network function (Cisse et al., 2017) or implement multiple strategies revolving around using deformations as a data augmentation procedure during the training phase (Goodfellow et al., 2014; Kurakin et al., 2016; Moosavi Dezfooli et al., 2016). Compared to these works, our proposed method can be viewed as a regularizer that penalizes large deformations of the class boundaries throughout the network architecture, instead of focusing on a specific deformation of the input. As such, it can be combined with other mentioned strategies. 
Indeed, we demonstrate that the proposed method can be implemented in combination with (Cisse et al., 2017), resulting in a network function such that small variations to the input lead to small variations in the decision, as in (Cisse et al., 2017), while limiting the amount of change to the class boundaries. Note that our approach does not require using training data affected by a specific deformation, and our results could be further improved if such data were available for training, as shown in the Appendix. As for combining GSP and machine learning, this area has sparked interest recently. For example, the authors of (Gripon et al., 2018) show that it is possible to detect overfitting by tracking the evolution of the smoothness of a graph containing only training set examples. Another example is (Anirudh et al., 2017), where the authors introduce different quantities related to GSP that can be used to extract interpretable results from DNNs. In (Svoboda et al., 2018) the authors exploit graph convolutional layers (Bronstein et al., 2017) to increase the robustness of the network. To the best of our knowledge, this is the first use of graph signal smoothness as a regularizer for deep neural network design.

3 Methodology

3.1 Similarity preset and postset graphs

Consider a deep neural network architecture. Such a network is obtained by assembling layers of various types. Of particular interest are layers of the form $x^\ell \mapsto x^{\ell+1} = h^\ell(W^\ell x^\ell + b^\ell)$, where h^ℓ is a nonlinear function, typically a ReLU, W^ℓ is the weight tensor at layer ℓ, x^ℓ is the intermediate representation of the input at layer ℓ, and b^ℓ is the corresponding bias tensor. Note that strides or pooling may be used. Assembling can be achieved in various ways (composition, concatenation, sums, ...) so that we obtain a global function f that associates an input tensor x^0 with an output tensor y = f(x^0). When computing the output y associated with the input x^0, each layer ℓ of the architecture processes some input x^ℓ and computes the corresponding output y^ℓ = h^ℓ(W^ℓ x^ℓ + b^ℓ). For a given layer ℓ and a batch of b inputs X = {x_1, ..., x_b}, we can obtain two sets: X^ℓ = {x^ℓ_1, ..., x^ℓ_b}, called the preset, and Y^ℓ = {y^ℓ_1, ..., y^ℓ_b}, called the postset. Given a similarity measure s on tensors, from a preset we can build the similarity preset matrix $M^\ell_{pre}[i,j] = s(x^\ell_i, x^\ell_j), \forall 1 \le i, j \le b$, where M[i, j] denotes the element at row i and column j of M. The postset matrix is defined similarly. Consider a similarity (either preset or postset) matrix M^ℓ. This matrix can be used to build a k-nearest-neighbor similarity weighted graph G^ℓ = ⟨V, A^ℓ⟩, where V = {1, ..., b} is the set of vertices and A^ℓ is the weighted adjacency matrix defined as:

$A^\ell[i,j] = \begin{cases} M^\ell[i,j] & \text{if } M^\ell[i,j] \in \arg\max_{i' \neq j}(M^\ell[i',j], k) \,\cup\, \arg\max_{j' \neq i}(M^\ell[i,j'], k) \\ 0 & \text{otherwise} \end{cases}, \quad \forall i, j \in V,$ (1)

where $\arg\max_i(a_i, k)$ denotes the indices of the k largest elements in {a_1, ..., a_b}. Note that by construction A^ℓ is symmetric.

3.2 Smoothness of label signals

Given a weighted graph G^ℓ = ⟨V, A^ℓ⟩, we call the Laplacian of G^ℓ the matrix L^ℓ = D^ℓ − A^ℓ, where D^ℓ is the diagonal matrix such that $D^\ell[i,i] = \sum_j A^\ell[i,j], \forall i \in V$. Because L^ℓ is symmetric and real-valued, it can be written as

$L^\ell = F^\ell \Lambda^\ell F^{\ell\top}$, (2)

where F^ℓ is orthonormal and contains the eigenvectors of L^ℓ as columns, F^⊤ denotes the transpose of F, and Λ^ℓ is diagonal and contains the eigenvalues of L^ℓ in ascending order. Note that the constant vector 1 ∈ R^b is an eigenvector of L^ℓ corresponding to eigenvalue 0.
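A small NumPy sketch of the graph construction in Eq. (1) is given below: for each node, the k most similar neighbors are kept, and an edge survives if it was selected from either of its endpoints, which keeps A symmetric for a symmetric similarity matrix. Function and variable names are ours; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def knn_similarity_graph(M, k):
    """Symmetric k-NN weighted adjacency built from a similarity matrix M (sketch of Eq. (1))."""
    b = M.shape[0]
    selected = np.zeros((b, b), dtype=bool)
    for j in range(b):
        col = M[:, j].copy()
        col[j] = -np.inf                       # exclude the self-similarity M[j, j]
        nn = np.argpartition(col, -k)[-k:]     # indices of the k largest entries of column j
        selected[nn, j] = True
    selected |= selected.T                     # keep an edge if selected from either endpoint
    A = np.where(selected, M, 0.0)
    np.fill_diagonal(A, 0.0)
    return A
```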
Moreover, all eigenvalues of L^ℓ are nonnegative. Consequently, the normalized constant vector 1/√b · 1 can be chosen as the first column of F. Given a vector s ∈ R^b, we define ŝ, the Graph Fourier Transform (GFT) of s on G^ℓ, as (Shuman et al., 2013):

$\hat{s} = F^\top s$. (3)

Because the order of the eigenvectors is chosen so that the corresponding eigenvalues are in ascending order, if only the first few entries of ŝ are nonzero, that indicates that s is low frequency (smooth). In the extreme case where only the first entry of ŝ is nonzero, s is constant (maximum smoothness). More generally, the smoothness σ^ℓ(s) of a signal s can be measured using the quadratic form of the Laplacian:

$\sigma^\ell(s) = s^\top L^\ell s = \frac{1}{2} \sum_{i,j=1}^{b} A^\ell[i,j]\,(s[i] - s[j])^2 = \sum_{i=1}^{b} \Lambda^\ell[i,i]\, \hat{s}[i]^2$, (4)

where we note that s is smoother when σ^ℓ(s) is smaller. In this paper we are particularly interested in the smoothness of the label signals. We call the label signal s_c associated with class c a binary ({0, 1}) vector whose nonzero coordinates are the ones corresponding to input vectors of class c. In other words, s_c[i] = 1 ⇔ (x_i is in class c), ∀ 1 ≤ i ≤ b. Using Equation (4), we obtain that the smoothness of the label signal s_c is the sum of similarities between examples in distinct classes. Thus a smoothness of 0 means that examples in distinct classes have 0 similarity. Denote by u the last layer of the architecture: y^u_i = y_i, ∀i. Note that in typical settings, where the outputs of the networks are one-hot encoded and no regularizer is used, at the end of the learning process it is expected that $y_i^\top y_j \approx 1$ if i and j belong to the same class, and $y_i^\top y_j \approx 0$ otherwise. Thus, assuming that cosine similarity is used to build the graph, the last-layer smoothness for all c would be $\sigma^u_{post}(s_c) \approx 0$, since edge weights between nodes having different labels will be close to zero given Equation (4). More generally, the smoothness of s_c at the preset or postset of a given layer measures the average similarity between examples in class c and examples in other classes (σ(s_c) decreases as the weights of edges connecting nodes in different classes decrease). Because the last layer can achieve σ(s_c) ≈ 0, we expect the smoothness metric σ at each layer to decrease as we go deeper in the network. Next we introduce a regularization strategy that limits how much σ can decrease from one layer to the next, and can even prevent the last layer from achieving σ(s_c) = 0. This will be shown to improve generalization and robustness. The theoretical motivation for this choice is discussed in Section 3.4.

3.3 Proposed regularizer

3.3.1 Definition

We propose to measure the deformation induced by a given layer ℓ in the relative positions of examples by computing the difference between label signal smoothness before and after the layer, averaged over all labels:

$\delta^\ell_\sigma = \left| \sum_c \left[ \sigma^\ell_{post}(s_c) - \sigma^\ell_{pre}(s_c) \right] \right|$. (5)

These quantities are used to regularize the modifications made to each of the layers during the learning process. Remark 1: Since we only consider label signals, we solely depend on the similarities between examples that belong to distinct classes. In other words, the regularizer only focuses on the boundary region, and does not vary if the distance between examples of the same label grows or shrinks. This is because forcing similarities between examples of a same class to evolve slowly could prevent the network from training appropriately. Remark 2: Compared with (Cisse et al., 2017), there are three key differences that characterize the proposed regularizer: 1.
Not all pairwise distances are taken into account in the regularization; only distances between examples corresponding to different classes play a role in the regularization. 2. We allow a limited amount of both contraction and dilation of the metric space. Experimental work (e.g. (Gripon et al., 2018; Papernot and McDaniel, 2018)) has Initial problem: Class domains boundary shown that the evolution of metric spaces across DNN layers is complex, and thus restricting ourselves to contractions only could lead to lower overall performance. 3. The proposed criterion is an average (sum) over all distances, rather than a stricter criterion (e.g. Lipschitz), which would force each pair of vectors (xi,xj) to obey the constraint. Illustrative example: In Figure 1 we depict a toy illustrative example to motivate the proposed regularizer. We consider here a one-dimensional two-class problem. To linearly separate circles and crosses, it is necessary to group all circles. Without regularization (setting i)), the resulting embedding is likely to increase considerably the distance between examples and the size of the boundary region between classes. In contrast, by penalizing large variations of the smoothness of label signals (setting ii)), the average distance between circles and crosses must be preserved in the embedding domain, resulting in a more precise control of distances within the boundary region. 3.4 Motivation: label signal bandwidth and powers of the Laplacian Recent work (Anis et al., 2017) develops an asymptotic analysis of the bandwidth of label signals, BW (s), where bandwidth is defined as the highest non-zero graph frequency of s, i.e., the nonzero entry of ŝ with the highest index. An estimate of the bandwidth can be obtained by computing: BWm(s) = ( s>Lms s>s )(1/m) (6) for large m. This can be viewed as a generalization of the smoothness metric of (4). (Anis et al., 2017) shows that, as the number of labeled points x (assumed drawn from a distribution p(x)) grows asymptotically, the bandwidth of the label signal converges in probability to the supremum of p(x) in the region of overlap between classes. This motivates our work in three ways. First, it provides theoretical justification to use σ`(s) for regularization, since lower values of σ`(s) are indicative of better separation between classes. Second, the asymptotic analysis suggests that using higher powers of the Laplacian would lead to better regularization, since estimating bandwidth using BWm(s) becomes increasingly accurate as m increases. Finally, this regularization can be seen to be protective against specializing by preventing σ`(s) from decreasing “too fast”. For most problems of interest, given a sufficiently large amount of labeled data available, it would be reasonable to expect the bandwidth of s not to be arbitrarily small, because the classes cannot be exactly separated, and thus a network that reduces the bandwidth too much can result in being biased by the training set. 3.5 Analysis of the Laplacian powers In Figure 2 we depict the Laplacian and squared Laplacian of similarity graphs obtained at different layers in a trained vanilla architecture. On the deep layers, we can clearly see blocks corresponding to the classes, while the situation in the middle layer is not as clear. This figure illustrates how using the squared Laplacian helps modifying the distances to improve separation. Note that we normalize the squared Laplacian values by dividing them by the highest absolute value. 
In Figure 3, we plot the average evolution of smoothness of label signals over 100 batches, as a function of layer depth in the architecture, and for different choices of the regularizer. In the left part, we look at smoothness measures using the Laplacian. In the right part, we use the squared Laplacian. We can clearly see the effectiveness of the regularizer in enforcing small variations of smoothness across the architecture. Note that for model regularized with L2, changes in smoothness measured by L are not easy to see. This seems to suggest that some of the gains achieved via L2 regularization come in making changes that would be “invisible” when looking at the layers from the perspective of L smoothness. The same normalization from Figure 2 is used for L2. 4 Experiments In the following paragraphs we evaluate the proposed method using various tests. We use the well known CIFAR-10 (Krizhevsky and Hinton, 2009) dataset made of tiny images. As far as the DNN is concerned, we use the same PreActResNet (He et al., 2016) architecture for all tests, with 18 layers. All inputs, including those on the test set, are normalized based on the mean and standard deviation of the images of the training set. In all figures, P are SNR ≈ ∞ SNR ≈ 20 SNR ≈ 15 Parseval trained networks, R are networks trained with the proposed regularizer and V are vanilla networks. More details and experiments can be found at the Appendix. We depict the obtained results using box plots where data is aggregated from 10 different networks corresponding to different random seeds and batch orders. In the first experiment (left most plot) in Figure 4, we plot the baseline accuracy of the models on the clean test set (no deformation is added at this point). These experiments agree with the claim from (Cisse et al., 2017) where the authors show that they are able to increase the performance of the network on the clean test set. We observe that our proposed method leads to a minor decrease of performance on this test. However, we see in the following experiments that this is mitigated with increased robustness to deformations. Such a trade-off between robustness and accuracy has already been discussed in the literature (Fawzi et al., 2018). 4.1 Isotropic deformation In this scenario we evaluate the robustness of the network function to small isotropic variations of the input. We generate 40 different deformations using random variables N (0, 0.25) which are added to the test set inputs. Note that they are scaled so that SNR ≈ 15 and SNR ≈ 20. The middle and right-most plots from Figure 4 show that the proposed method increases the robustness of the network to isotropic deformations. Note that in both scenarios the best results are achieved by combining Parseval training and our proposed method (lower-most box on both figures). 4.2 Adversarial Robustness We next evaluate robustness to adversarial inputs, which are specifically built to fool the network function. Such adversarial inputs can be generated and evaluated in multiple ways. Here we implement two approaches: first a mean case of adversarial noise, where the adversary can only use one forward and one backward pass to generate the deformations, and second a worst case scenario, where the adversary can use multiple forward and backward passes to try to find the smallest deformation that will fool the network. For the first approach, we add the scaled gradient sign (FGSM attack) on the input (Kurakin et al., 2016), so that we obtain a target SNR = 33. 
Results are depicted in the left and center plots of Figure 5. In the left plot the noise is added after normalizing the input whereas on the middle plot it is added before normalizing. As in the isotropic noise case, a combination of the Parseval method and our proposed approach achieves maximum robustness. In regards to the second approach, where a worst case scenario is considered, we use the Foolbox toolbox (Rauber et al., 2017) implementation of DeepFool (Moosavi Dezfooli et al., 2016). Due to time constraints we sample only 110 of the test set images for this test. The conclusions are similar (right plot of Figure 5) to those obtained for the first adversarial attack approach. 4.3 Implementation robustness Finally, in a third series of experiments we evaluate the robustness of the network functions to faulty implementations. As a result, approximate computations are made during the test phase that consist of random erasures of the memory (dropout) or quantization of the weights (Hubara et al., 2017). In the dropout case, we compute the test set accuracy when the network has a probability of either 25% or 40% of dropping a neuron’s value after each block. We run each experiment 40 times. The results are depicted in the left and center plots of Figure 6. It is interesting to note that the Parseval trained functions seem to drop in performance as soon as we reach 40% probability of dropout, providing an average accuracy smaller than the vanilla networks. In contrast, the proposed method is the most robust to these perturbations. For the quantization of the weights, we consider a scenario where the network size in memory has to be shrink 6 times. We therefore quantize the weights of the networks to 5 bits (instead of 32) and re-evaluate the test set accuracy. The right plot of Figure 6 shows that the proposed method is providing a better robustness to this kind of deformation than the tested counterparts. 5 Conclusion In this paper we have introduced a new regularizer that enforces small variations of the smoothness of label signals on similarity graphs obtained at intermediate layers of a deep neural network architecture. We have empirically shown with our tests that it can lead to improved robustness in various conditions compared to existing counterparts. We also demonstrated that combining the proposed regularizer with existing methods can result in even better robustness for some conditions. Future work includes a more systematic study of the effectiveness of the method with regards to other datasets, models and deformations. Recent works shown adversarial noise is partially transferable between models and dataset (Moosavi-Dezfooli et al., 2017; Papernot et al., 2016b) and therefore we are confident about the generality of the method in terms of models and datasets. One possible extension of the proposed method is to use it in a fine-tuning stage, combined with different techniques already established on the literature. An extension using a combination of input barycenter and class barycenter signals instead of the class signal could be interesting as that would be comparable to (Zhang et al., 2017). In the same vein, using random signals could be beneficial for semi-supervised or unsupervised learning challenges. A Parseval Training and implementation We compare our results with those obtained using the method described in (Cisse et al., 2017). 
There are three modifications to the normal training procedure: orthogonality constraint, convolutional renormalization and convexity constraint. For the orthogonality constraint we enforce Parseval tightness (Kovačević and Chebira, 2008) as a layer-wise regularizer: Rβ(W `) = β 2 ‖W `>W ` − I‖22, (7) where W` is the weight tensor at layer `. This function can be approximately optimized with gradient descent by doing the operation: W ` ← (1 + β)W ` − βW `W `>W `. (8) Given that our network is smaller we can apply the optimization to the entirety of the W , instead of 30% as per the original paper, this increases the strength of the Parseval tightness. For the convolutional renormalization, each matrixW ` is reparametrized before being applied to the convolution as W ` √ 2ks+1 , where ks is the kernel size. For our architecture the inputs from a layer come from either one or two different layers. In the case where the inputs come from only one layer, α the convexity constraint parameter is set to 1. When the inputs come from the sum of two layers we use α = 0.5 as the value for both of them, which constraints our Lipschitz constant, this is softer than the convexity constraint from the original paper. B Hyperparameters We train our networks using classical stochastic gradient descent with momentum (0.9), with batch size of b = 100 images and using a L2-norm weight decay with a coefficient of λ = 0.0005. We do a 100 epoch training. Our learning rate starts at 0.1. After half of the training (50 epochs) the learning rate decreases to 0.001. We use the mean of the difference of smoothness between successive layers in our loss function. Therefore in our loss function we have: L = CategoricalCrossEntropy + λWeightDecay + γ∆ (9) where ∆ = 1d−1 ∑d `=1 |δ`σ|. We perform experiments using various powers of the Laplacian m = 1, 2, 3, in which case the scaling coefficient γ is put to the same power as the Laplacian. We tested multiple parameters of β, the Parseval tightness parameter, γ the weight for the smoothness difference cost and m the power of the Laplacian. We found that the best values for this specific architecture, dataset and training scheme were: β = 0.01, γ = 0.01,m = 2, k = b. C Depiction of the network Figure 7 depicts the network used on all experiments of sections 3 and 4. f = 64 is the filter size of the first layer of the network. Conv layers are 3x3 layers and are always preceded by batch normalization and relu (except for the first layer which receives just the input). The smoothness gaps are calculated after each ReLU. D Additional experiments Given suggestions from the reviewers, we performed additional experiments to further demonstrate the capabilities of the proposed regularizer. Due to the lack of space they could not be added to the main paper. We consider the effects of the regularizer when applied on another datasets. We also consider the effects of adding adversarial data augmentation methods while minimizing the amount of other influencing factors. We first look at the results when using the same architecture as for the CIFAR-10 dataset, which inevitably results in far from state-of-the-art accuracy on CIFAR-100. Then, we perform experiments using a different architecture (namely WideResnet 28-10, with dropout) for CIFAR-100. D.1 CIFAR-10 We add two types of tests for the CIFAR-10 dataset: adversarial data augmentation during training and black-box FGSM. 
D.1.1 Tests with FGSM adversarial data augmentation In this section we consider tests adding adversarial data augmentation as suggested in (Kurakin et al., 2016). To be more precise we use the method they advise which is called "step1.1" using = 8255 . The results presented in the figures below are obtained by running 10 experiments with random initializations. We first perform the same tests as in Section 4. As expected, we observe in Figure 8 that training with adversarial examples help in the case of Gaussian noise, as it adds more variation to the training set, while reducing the accuracy on the clean set. Note that combining our method with adversarial training results in the best median accuracy. Combining the three methods is less successful than expected, which could indicate that a better hyperparameter search would be needed. Considering adversarial robustness, the obtained results are depicted in Figure 9. We observe that adding FGSM adversarial training does not generalize well to other types of attack (which is readily seen in the literature Madry et al. (2018)). Overall, the models using the proposed regularizer are the most robust. Finally, when considering implementation related perturbations, the results depicted in Figure 10 are consistent with the ones from Section 4.3, in which is shown that the proposed regularizer helps improving robustness. In summary, even when adding adversarial training, the proposed regularizer is either the most robust in median, or capable of improving the robustness when used combined with the other methods. D.1.2 Tests with black box FGSM To further verify that the obtained results are not only due to gradient masking, we perform tests with black box FGSM, where the target attacked network is not the same as the source of the adversarial noise. For this test we set the SNR of FGSM to 33. We chose the network with the best performance for each of the tested methods. The results are depicted in Table 1. In our experiments, we found that the combination of our method with Parseval is the most robust to noise coming from other sources, while the noise created by both Parseval and our method did not generalize as well as the one created by Vanilla. This demonstrates that the improvements are not caused by gradient masking, but are caused by the increased robustness of the proposed method and Parseval’s. D.1.3 Tests with PGD adversarial data augmentation Most of our adversarial tests are performed with FGSM because of its simplicity and speed, even though it has already been shown (e.g: Madry et al. (2018)) that FGSM is weak as an attack and as a defense mechanism. Despite the fact we do not only target adversarial defense, we further stress the ability of the proposed regularizer to improve it and to combine with other methods. To this end we perform experiments against the PGD (Projected Gradient Descent) attack. PGD is an iterative version of FGSM, which run for a maximum number of iterations it or until convergence. For each iteration it moves by a distance of step in the direction of the gradient provided it does not go at a distance greater than from the original image. Our experiments show that the proposed regularizer increases robustness against a weak PGD attack (similar epsilon as our FGSM with SNR=33), but it is almost completely defeated by the PGD with the parameters from (Madry et al., 2018). The results are depicted in table 2. 
We also show that, as expected, FGSM training does not add significant robustness against the stronger PGD attack. As the proposed regularizer can be combined with FGSM defense, it is natural to also test it alongside PGD training. We use the parameters advised in (Madry et al., 2018): 7 iterations with step = 2/255, and = 8/255. The results depicted in Table 3 show that using our regularizer increases robustness of networks trained with PGD. Note that Dropout and Gaussian Noise were applied ten times to each of the networks and the results are displayed as the mean test set accuracy under these perturbations. A rate of 40% was used for dropout. The PGD attack uses the following parameters: it = 20, step = 2255 , = 8 255 . We test the generality of the method using the CIFAR-100 dataset. Results are shown in Table 4 as the mean over three different initializations. Dropout and Gaussian Noise are applied ten times to each of the networks for a total of 30 different runs. An SNR of 33 is used for FGSM, and a rate of 25% is used for dropout. Images are normalized in the same way as the experiments with CIFAR-10. Due to time constraints we sample only 110 of the images from the test set for the Deep Fool test. The proposed regularizer is the most robust on all categories, while Parseval has problems with the perturbations, despite yielding the best accuracy on the clean test set. The combination of the proposed regularizer and the parseval training method is not able to reproduce the good results from the CIFAR-10 dataset. The results shown in Table 4 are obtained using an architecture that is not performing very well on the clean test set for the CIFAR-100 dataset. We thus performed additional experiments using the WideResNet 28-10 (Zagoruyko and Komodakis, 2016) architecture, and we added standard data augmentation (random crops and random horizontal flipping) and dropout with probability of 30% after the first convolution of each residual block. We train for 200 epochs, starting with a learning rate of 0.1 and divide the learning rate by 5 in epochs 60, 120 and 160. Momentum of 0.9 is used and weight decay of 5e-4. We use the value from the Parseval paper (β = 0.0003) as in this case it provided better results than the one described in Section B. Results on the WideResNet 28-10 architecture using data augmentation are shown in Table 5. We observe that the proposed method (sometimes with combinations with other methods) is still the most robust. E Impact of the proposed regularizer on the boundary We look at the impact of the proposed regularizer on the boundary region. To this end, we choose 10 pairs of points in distinct classes that are the most similar (i.e. their distance is minimal) in the input space and we look at the decision of the network function along the segment between them. The average is depicted in Figure 11. Note that the point to the left is always chosen to be the one corresponding to the decision of the network at the middle of the segment, so that the average curve is asymmetric. Interestingly, we observe that the proposed regularizer is the one for which the boundary is closest to the middle of the segments, thus proving our claim that the proposed regularizer control the boundary region. F Regularizer pseudo-code Below in Algorithm 1 we describe how we use the proposed regularizer to compute the loss as a pseudo-code. This function receives five inputs: 1. 
listactivations: the list of the intermediate features right after each call of the ReLU activation function of the network. We call these intermediate features activations` where ` represents the depth of the network; 2. y: the output of the network; 3. s: the label signal of the batch. Otherwise said, the ground truth labels of the examples of the batch; 4. m: the power of the Laplacian for which we wish to compute the smoothness; 5. γ: the scaling coefficient of the regularizer loss. Algorithm 1: Loss function of the regularized network 1: procedure Smoothness(activations`, s,m) 2: A` ← Pairwise cosine similarity of activations` 3: D` ← Diagonal degree matrix of A` 4: L` ← D` −A` 5: σ` ← Trace(sᵀ(L`)ms) 6: return σ` 7: procedure Loss(listactivations,y, s,m, γ) 8: for activations` ∈ listactivations do σ` ← Smoothness(activations`, s,m) 9: ∆← ∑`max i=1 |σ i−σi−1| `max−1 10: return CategoricalCrossEntropy(s,y) + γm∆
1. What is the focus and contribution of the paper regarding neural network robustness? 2. What are the strengths of the proposed approach, particularly in its experimental results? 3. What are the weaknesses of the paper, specifically regarding its significance and empirical proofs? 4. How does the reviewer assess the clarity and interpretability of the paper's content, especially in Section 3.2? 5. Are there any concerns or suggestions regarding the comparisons with other methods, such as adversarial training, and the inclusion of other benchmarks?
Review
Review
To improve the robustness of neural networks under various conditions, this paper proposes a new regularizer defined on the graph of the training examples, which penalizes large similarities between representations belonging to different classes, thus increasing the stability of the transformations defined by each layer of the network. The paper is overall well written, and the idea involving the Laplacian of the similarity graph is interesting. I have reviewed this paper before. Compared to the previous version, this paper made a good improvement in its experimental results, by adding two different robustness settings in Section 4.1 and Section 4.3, and also by including DeepFool as a strong attack method for testing adversarial robustness. However, my main concern about the paper is still its significance.
1. It is still not clear why this regularization would help robustness, especially when considering adversarial examples. From Example 1, it is not obvious to me why maintaining the boundary margin (rather than expanding or shrinking it) is preferred. As stated in the second paragraph of Section 3.4, "lower values of \sigma^\ell(s) are indicative of better separation between classes", so what is the reason for not directly penalizing this value, rather than requesting a "stability" property on this value? How is this stability related to robustness? This would require a deeper analysis and more empirical proof in the paper.
2. Experimental results still do not seem convincing to me. On one hand, based on the reported results, I am not convinced that the proposed method outperforms Parseval, especially when considering the inconsistent behaviour of "Proposed + Parseval". On the other hand, for adversarial robustness, the authors should have compared to adversarial training as well. Beyond that, the authors should also be careful of the gradient masking effect of the proposed method. I am not sure if there are other obvious benchmarks that should be included for the other two robustness settings.
Other comments:
1. Descriptions in the last 3 paragraphs of Section 3.2 are not very clear. It took me a while to figure them out every time I read the paper. It would be very helpful if the computation process and the discussion could be separated here, maybe with pseudo-code for computing the regularizer.
2. On the other hand, while the proposed regularizer can be interpreted from the perspective of the Laplacian of the similarity graph, the third part of Equation (4), which expresses the smoothness as the sum of similarities between different classes, seems more intuitive to me. Emphasizing this interpretation may also help convey the message.
ICLR
Title Quasi-Taylor Samplers for Diffusion Generative Models based on Ideal Derivatives Abstract Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless some sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on the numerical schemes derived by the familiar Taylor expansion, which directly solves the ODE/SDE of interest. In general, it is not easy to compute the derivatives that are required in higher-order Taylor schemes, but in the case of diffusion models, this difficulty is alleviated by the trick that the authors call “ideal derivative substitution,” in which the higher-order derivatives are replaced by tractable ones. To derive ideal derivatives, the authors argue the “single point approximation,” in which the true score function is approximated by a conditional one, holds in many cases, and considered the derivatives of this approximation. Applying thus obtained new quasi-Taylor samplers to image generation tasks, the authors experimentally confirmed that the proposed samplers could synthesize plausible images in small number of NFEs, and that the performance was better or at the same level as DDIM and Runge-Kutta methods. The paper also argues the relevance of the proposed samplers to the existing ones mentioned above. 1 INTRODUCTION Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iteratively refinement using the diffusion process has been rapidly gaining attention recently as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). The diffusion-based generative models have recently been showing impressive results in many fields including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while the diffusion models have good synthesis quality, it has been said that they have a fatal drawback that they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces the synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, it has been a common research question how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular. 
From this motivation, there have already been some studies aiming at reducing the NFEs (See § 2). Among these, Probability Flow ODE (PF-ODE) (Song et al., 2020b) enable efficient and deterministic sampling, and is gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of diffusion process. However, the ODE is eventually solved by using a black-box Runge-Kutta solver in the original paper, which requires several NFEs per step and is clearly costly. Another PF-ODE solver includes DDIM (Song et al., 2020a), and is also commonly used. It is certainly efficient and can generate plausible images. However, it was not originally formulated as a PF-ODE solver, and the relationship between DDIM and PF-ODE is not straightforward. From these motivations, we provide another sampler to solve the same ODE, which performs better than or on par with DDIM. The derivation outline is simple and intuitive: (1) consider the Taylor expansion of the given system, and (2) replace the derivatives in the Taylor series with appropriate functions; that’s all. The contribution of this paper would be as follows: (1) We propose novel samplers for diffusion models based on Taylor expansion of PF-ODE. They outperformed, or were on par with RungeKutta methods. (2) To derive our algorithms, we show that the derivatives of score function can be approximated by simple functions. We call this technique the ideal derivative substitution. (3) It has been known that the 1st order term of DDIM is same as the Euler method for PF-ODE. This paper gives further explanation for higher order terms of DDIM: we show that the proposed Quasi-Taylor method and DDIM are identical at least up to 3rd order terms. (4) The same idea can be naturally extended to derive a stochastic solver for a reverse-time SDE, which we call R-SDE in this paper. 2 BACKGROUND AND RELATED WORK Diffusion Process to draw a new data from a target density: Let us first briefly summarize the framework of the diffusion-based generative models. Following Song et al. (2020b), we describe the mechanisms using the language of continuous-time diffusion process for later convenience. Let us consider “particles” {xt} moving in a d-dim space obeying the following Itô diffusion, SDE: dxt = f(xt, t)dt+ g(xt, t)dBt, (1) where Bt is the d-dim Brownian motion whose temporal increments obeys the standard Gaussian. The drift f(·, ·) is d-dim vector, and the diffusion coefficient g(·, ·) is scalar. The SDE describes the microscopic dynamics of each particle. On the other hand, the “population” of the particles obeying the above SDE, i.e. density function p(xt, t | xs, s), (t > s), follows the following PDEs, which are known as Kolmogorov’s forward and backward equations (KFE and KBE); the former is also known as the Fokker-Planck equation (FPE), see § E.2, FPE: ∂tp(xt, t | xs, s) = −∇xt · f(xt, t)p(xt, t | xs, s) + ∆xt g(xt, t) 2 2 p(xt, t | xs, s), (2) KBE: −∂sp(xt, t | xs, s) = f(xs, s) · ∇xsp(xt, t | xs, s) + g(xs, s) 2 2 ∆xsp(xt, t | xs, s), (3) where ∆x := ∇x ·∇x is Laplacian. (FPE also holds for p(xt, t); consider the expectation Ep(xs,s)[·].) These PDEs enables us to understand the macroscopic behavior of the particle ensemble. For example, if f(x, t) = −∇U(x), g(x, t) = √ 2D, where U(x) a certain potential and D a constant, then we may verify that the stationary solution of FPE is p(x) ∝ e−U(x)/D. It means that we may draw a sample x that follows the stationary density by evolving the SDE over time. 
This technique is often referred to as the Langevin Monte Carlo method (Rossky et al., 1978; Roberts & Tweedie, 1996). Some of the diffusion generative models are based on this framework, e.g. (Song & Ermon, 2019; 2020), in which the potential gradient∇U(x) is approximated by a neural network. Another systematic approach is considering the reverse-time dynamics (Song et al., 2020b). An approach is based on KBE eq. (3). Roughly speaking, FPE gives information about the future from the initial density, while KBE gives information about what the past states were likely to be from the terminal density. Here, instead of using KBE directly, it is useful to consider a variant of it which is transformed into the form of FPE, because it has an associated SDE that enables the particle-wise backward sampling (Stratonovich, 1965; Anderson, 1982); see also § E.3.2, R-FPE: −∂sp(xs, s | xt, t) = ∇xs · f̄(xs, s)p(xs, s | xt, t) + ∆xs ḡ(xs, s) 2 2 p(xs, s | xt, t) (4) R-SDE: dxs = −f̄(xs, s)(−ds) + ḡ(xs, s)dB̄s. (5) Hereafter, let g(xt, t) = g(t) for simplicity. Then the specific forms of drift and diffusion coefficients are written as follows, R-SDE coeffs: f̄(xt, t) = f̄](xt, t) := f(xt, t)− g(t)2∇xt log p(xt, t), ḡ(t) = g(t). (6) Starting from a certain random variable xT , then by evolving the R-SDE reverse in time, we may obtain a x̂0 which follows p(x0, 0 | xT , T ) (i.e. the solution of R-FPE eq. (4)). Therefore, if the initial density p(x0, 0) of the forward dynamics eq. (2) is the true density, then we may utilize this mechanism as a generative model to draw a new sample x̂0 from it. Another approach is based on FPE eq. (2). By formally eliminating the diffusion term of the FPE for the forward process, we can derive another backward FPE (see also § E.3.1). Being diffusionfree, the backward FPE yields a deterministic ODE, which is called the Probability Flow ODE (PF-ODE) (Song et al., 2020b), and is an example of neural ODEs (Chen et al., 2018). The population density obtained by evolving this system is exactly the same as the above R-SDE. PF-ODE coeffs: f̄(xt, t) = f̄[(xt, t) := f(xt, t)− 1 2 g(t)2∇xt log p(xt, t). ḡ(t) = 0. (7) Some extensions of this framework include as follows. Dockhorn et al. (2021) introduced the velocity variable considering the Hamiltonian dynamics. Another extension is the introduction of a conditioning parameter, and guidance techniques using it (Dhariwal & Nichol, 2021; Ho & Salimans, 2021; Choi et al., 2021) to promote the dynamics to go to a specific class of images, which has achieved remarkable results in text-to-image tasks (Nichol et al., 2021; Ramesh et al., 2022). Variance-Preserving Model (VP-SDE Model): The solution of unconditioned FPE is written as the convolution with the initial density p(x0, 0) and the fundamental solution, or the heat kernel, p(xt, t | x0, 0), which is the solution of the conditional FPE under the assumption that the initial density was delta function, p(x0, 0) = δ(x0−x∗0). Although it is still intractable to solve this problem in general, a well-known exception is the (time-dependent) Ornstein-Uhlenbeck (OU) process where f(xt, t) = − 12βtxt and g(xt, t) = √ βt. βt = β(t) is a non-negative continuous function. The specific form of diffusion coefficient βt has some options: a simplest one would be the linear function, and another would be the cosine schedule proposed in (Nichol & Dhariwal, 2021); see also § D. 
In any cases, if it is the OU process, the heat kernel is simply written as follows, p(xt, t | x0, 0) = N (xt | √ 1− σ2t x0, σ2t I), where σ2t = 1− exp ( − ∫ t 0 βt′dt ′ ) . (8) Hereafter, we denote the noise variance by νt := σ2t . (In some literature, the signal level αt :=√ 1− σ2t is used as a basic parameter instead of the variance.) This model is referred to as the variance-preserving (VP) model by Song et al. (2020b). It has good properties such as the scale of data ‖xt‖2 is almost homogeneous, which is advantageous in neural models. However, the variance exploding (VE) model (Song et al., 2020b) in which the norm increases is also practicable, and the theory can be developed in a similar manner. Training Objective: In diffusion-based generative models, one estimates the score function ∇xt log p(xt, t) = ∇xt logEp(x0,0)[p(xt, t | x0, 0)] by a neural network Sθ(xt, t). This sort of learning has been referred to as the score matching (Hyvärinen & Dayan, 2005; Vincent, 2011). However, the exact evaluation of this training target is clearly intractable because of the expectation Ep(x0,0)[·], so it has been common to consider a Variational Bayesian surrogate loss; Ho & Salimans (2021) showed that the following loss function approximates the negative ELBO, L := E[‖−√νt∇xt log p(xt, t | x0, 0)− Sθ(xt, t)‖22] = E[‖xt− √ 1−νtx0√ νt − Sθ(xt, t)‖22] (9) = E[‖w − Sθ( √ 1− νtx0 + √ νtw, t)‖22], (10) where the expectation in eq. (10) is taken w.r.t. x0 ∼ D, w ∼ N (0, I), and t ∼ Uniform([0, T ]). Some variants of the score matching objectives are also studied. For example, Chen et al. (2020) reported that the L1 loss gave better results than the L2 loss in speech synthesis. Also, Kingma et al. (2021) argued that the weighted loss with SNR-based weights improves the performance. It should be noted that the above loss function will actually be very close to the ideal score matching loss function in practice, where the probability is not conditioned on x0, i.e., Lideal = E[‖− √ νt∇xt log p(xt, t)− Sθ(xt, t)‖22]. (11) This is because there almost always exists a point x0 on the data manifold such that∇xt log p(xt, t) ≈ ∇xt log p(xt, t | x0, 0) holds with very high accuracy in very high-dim cases, because of the wellknown “log-sum-exp ≈ max” law. For more details, see § 3.3 and § A. Sampling Schemes for R-SDE and PF-ODE: Thus obtained Sθ(xt, t) is expected to finely approximate −√νt∇xt log p(xt, t), and we may use it in eq. (5). One of the simplest numerical schemes for solving SDEs is the Euler-Maruyama method (Maruyama, 1955, Theorem. 1) as follows, and many diffusion generative models are actually using it. Euler-Maruyama: xt−h ← xt − hf̄](xt, t) + √ hg(t)w, where w ∼ N (0, I) (12) where h > 0 is the step size. The error of the Euler-Maruyama method is the order of O( √ h) in general, though it is actually O(h) in our case; this is because ∇xtg(t) = 0. As a better solver for the R-SDE, the Predictor-Corrector (PC)-based sampler was proposed in (Song et al., 2020b). The PC sampler outperformed the Predictor-only strategy, but it requires many NFEs in the correction process, so we will exclude it in our discussion. Another R-SDE solver is the one proposed by Jolicoeur-Martineau et al. (2021), whose NFE per refinement step is 2. On the other hand, there are also deterministic samplers for PF-ODE eqs. 
(5), (7) as follows, Euler: xt−h ← xt − hf̄[(xt, t) (13) Runge-Kutta: xt−h ← xt − h ∑m i=1 biki, where ki = f̄[(xt − h ∑i−1 j=1 aijkj , t− hci) (14) where {aij}, {bi}, {ci} are coefficients of the Runge-Kutta (RK) method (see § E.5). The error of the Euler method is O(h), and that of the RK method is O(hp), p ≤ m in general (Press et al., 2007, § 16). Another deterministic sampler is DDIM (Song et al., 2020a, Eq. (13)), and is also understood as a PF-ODE solver (Salimans & Ho, 2022). Its NFE per step is only 1, and is capable of efficiently generate samples. DDIM: xt−h ← αt−hαt xt + ( σt−h − αt−hαt σt ) Sθ(xt, t). (15) In addition, as a concurrent work as ours, Lu et al. (2022) proposed the DPM-solver, which is based on the Taylor expansion of PF-ODE. However, as the gradient is evaluated using several different points, the NFE per step is greater than 1 in general. Liu et al. (2022) proposed a sampler based on the linear multi-step method, in which the NFE/step is reduced to 1 except initial 3 steps. Another PF-ODE solver is the DEIS (Zhang & Chen, 2022) which is based on the exponential integrator with some non-trivial approximations such as the polynomial interpolation of score function. Other techniques that aimed to make sampling faster include as follows. Song & Ermon (2020) proposed a variety of techniques to accelerate the sampling. Watson et al. (2021) proposed a DP-based optimization method to tune noise schedules for faster sampling. Luhman & Luhman (2021) and Salimans & Ho (2022) proposed distilling the pretrained teacher model to a student model that can predict teacher’s several steps in a single step, which is efficient during the sampling but extra training for distillation is required. Bao et al. (2022a;b) derived some analytic expressions of reverse dynamics to enable faster sampling. 3 PROPOSED METHOD: QUASI-TAYLOR SAMPLERS 3.1 MOTIVATION: HIGHER-ORDER STRAIGHTFORWARD SOLVERS FOR R-SDE AND PF-ODE As mentioned above, DDIM already exists as an efficient solver for PF-ODE, but it can only be considered a PF-ODE solver up to first-order terms (Song et al., 2020a; Salimans & Ho, 2022), and it would not be clear enough whether it can be considered a higher-order solver for PF-ODE. Some other techniques (Lu et al., 2022; Liu et al., 2022; Zhang & Chen, 2022) were designed as higher-order PF-ODE solvers, though their derivations are rather sophisticated and less simple. Since PF-ODE and R-SDE provide the basis for the diffusion generative models, it would be beneficial to develop samplers that directly solve them through intuitive and straightforward arguments. From these motivations, we propose a simple but efficient sampler based on the Taylor expansion, a very basic technique that is familiar to many researchers and practitioners. In general, Taylor methods are not very popular as numerical schemes because they require higher-order derivatives, which are not always tractable. However, in diffusion models, the derivatives are easily and effectively evaluated, albeit approximately. The validity of this approximation requires some consideration (see § A, § B), but once accepted, an efficient sampler can be derived simply by substituting this approximation formula into the Taylor series. This section describes the details of the idea, and derives solvers for both PF-ODE and R-SDE. Entire sampling procedures are summarized in § F. 
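For reference, the following is a minimal NumPy sketch of one refinement step for the two deterministic baselines recalled in § 2: the Euler method for PF-ODE, eq. (13), and DDIM, eq. (15). The VP-model drift f̄♭(x, t) = −(βt/2)x + (βt/(2√νt))Sθ(x, t) used here is the one written out later in eq. (20); the `score_model`, `beta`, and `nu` callables are placeholders for the trained network and the chosen noise schedule, not code from the paper.

```python
import numpy as np

def f_flat(x, t, score_model, beta, nu):
    """PF-ODE drift under the VP model: -(beta_t/2) x + (beta_t / (2 sqrt(nu_t))) S_theta(x, t)."""
    return -0.5 * beta(t) * x + 0.5 * beta(t) / np.sqrt(nu(t)) * score_model(x, t)

def euler_step(x, t, h, score_model, beta, nu):
    """Euler step for PF-ODE, eq. (13): x_{t-h} = x_t - h * f_flat(x_t, t)."""
    return x - h * f_flat(x, t, score_model, beta, nu)

def ddim_step(x, t, h, score_model, nu):
    """DDIM step, eq. (15), with alpha_t = sqrt(1 - nu_t) and sigma_t = sqrt(nu_t)."""
    alpha_t, alpha_s = np.sqrt(1.0 - nu(t)), np.sqrt(1.0 - nu(t - h))
    sigma_t, sigma_s = np.sqrt(nu(t)), np.sqrt(nu(t - h))
    return (alpha_s / alpha_t) * x + (sigma_s - (alpha_s / alpha_t) * sigma_t) * score_model(x, t)
```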
3.2 TAYLOR SCHEME FOR ODE AND ITÔ-TAYLOR SCHEME FOR SDE Taylor Scheme for Deterministic Systems For simplicity, we consider the 1-dim case here, but we can easily generalized it to multidimensional cases. (See § E.1.1.) Given a ODE ẋt = a(xt, t), where the function a is sufficiently smooth, then we can consider the Taylor expansion of it, using a differential operator L[ := ( ∂t + a(t, xt)∂xt ) . We can write the Taylor expansion of the path xt as follows. Ignoring o(hp) terms of the series, we obtain a numerical scheme of order p. xt+h = xt + ha(xt, t) + h2 2! L[a(xt, t) + h3 3! L2[a(xt, t) + · · · . (16) Itô-Taylor Scheme for Stochastic Systems In stochastic systems, the Taylor expansion requires modifications because of the relation E[dB2t ] = dt. If xt obeys a stochastic system dxt = a(xt, t)dt+ b(xt, t)dBt, then the path is written in a stochastic version of Taylor-like series, which is often called the Itô-Taylor expansion, a.k.a. Wagner-Platen expansion (Platen & Wagner, 1982);(Kloeden et al., 1994, § 2.3.B);(Särkkä & Solin, 2019, § 8.2). The Itô-Taylor expansion is based on the following differential operators L], G], which are based on Itô’s formula (Itô, 1944). L] := ∂t + a(x, t)∂x + 1 2 b(x, t)2∂2x, G] := b(x, t)∂x (17) In (Kloeden & Platen, 1992), a number of higher order numerical schemes for SDEs based on the Itô-Taylor expansion are presented. One of the simplest of them is as follows. See also § E.1.2. Theorem 1 (Kloeden & Platen (1992, § 14.2): An Itô-Taylor scheme of weak order β = 2). Let xt obeys the above SDE, and let the differential operators L], G] be given by eq. (17). Then, the following numerical scheme weakly converges with the order of β = 2 (see § E.4). Furthermore, in a special case where G2]b ≡ 0, the strong γ = 1.5 convergence is also guaranteed (Kloeden & Platen, 1992, § 10.4). xt+h ← xt + ha+ w̃tb+ w̃2t − h 2 G]b+ h2 2 L]a+ (w̃th− z̃t)L]b+ z̃tG]a (18) where w̃t = √ hwt, z̃t = h √ hzt are correlated Gaussian random variables, and wt, zt are given by wt = u1 and zt = 12u1 + 1 2 √ 3 u2, where u1, u2 ∼ N (0, 1) (i.i.d.). The notations a, L]a, etc. are the abbreviations for a(xt, t), (L]a)(xt, t), etc. 3.3 SINGLE POINT APPROXIMATION OF THE SCORE FUNCTION Before proceeding, let us introduce the single point approximation of score function that ∇xt log p(xt, t) almost certainly has a some point x0 on the data manifold such that the following approximation holds, ∇xt log p(xt, t) = ∇xt log ∫ p(xt, t | x0, 0)p(x0, 0)dx0 ≈ ∇xt log p(xt, t | x0, 0). (19) To date, this approximation has often been understood as a tractable variational surrogate. However, the error between the integral and the single point approximation is actually very small in practical scenarios. More specifically, the following facts can be shown under some assumptions. 1. The relative L2 distance between ∇xt log p(xt, t) and ∇xt log p(xt, t | x0, 0) is bounded above by √ (1− νt)/νt for any point x0 on the “data manifold” in practical scenarios. 2. When the noise level is low νt ≈ 0, and the data space is sufficiently high-dimensional, the distant points far from xt do not contribute to the integral. If the data manifold is locally a k-dim subspace of the entire d-dim data space, where 1 k d, then the relative L2 distance is bounded above by around 2 √ k/d. Of course, the single point approximation is not always valid. In fact, the approximation tends to break down when the noise level νt is around 0.9 (SNR = (1− νt)/νt is around 0.1). 
In this region, the single point approximation can deviates from the true gradient by about 20% in some cases. Conversely, however, it would be also said that the error is as small as this level even in the worst empirical cases. For more details on this approximation, see § A. 3.4 IDEAL DERIVATIVE SUBSTITUTION In order to adopt the above Taylor schemes to our problem setting where the base SDE is eq. (5), and f̄], f̄[ are given by eqs. (6), (7), we need to consider the following differential operators. Note that the time evolves backward in time in our case, the temporal derivative should be −∂t, L[ = −∂t − ( f̄[(xt, t) · ∇xt ) , L] = −∂t − ( f̄](xt, t) · ∇xt ) + βt 2 ∆xt , G] = √ βt (1 · ∇xt) , where f̄[(xt, t) = − βt 2 xt + βt 2 √ νt Sθ(xt, t), f̄](xt, t) = − βt 2 xt + βt√ νt Sθ(xt, t). (20) It is not easy in general to evaluate expressions involving such many derivatives. Indeed, for example, L[(−f̄[) has the derivatives of the learned score function, viz. ∂tSθ(xt, t) and (• · ∇xt)Sθ(xt, t), which are costly to evaluate exactly, whether in approaches based on finite differences (as in (Lu et al., 2022)), back-propagation, or the JAX paradigm (Bradbury et al., 2018), because they eventually require extra evaluation of a deeply nested function other than Sθ(xt, t), and extra memory consumption. Fortunately, however, by using the trick which the authors call the “ideal derivative substitution", we may write all of the derivatives as a simple combination of known values, only consisting of xt,Sθ(xt, t), νt, βt and derivatives of βt, and no extra computation is needed. Since the score function has a single point approximation eq. (19) we may assume that the derivatives should ideally hold following equalities. For derivation, see § B.1. Conjecture 1 (Ideal Derivatives). Under assumptions in § A — i.e. the data space Rd is sufficiently high dimensional d 1, the data manifoldM⊂ Rd is also sufficiently high dimensional but much smaller than the entire space (1 dimM d),M is bounded,M is sufficiently smooth locally, and the variance parameter νt is close to 0 or 1; — then it is likely that the following approximations hold, where a ∈ Rd is an arbitrary vector. We call them the “ideal derivatives”. (a · ∇xt)Sθ(xt, t) = 1√ νt a, −∂tSθ(xt, t) = − βt 2 √ νt ( xt − Sθ(xt, t)√ νt ) . (21) To confirm the accuracy of this approximation, we compared empirical and ideal derivatives using MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky, 2009). As a result, it was confirmed that the approximation of spatial derivative, i.e. (a · ∇), is usually very accurate; the cosine similarity between the empirical and ideal derivatives is nearly always > 0.99 (Figure 10). On the other hand, for the time derivative ∂t, it was confirmed that it is quite accurate when the time parameter t (and the variance νt) are small, but the error increases when the time parameter t (and the variance νt) become larger (Figure 9). See § B.2 for more details. 3.5 QUASI-TAYLOR AND QUASI-ITÔ-TAYLOR SCHEMES WITH IDEAL DERIVATIVES As we can see in § B.2, the ideal derivative approximation is sometimes very accurate while sometimes not. In any case, however, the error in the ideal derivative only affects the second or higher order terms of Taylor series, and it will not be the dominant error in the whole. 
As there is an overall correlation between the true and ideal derivatives, the advantages will outweigh the disadvantages on average, and we can regularly use this approximation on a speculative basis, even though there exist some cases where the approximation is not accurate. If we accept the ideal derivative approximation, we can formally compute the symbolic expressions for the derivatives L[(−f̄[), L](−f̄]), L](g), G](−f̄]) and G](g) that appear in the Taylor and ItôTaylor series by routine calculations, which can be easily automated by computer algebra systems such as SymPy (Meurer et al., 2017) as shown in § B.3. By substituting thus obtained symbolic expressions into the above Taylor series, we can derive Taylor schemes for both PF-ODE and R-SDE as follows. Algorithm 1 (Quasi-Taylor Sampler with Ideal Derivatives for PF-ODE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ [ t,hxt + µ [ t,hSθ(xt, t)/ √ νt,where (22) ρ[t,h = 1 + βth 2 + h2 4 ( β2t 2 − β̇t ) + h 3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + · · · , (23) µ[t,h = −βth2 + h 2 4 ( β̇t − β 2 t 2νt ) + h 3 4 ( β3t (−ν 2 t+3νt−3) 12ν2t + βtβ̇t2νt − β̈t 3 ) + · · · . (24) Using terms up to O(h2), the sampler will have 2nd-order convergence (henceforth referred to as Taylor 2nd), and using terms up to O(h3), the sampler will 3rd-order convergent (similarly, Taylor 3rd). If we use up to the O(h) terms, the algorithm is same as the Euler method. Algorithm 2 (Quasi-Itô-Taylor Sampler with Ideal Derivatives for R-SDE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ ] t,hxt + µ ] t,hSθ(xt, t)/ √ νt + n ] t,h,where (25) ρ]t,h = 1 + βt 2 h+ h2 4 ( β2t 2 − β̇t ) , µ]t,h = −βth+ β̇th 2 2 , (26) n]t,h = √ βt √ hwt + h 3/2 ( − β̇t 2 √ βt (wt − zt) + β 3/2 t (νt−2) 2νt zτ ) . (27) The Gaussian variables wt and zt have dimension-wise correlations, and each dimension is sampled similarly to Theorem 1. Computation Cost: At first glance, these algorithms may appear to be very complex. However, the computational complexity hardly increases compared to the Euler or Euler-Maruyama methods, because almost all of the computational cost is accounted for by the neural network Sθ(xt, t), and the costs for scalar values ρ•t,h, µ • t,h and noise generation n ] t,h are almost negligible. It should also be noted that these scalar values can be pre-computed and stored in the memory before synthesis. Thus the computational complexity of these methods are practically equal to Euler, Euler-Maruyama, and DDIM methods. Error from the Exact Solution of PF-ODE: The numerical error of the Quasi-Taylor method from the exact solution increases depending on the following factors: (1) The truncation error of the Taylor series in each step, i.e. O(hp+1), (2) The number of the steps i.e. O(1/h), (3) The training and generalization error of the score function, i.e. ≈ L, and (4) The average error between the true and ideal derivatives of the score function =: ‖δ‖. If the factors 3 and 4 could be zero, then the numerical error is the order of O(hp). Otherwise, the expected numerical error is roughly evaluated as follows, error = O ( h−1(hL+ h2(L+ ‖δ‖) + h3(L+ ‖δ‖) + · · ·+ hp+1) ) = O ( L+ h(L+ ‖δ‖) + h2(L+ ‖δ‖) + · · ·+ hp ) . (28) That is, the error of Euler method is O(L+ h), the Heun method (2nd order Runge-Kutta) will be O(L+hL+h2), and the Taylor-2nd method is O(L+h(L+‖δ‖)+h2). 
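To make Algorithms 1 and 2 concrete, here is a minimal NumPy sketch of one Quasi-Taylor step (eqs. (22)-(24)) and one Quasi-Itô-Taylor step (eqs. (25)-(27)) under the linear schedule used in the experiments (βt = 0.1 + 19.9t, so β̇t = 19.9 and β̈t = 0). `score_model` stands in for Sθ, and the helper names are ours rather than the pseudocode of § F; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

beta0, beta1 = 0.1, 9.95                        # linear schedule used in the experiments (Section 4)

def beta(t):   return beta0 + 2.0 * beta1 * t                     # beta_t
def dbeta(t):  return 2.0 * beta1                                  # d/dt beta_t
def ddbeta(t): return 0.0                                          # d^2/dt^2 beta_t
def nu(t):     return 1.0 - np.exp(-beta0 * t - beta1 * t ** 2)    # nu_t

def quasi_taylor_step(x, t, h, score_model, order=3):
    """Deterministic Quasi-Taylor step for PF-ODE, eqs. (22)-(24)."""
    b, db, ddb, v = beta(t), dbeta(t), ddbeta(t), nu(t)
    rho = 1.0 + b * h / 2.0
    mu = -b * h / 2.0
    if order >= 2:
        rho += (h ** 2 / 4.0) * (b ** 2 / 2.0 - db)
        mu  += (h ** 2 / 4.0) * (db - b ** 2 / (2.0 * v))
    if order >= 3:
        rho += (h ** 3 / 4.0) * (b ** 3 / 12.0 - b * db / 2.0 + ddb / 3.0)
        mu  += (h ** 3 / 4.0) * (b ** 3 * (-v ** 2 + 3.0 * v - 3.0) / (12.0 * v ** 2)
                                 + b * db / (2.0 * v) - ddb / 3.0)
    return rho * x + mu * score_model(x, t) / np.sqrt(v)

def quasi_ito_taylor_step(x, t, h, score_model, rng):
    """Stochastic Quasi-Ito-Taylor step for the R-SDE, eqs. (25)-(27)."""
    b, db, v = beta(t), dbeta(t), nu(t)
    u1, u2 = rng.standard_normal(x.shape), rng.standard_normal(x.shape)
    w, z = u1, 0.5 * u1 + u2 / (2.0 * np.sqrt(3.0))               # correlated noises, as in Theorem 1
    rho = 1.0 + b * h / 2.0 + (h ** 2 / 4.0) * (b ** 2 / 2.0 - db)
    mu = -b * h + db * h ** 2 / 2.0
    n = (np.sqrt(b * h) * w
         + h ** 1.5 * (-db / (2.0 * np.sqrt(b)) * (w - z)
                       + b ** 1.5 * (v - 2.0) / (2.0 * v) * z))
    return rho * x + mu * score_model(x, t) / np.sqrt(v) + n

def sample(step_fn, score_model, shape, n_steps=20, T=1.0, seed=0, **kw):
    """Iterate from x_T ~ N(0, I) down to x_0 with constant step size h = T / n_steps."""
    rng = np.random.default_rng(seed)
    x, h = rng.standard_normal(shape), T / n_steps
    for i in range(n_steps):
        x = step_fn(x, T - i * h, h, score_model, **kw)
    return x
```

A trajectory is then obtained by e.g. `sample(quasi_taylor_step, model, (16, 3, 32, 32))`, or `sample(quasi_ito_taylor_step, model, (16, 3, 32, 32), rng=np.random.default_rng(1))` for the stochastic variant.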
As long as L, ‖δ‖ > 0, the predominant O(h) term will not disappear. Therefore, the overall order of the error will not decrease even if we increase the order of Taylor series greater than p ≥ 3. Nevertheless, beyond such an order evaluation, specific coefficients in higher order terms can still affect the performance, which should be validated empirically. 4 IMAGE SYNTHESIS EXPERIMENT Experimental Configuration: In this section, we conduct experiments to verify the effectiveness of the methods developed in this paper. Specifically, we compare the performance of the Euler scheme eq. (13), Taylor 2nd & Taylor 3rd (Alg. 1), DDIM (Song et al., 2020a), and the Runge Kutta methods (Heun and RK4 § E.5; these are less efficient than others because of NFEs per step) for PF-ODE, as well as the Euler-Maruyama scheme eq. (12) and Itô-Taylor (Alg. 2) for R-SDE. The datasets we used were CIFAR-10 (32× 32) (Krizhevsky, 2009) and CelebA (64× 64) (Liu et al., 2015). The network structure was not novel but was based on an existing open source implementation; we used the “NCSN++” implemented in the official PyTorch code by Song et al. (2020b). The network consisted of 4 levels of resolution, with the feature dimension of each level being 128 → 128 → 256→ 256→ 256. Each level consisted of BigGAN-type ResBlocks, and the number of ResBlocks in each level was 8 (CIFAR-10) and 4 (CelebA). The loss function we used was the unweighted L2 loss similarly to (Ho et al., 2020). The optimizer was Adam (Kingma & Ba, 2014). The machine used for training was an in-house Linux server dedicated to medium-scale machine learning training with four GPUs (NVIDIA Tesla V100). The batch size was 256. The number of training steps was 0.1 M steps, and the training took about a day for each dataset. The noising schedule was also the same as the existing one, the default configuration of VP-SDE (Song et al., 2020b): βt = 0.1 + 19.9t and νt = 1− exp(−0.1t−9.95t2) eq. (76). The integration duration was T = 1, and the step size h was constant, i.e. h = T/N where N is the number of refinement steps. As a quality assessment metric, we used the Fréchet Inception Distance (FID) (Heusel et al., 2017). To evaluate FIDs, we used the pretrained Inception v3 checkpoint (Szegedy et al., 2016), and resized all images to 299× 299× 3 by bilinear interpolation before feeding them to the Inception network. For each condition, 10,000 images were randomly generated to compute the FID score. Note that in this experiment, the computational resources for training were limited, and training was stopped before it fully converged (only 0.1 M steps, while in some other papers the number of training steps was e.g. 1.3 M steps in (Song et al., 2020b)). Therefore, it would be necessary to observe relative comparisons between samplers rather than directly comparing these FID value to those presented in other papers. Results: Figure 1 and Figure 2 show random samples for each sampler. More examples are available in § G. The deterministic samplers considered in this paper generated plausible images much faster than the vanilla Euler-Maruyama sampler. Figure 3a and Figure 3b reports the FID scores. From these figures, the following observations can be made. First, the proposed Quasi-Taylor methods have about the same or slightly better than DDIM. The reason for this is discussed in the next section § 5. We also found that the Runge-Kutta methods reduces FID in fewer steps overall. However, they also hit bottom faster. 
This may be due to the effect of the singularity at the time origin (see § D) in the final step. (This can be seen in Figure 16. In the second right column, the Runge-Kutta methods produce images similar to the other deterministic samplers, but the rightmost ones seem to be slightly noisier than the others). Even though the ideal derivatives are only approximations and contain some errors, the convergence destinations of Quasi-Taylor methods were almost the same as the Runge-Kutta methods. This suggests that the error in the ideal derivatives is actually hardly a problem, because in regions where the approximation error is large, the state xt is noisy to begin with (e.g. left 2/3 figures in Figure 16), and the approximation error is negligible compared to the noise that was originally there. The proposed stochastic sampler (Itô-Taylor) also showed sufficiently competitive results, in terms of both FID scores and visual impression. Comparison of the figures in § G (e.g. Figure 21) confirms that the Itô-Taylor method empirically reaches almost the same target as Euler-Maruyama method much more accurately, and it could be expected to be a safe alternative to Euler-Maruyama method when stochastic sampling is important. 5 DISCUSSION: RELATIONSHIP WITH DDIM In the above experiment, the performance of the proposed Quasi-Taylor methods are found to be almost equivalent to that of DDIM. In fact, despite having distinctly different derivation logics, the proposed method and DDIM actually agree, at least up to the 3rd order terms of h. Therefore, it is not surprising the results are similar; and the smaller h is, the closer the results are. This can be quickly verified by doing a Taylor expansion of the coefficients of eq. (15), i.e., αt−hαt and (σt−h − αt−h αt σt), w.r.t. h. Although it is tedious to perform this calculation by hand, the computer algebra systems e.g. SymPy immediately calculate it. For this computation, see § C. This finding that truncating DDIM at the 2nd or 3rd order of h yields exactly the same algorithms as the proposed Quasi-Taylor methods may be a useful insight for DDIM users, even if it does not lead them to switch the regular sampler from DDIM to Quasi-Taylor. That is, it offers an option of truncating the higher-order terms of DDIM. 6 CONCLUDING REMARKS This paper proposed a Taylor-expansion approach for diffusion generative models, particularly the Probability Flow ODE (PF-ODE) and the reverse-time SDE (R-SDE) solvers. The assumptions to derive our sampler were minimalistic, and the derivation process was straightforward. We just substituted the derivatives in the Taylor series by ideal ones. The obtained Quasi-Taylor and Quasi-Itô-Taylor samplers performed better than or on par with DDIM and Runge-Kutta methods. This fact implicitly supports the validity of our approximations. Conversely, if we could find some examples where the Quasi-Taylor methods, DDIM and RK methods gave decisively different results, we might be able to gain a deeper understanding of the structure of data manifold and the fundamentals of diffusion models by investigating the causes of discrepancy. Reproducibility Statement Pseudocodes of the proposed methods are available in § F, and the derivation of the proposed method is described in § B.1, § B.3. The experiment is based on open source code with minimal modifications to match the proposed method, and all the data used in this paper are publicly available. Experimental conditions are elaborated in § 4. 
Ethics Statement As a final note, negative aspects of generative models are generally pointed out, such as the risk of reproducing bias and discrimination in training data and the risk of being misused for deep fakes. Since this method only provides a solution to existing generative models, it does not take special measures against these problems. Maximum ethical care should be taken in the practical application of this method. A.3 COMPARISON OF THE EMPIRICAL SCORE FUNCTION AND THE SINGLE POINT APPROXIMATION Let us empirically validate the accuracy of single point approximation using real data as follows, • D = {MNIST (LeCun et al., 2010) 60,000 samples}, • D = {CIFAR-10 (Krizhevsky, 2009) 50,000 samples}. Since the true score function cannot be determined without knowing the true density (which will be possible with synthetic data, but discussing such data will not be very interesting here), the empirical score function was calculated using the real data D above as follows, True Score = ∇ log p(xt, t) = Ep(x0)[q(x0 | xt)∇ log p(xt, t | x0, 0)] ≈ 1|D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)] =: Empirical Score. (45) The evaluation of empirical score function using the entire dataset is unrealistic if the dataset D is large, but it is feasible if D is a small dataset like MNIST and CIFAR-10. In order to evaluate the accuracy of single point approximation, we evaluated following three metrics. • Relative L2 error between the empirical score function and∇ log p(xt, t | x0, 0), • Cosine similarity between the empirical score function and∇ log p(xt, t | x0, 0), • Entropy of q(x0 | xt). Figure 6 shows the relative L2 distance, for both datasets. Figure 7 similarly show the distribution (random 10,000 trials) of the cosine similarity, and Figure 8 shows the entropy. Dashed curves indicate the bounds evaluated in eq. (31) and eq. (32). These figures show that the range of intermediate region between Phase (1) and Phase (2) will not have impact in practical situations since we do not evaluate the neural network Sθ(·, ·) in this range so many times (i.e., ᾱt ∼ 10−3 to 10−1 ⇔ νt ∼ 0.999 to 0.9). Moreover, the approximation accuracy is still very high even in this region. Furthermore, although MNIST and CIFAR-10 are quite “low-dimensional” for real-world images, approximations are established with such high accuracy. Therefore, it is expected to be established with higher accuracy for more realistic images. B ON THE IDEAL DERIVATIVE APPROXIMATION Thus, we can assume that the single point approximation almost always holds practically. −Sθ(xt, t)√ νt model≈ ∇xt log p(xt, t) almost equal≈ ∇xt log p(xt, t | x(i)0 , 0) = − xt − √ 1− νtx(i)0 νt . Therefore, we may also expect that the similar approximation will be valid for their derivatives. Of course, strictly speaking, such an expectation is mathematically incorrect. For example, let g(x) = f(x) + ε sinωx, then the difference g(x) − f(x) = ε sinωx goes to zero as ε → 0, but the difference of derivatives g′(x)− f ′(x) = εω cosωx does not if ω →∞ faster than 1/ε. If the error between them in the Fourier domain is written as E(ω) = G(ω) − F (ω), then the L2 error between the derivatives is ‖g′(x) − f ′(x)‖22 = ‖ωE(ω)‖22 × const (Parseval’s theorem). In other words, the single point approximation does not necessarily imply the ideal derivative approximation. If it is to be mathematically rigorous, it must be supported by other nontrivial knowledge on the data manifold. 
This nontrivial leap is the most important “conjecture” made in this paper and its theoretical background should be more closely evaluated in the future. B.1 DERIVATION OF THE “IDEAL DERIVATIVES” Because of the discussion in § A, the true score function ∇xt log p(xt, t) is finely approximated by a single point approximation ∇xt log p(xt, t | x0, 0). Now we may also assume that the derivatives of both will also be close. In this paper, we are interested in the Taylor expansion of the following form (see also § E.1.1), ψ(xh, h) = ψ(x0, 0) + ∞∑ k=1 hk k! (∂t + a(xt, t) · ∇xt)k ψ(xt, t) ∣∣∣∣ t=0 . (46) If the function ψ(xt, t) is separable in each dimension (i.e., ∂xiψj = 0 for i 6= j), the following relation holds, (a(xt, t) · ∇xt)ψ(xt, t) = a(xt, t) ∇xt ψ(xt, t), (47) where is the element-wise product or operation. If a(xt, t) is also separable in each dimension4 the Taylor series is formally rewritten as follows, ψ(xt, t) = ψ(x0, 0) + ∞∑ k=1 tk k! ( 1∂t + a(xt, t) ∂xt )k ψ(xt, t) ∣∣∣∣ t=0 (48) where ∂xt := ∇xt is the element-wise derivative operator. This is formally the same as the 1-dim Taylor series. Therefore, it is sufficient to consider the 1-dim Taylor series first, and parallelize each dimension later. Thus the derivatives we actually need are the following two. ∂xtSθ(xt, t) = ∇xt Sθ(xt, t), ∂tSθ(xt, t) = (1∂t) Sθ(xt, t). (49) B.1.1 SPATIAL DERIVATIVE ∂xtSθ(xt, t) := ∇xt Sθ(xt, t) Let us first compute the spatial derivative of the conditional score function. (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = (∑ i ai∂xti ) xt − √ 1− νtx0√ νt 4In general, (a · ∇)2 = ( ∑ i ai∂i) 2 = ( ∑ i ai∂i)( ∑ j aj∂j) = ∑ i ai ∑ j(∂iaj + aj∂i∂j). If a is separable in each dimension, the ∂iaj(i 6= j) terms vanish, and (a · ∇)2 = ∑ i(ai∂iai + ∑ j aiaj∂i∂j). If the function ψ(xt, t) is separable in each dimension, then (a · ∇)2ψk = ∑ i(ai∂iai + ∑ j aiaj∂i∂j)ψk = (ak∂kak + a 2 k∂ 2 k)ψk. Thus we can formally write (a · ∇)2ψ = (a ∇ a + a a ∇ ∇) ψ = a (∇ a+ a ∇ ∇) ψ = a ∇ (a ∇) ψ = (a ∇ )2ψ = (a ∂x)2ψ. (Note that the operator (a · ∇) is scalar while (a ∂x) is d-dim vector.) We can similarly show (a · ∇)kψ = (a ∂x)kψ for k ≥ 3. = 1√ νt (∑ i ai∂xti ) (xt − √ 1− νtx0)1 ...(∑ i ai∂xti ) (xt − √ 1− νtx0)d = 1√ νt (∑ i ai∂xti ) (xt 1 −√1− νtx01) ...(∑ i ai∂xti ) (xt d −√1− νtx0d) = 1√ νt ( a1∂xt1 ) (xt 1 −√1− νtx01) ...( ad∂xtd ) (xt d −√1− νtx0d) = 1√ νt a1... ad = 1√ νt a = a 1√ νt 1. (50) Here, we used the notation xti to denotes the i-th component of a vector xt. Note that up to this point in the discussion, there have been no approximations, but strict ones. Now let us consider the approximation. Because of the single point approximation, we may assume that the derivative of the integrated score function will also be approximated by the derivative of the conditional score function, i.e., (a · ∇xt)(− √ νt∇xt log p(xt, t)) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)). (51) As the neural network Sθ(xt, t) is trained so that it approximates the integrated score function, we can also assume the following relation, (a · ∇xt)Sθ(xt, t) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = 1√ νt a. (52) Thus we have obtained the ideal spatial derivative of the neural network. We can also formally write the spatial derivative as follows using the above notation, a (∂xtSθ(xt, t)) = a 1√ νt 1. (53) We can also write it as ∂xtSθ(xt, t) = 1√ νt 1. (54) B.1.2 TIME DERIVATIVE −∂tSθ(xt, t) Next, let us compute −∂t(− √ νt∇xt log p(xt, t | x0, 0)). 
During the computation, x0 is replaced by the relation x0 = 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) . (55) We also use the following relations between νt, βt, which is immediately obtained from the definition of νt, ν̇t = (1− νt)βt. (56) Using the above information, we may compute the temporal derivative of the conditional score function as follows. − ∂t(− √ νt∇xt log p(xt, t | x0, 0)) = −∂t xt − √ 1− νtx0√ νt = − 1√ νt ( 1 2 ν̇t(1− νt)−1/2x0 ) − (xt − √ 1− νtx0) ( −1 2 ν̇tν −3/2 t ) = − ν̇t 2ν 3/2 t ( νt√ 1− νt x0 − (xt − √ 1− νtx0) ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt x0 ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) ) = − ν̇t 2ν 3/2 t (( −1 + 1 1− νt ) xt + 1 1− νt (νt∇xt log p(xt, t | x0, 0)) ) = − 1 2ν 3/2 t ν̇t 1− νt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − 1 2ν 3/2 t βt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) . (57) (Note that this calculation is exact, and no approximation is injected.) Because of the single point approximation, we may assume −∂t(− √ νt∇xt log p(xt, t)) ≈ −∂t(− √ νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) ≈ − βt 2 √ νt (xt +∇xt log p(xt, t)) , (58) and therefore, we can also assume that the temporal derivative of the neural network is approximated as −∂tSθ(xt, t) ≈ − βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) . (59) The “derivatives" have some good points. For example, the partial derivatives commute, ∂xt∂tSθ(xt, t) = ∂t∂xtSθ(xt, t). (60) B.2 COMPARISON OF THE EMPIRICAL SCORE DERIVATIVES AND IDEAL DERIVATIVES Let us empirically validate that idela approximation using real data similarly as above. However, since the equations will become very complicated if we evaluate the exact empirical score derivatives, we instead used finite differences as the ground truths. That is, let S(x, t) be the routine that computes the empirical score function as follows, S(x, t) = − √ νt |D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)], (61) and we evaluated the empirical score derivatives by the finite differences as follows5, Empirical t Deriv: ∂tS ≈ S(xt, t+ ε)− S(xt, t) ε (62) Empirical xt Deriv: (a · ∇xt)S ≈ S(xt + εa, t)− S(xt, t) ε , where a ∼ N (0, I). (63) where ε should be a sufficiently small value, and we used ε = 10−3 here. We compared these empirical derivatives with the ideal derivatives using MNIST and CIFAR-10. Ideal t Deriv: ∂tSθ = βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) = βt 2 √ νt ( xt − xt − √ 1− νtx0 νt ) Ideal xt Deriv: (a · ∇xt)Sθ = 1√ νt a As the ideal derivatives require the specific function forms of diffusion and variance schedules, we tested on following two noise schedules. Linear schedule We first tested on the linear schedule eq. (76), where β0 = 0.1 and β1 = 9.95. This is the same schedule as the one used in the main text. Figure 9 shows the relativeL2 error and the cosine similarity between the ideal t derivative eq. (21) and the empirical t derivative eq. (62), in which it is observed that they are very close when 0 / t / 0.5, while the approximation accuracy decreases as t increases. However, even in that case, there tends to be an overall positive correlation. It can also be observed that there is an error that seems to originate from the singularity of time origin when t ≈ 0. (See also § D.2.) For the x derivative (Figure 9), on the other hand, we can confirm that the errors between the ideal x derivative eq. (21) and empirical x derivative eq. (62) are generally very highly correlated, except around t ≈ 0.5. 
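A minimal sketch of the validations in §§ A.3 and B.2 is given below: the empirical target S(x, t) of eq. (61) computed over a small dataset, and the finite-difference derivatives of eqs. (62)-(63) compared against the ideal derivatives of eq. (21) via cosine similarity, as in Figures 9-12. The posterior weights q(x0 | xt), the log-sum-exp normalization, and all function names are our own reading of the text, not code from the paper.

```python
import numpy as np

def empirical_S(xt, t, data, nu):
    """Empirical target S(x, t) = -sqrt(nu_t) * weighted average of grad log p(x_t, t | x_0, 0), cf. eq. (61).
    Weights q(x_0 | x_t) follow the Gaussian heat kernel of eq. (8), normalized with log-sum-exp."""
    v = nu(t)
    diff = xt[None, :] - np.sqrt(1.0 - v) * data             # (|D|, d)
    logq = -np.sum(diff ** 2, axis=1) / (2.0 * v)             # log p(x_t | x_0) up to a constant
    q = np.exp(logq - logq.max())
    q /= q.sum()
    cond_scores = -diff / v                                   # grad log p(x_t, t | x_0, 0)
    return -np.sqrt(v) * (q[:, None] * cond_scores).sum(axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def check_ideal_derivatives(S, xt, t, beta, nu, eps=1e-3, seed=0):
    """Empirical (finite-difference, eqs. (62)-(63)) vs ideal (eq. (21)) derivatives of S."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(xt.shape)
    v, b = nu(t), beta(t)
    s = S(xt, t)
    emp_t = (S(xt, t + eps) - s) / eps                        # eq. (62)
    emp_x = (S(xt + eps * a, t) - s) / eps                    # eq. (63)
    ideal_t = b / (2.0 * np.sqrt(v)) * (xt - s / np.sqrt(v))  # ideal time derivative, eq. (21)
    ideal_x = a / np.sqrt(v)                                  # ideal spatial derivative, eq. (21)
    return cosine(emp_t, ideal_t), cosine(emp_x, ideal_x)
```

For instance, `check_ideal_derivatives(lambda x, t: empirical_S(x, t, data, nu), xt, 0.3, beta, nu)` with `data` an (N, d) array of flattened training images reproduces the kind of comparison plotted in Figures 9 and 10.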
Modified tanh schedule We also tested on another noise schedule, the modified tanh schedule eq. (79) which does not have the singularity at the time origin. The parameters A, k were determined so that ν0 = 0.001 and ν1 = 0.999. Figure 11 and Figure 12 show the results. In this case, the overall trend is similar to the linear schedule, but we can observe that the singularity of the time origin of the t derivative is eliminated. 5To verify the empirical xt derivative, let us consider a simple case of three-variable function f(x, y, z). As its total derivative is df = ∂xfdx + ∂yfdy + ∂zfdz, we have f(x + a, y + b, z + c) − f(x, y, z) = (a∂x + b∂y + c∂z)f(x, y, z) for small a, b, c. Let a = εa′, b = εb′ and c = εc′, then f(x + εa′, y + εb′, z + εc′)− f(x, y, z) = ε(a′∂x + b′∂y + c′∂z)f(x, y, z). Therefore, we can write the spatial derivative as (a′∂x + b ′∂y + c ′∂z)f(x, y, z) = limε→0 1 ε (f(x+ εa′, y + εb′, z + εc′)− f(x, y, z)). B.3 THE DERIVATIVES L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) The computation of the derivative L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) does not require any particular nontrivial process. All we have to do is rewrite a term every time we encounter a derivative of Sθ(xt, t) or νt, and the rest is at the level of elementary exercises in introductory calculus. To execute this symbolic computation, the use of computer algebra systems will be a good option. It should be noted, however, that some implementation tricks to process such custom derivatives are required (in other words, the term-rewriting system should be customized). The results are shown below. Although these expressions appear complex at first glance, the code generation system can automatically generate code for such expressions. L[(−f̄[)(xt, t) = ( β2t 4 − β̇t 2 ) xt + ( β̇t 2 √ νt − β 2 t 4ν 3/2 t ) Sθ(xt, t) (64) L](−f̄])(xt, t) = ( β2t 4 − β̇t 2 ) xt + β̇t√ νt Sθ(xt, t) (65) G](−f̄])(xt, t) = ( 1 2 − 1 νt ) β 3/2 t (66) L]g(t) = − β̇t 2 √ βt (67) G]g(t) = 0. (68) We may also compute higher order derivatives, though we do not use them in this paper except L[L[(−f̄[), L[L[(−f̄[)(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt + ( β3t (−ν2t + 3νt − 3) 8ν 5/2 t + 3βtβ̇t 4ν 3/2 t − β̈t 2 √ νt ) Sθ(xt, t) (69) L]L](−f̄])(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt − β3t + 4β̈t 4 √ νt Sθ(xt, t) L]G](−f̄])(xt, t) = √ βt ν2t ( νt(2β 2 t + 3β̇t) 2 − β2t − 3ν2t β̇t 4 ) G]L](−f̄])(xt, t) = √ βt ( β2t 4 − β̇t 2 + β̇t νt ) G]G](−f̄])(xt, t) = 0 L]L]g(t) = 2βtβ̈t − β̇2t 4β 3/2 t L]G]g(t) = 0 G]L]g(t) = 0 G]G]g(t) = 0. As we can see, no factors other than integers, xt, Sθ(xt, t), νt, βt and derivatives of βt appear. This is also true for higher order derivatives, which can be easily shown. SymPy Code Snippet for Automatic Symbolic Computation of Derivatives The following code snippet is a minimalistic example of SymPy code to compute the above derivatives using the customized derivative method. We used SymPy 1.11 to test the following code snippet. 
from sympy import Function, symbols, sqrt, simplify x, t = symbols(’x t’) # x, t B = Function(’beta’) # βt # define customized derivatives of νt class nu(Function): def fdiff(self, argindex=1): t, = self.args return (1 - nu(t)) * B(t) # ν̇t = (1− νt)βt # define customized derivatives of Sθ(x, t) class S_theta(Function): def fdiff(self, argindex=1): x, t = self.args if argindex == 1: # ∂/∂x d = 1 / sqrt(nu(t)) elif argindex == 2: # ∂/∂t d = (x - S_theta(x, t)/sqrt(nu(t))) * B(t) / (2 * sqrt(nu(t))) return d # define f̄[ class f_flat(Function): @classmethod def eval(cls, x, t): return - B(t) * x / 2 + S_theta(x, t) * B(t) / (2 * sqrt(nu(t))) # define differential operator L[ class L_flat(Function): @classmethod def eval(cls, fxt): return -fxt.diff(t) - f_flat(x, t) * fxt.diff(x) # show each derivative print(f_flat(x, t)) print(simplify(L_flat(f_flat(x,t)))) # L[ f̄[(xt, t); see eq. (64) print(simplify(L_flat(L_flat(f_flat(x,t))))) # L[L[ f̄[(xt, t); see eq. (69), # we can similarly define f̄], L], G] and compute other derivatives. The result will look like [Out 1] − xβ(t) 2 + Sθ(x, t)β(t) 2 √ ν(t) [Out 2] − xβ 2(t) 4 + x ddtβ(t) 2 + Sθ(x, t)β 2(t) 4ν 3 2 (t) − Sθ(x, t) d dtβ(t) 2 √ ν(t) [Out 3] − xβ 3(t) 8 + 3xβ(t) ddtβ(t) 4 − x d2 dt2 β(t) 2 + Sθ(x, t)β 3(t) 8 √ ν(t) − 3Sθ(x, t)β 3(t) 8ν 3 2 (t) + 3Sθ(x, t)β 3(t) 8ν 5 2 (t) − 3Sθ(x, t)β(t) d dtβ(t) 4ν 3 2 (t) + Sθ(x, t) d2 dt2 β(t) 2 √ ν(t) and so on. Some additional coding techniques can further improve the readability of these expressions, but there will be no need to go any deeper into such subsidiary issues here. Thus obtained symbolic expressions can be automatically converted into executable code in practical programming languages including Python and C++ using a code generator, though the authors hand-coded the obtained expressions in Python for the experiments in this paper. C TRUNCATED DDIM IS EQUIVALENT TO THE QUASI-TAYLOR SAMPLER Using SymPy, we can easily compute the Taylor expansion of a given function. For example, the following code sympy.series(B(t+h), h, 0, 4) yields the result like β(t) + h d dξ1 β(ξ1) ∣∣∣∣ ξ1=t + h2 d 2 dξ21 β(ξ1) ∣∣∣ ξ1=t 2 + h3 d 3 dξ31 β(ξ1) ∣∣∣ ξ1=t 6 +O ( h4 ) . Similarly, using the relation ν̇t = (1− νt)βt, we can easily compute the Taylor expansion of νt−h as follows. sympy.series(nu(t-h), h, 0, 3) νt−h = ν(t)+h (β(t)ν(t)− β(t))+h2 β2(t)ν(t) 2 − β 2(t) 2 − ν(t) ddξ1 β(ξ1) ∣∣∣ ξ1=t 2 + d dξ1 β(ξ1) ∣∣∣ ξ1=t 2 +O (h3) Using this functionality of SymPy, we can easily compute the Taylor expansion of the DDIM (Song et al., 2020a). Let us recall that the DDIM algorithm is given by eq. (15), and using our notation α = √ 1− ν and σ = √ν, it can be written as follows, DDIM: xt−h ← √ 1− νt−h 1− νt︸ ︷︷ ︸ =:ρDDIMt,h xt + (√ νt−h − √ 1− νt−h 1− νt νt ) ︸ ︷︷ ︸ =:µDDIMt,h Sθ(xt, t). Then using SymPy, the Taylor expansion of ρDDIMt,h and µ DDIM t,h are computed as follows, ρDDIMt,h = 1 + βt 2 h− h 2 4 ( β2t 2 − β̇t ) + h3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + o(h3), (70) √ νtµ DDIM t,h = − βt 2 h+ h2 4 ( β̇t − β2t 2νt ) + h3 4 ( −β 3 t 12 + β3t 4νt − β 3 t 4ν2t + βtβ̇t 2νt − β̈t 3 ) + o(h3). (71) Although it has been known that DDIM corresponds to the Euler method up to 1st order terms (Song et al., 2020a; Salimans & Ho, 2022), this expansion gives better understanding of higher order terms. That is, these are exactly equivalent to our deterministic Quasi-Taylor sampler eq. (23) and eq. (24) up to 3rd-order terms. 
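The equivalence claimed in § C can also be checked directly with a few lines of SymPy, redefining the custom `nu` and `beta` symbols so the snippet runs on its own. This is a sketch; the printed expansions may need further manual simplification before they visibly match the form of eqs. (23)-(24) and (70)-(71).

```python
from sympy import Function, simplify, sqrt, symbols

t, h = symbols('t h', positive=True)
B = Function('beta')                      # beta_t, kept symbolic

class nu(Function):                       # nu_t with the custom derivative nu_dot = (1 - nu) * beta
    def fdiff(self, argindex=1):
        (s,) = self.args
        return (1 - nu(s)) * B(s)

# DDIM coefficients, eq. (15), written with alpha = sqrt(1 - nu) and sigma = sqrt(nu)
rho_ddim = sqrt((1 - nu(t - h)) / (1 - nu(t)))
mu_ddim = sqrt(nu(t - h)) - rho_ddim * sqrt(nu(t))

# Taylor-expand in h up to the h^3 terms and compare against eqs. (23)-(24) / (70)-(71)
print(simplify(rho_ddim.series(h, 0, 4).removeO()))
print(simplify((sqrt(nu(t)) * mu_ddim).series(h, 0, 4).removeO()))
```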
This fact may suggest that the assumptions behind the DDIM derivation will be logically equivalent to our assumptions of ideal derivatives. The advantage of the proposed Quasi-Taylor method is that we can decide the hyperparameter at which order the Taylor expansion is truncated. On the other hand, DDIM automatically incorporates terms of much higher order, leaving no room for order tuning. D ON THE NOISE SCHEDULE D.1 BACKGROUND: PICARD-LINDELÖF THEOREM Let us consider a 1-dim deterministic system ẋ(t) = a(x(t), t). It is well known that this ODE has a unique solution if a(x, t) is Lipschitz continuous w.r.t. x and continuous w.r.t. t (Picard-Lindelöf Theorem). Otherwise, ODEs often behave less favorably. (Similar Lipschitz conditions are also required for SDEs.) Example 1. For example, the ODE ẋ = x2, x(0) = 1 has the solution x = 1/(1− t) when t < 1, and it blows up at t = 1. It is usually impossible to consider what happens after t > 1 in ordinary contexts. Example 2. Another well-known example is ẋ = √ x, x(0) = 0. It has a solution x = t2/4, but x ≡ 0 is also a solution. It actually has infinitely many solutions x = 0 (if t ≤ t0), x = (t− t0)2/4 (if t > t0), where t0 ≥ 0 is an arbitrary constant. Example 3. Let us consider the following ODE ẋ = − t− 1 1− e−(t−1)2 x, x(0) = 1, (72) which is a simplified model of the Linear schedule eq. (76). The exact solution is as follows, x = √ e− 1√ e(t−1)2 − 1 , (73) which diverges at t = 1. In this case, a(x, t) = −x·(t−1)/(1−e−(t−1)2) is not Lipschitz continuous, as the Taylor expansion of the denominator is 1− e−(t−1)2 = (t− 1)2 +O((t− 1)4), and a(x, t) is approximately −x/(t− 1) near t = 1. In these cases, the coefficient a(·, ·) is not Lipschitz continuous. Even these seemingly simplest ODEs behave very complexly unless the coefficients are carefully designed. In PF-ODE, the Lipschitz condition is written as follows, Lip(f̄[) = ∣∣∣∣∂xt (βt2 xt − βt2√νtSθ(xt, t) )∣∣∣∣ <∞. (74) Using the ideal derivative of Sθ(xt, t), this condition translates as Lip(f̄[) = |βt(1− 1/νt)| = ∣∣∣∣ ν̇tνt ∣∣∣∣ <∞. (75) D.2 SPECIFIC SCHEDULES Including this point, the necessary conditions for a variance schedule νt will be summarized as follows. 1. ν0 ≈ 0 so that the initial density p(x0, 0) is close to the true data density. 2. νT ≈ 1 so that the terminal density p(xT , T ) is close to the Gaussian. 3. Sufficiently smooth so that βt = − ddt log(1− νt) is well defined. • In addition, βt should also be smooth so that the Taylor schemes can be used. 4. Monotonic (s < t =⇒ νs ≤ νt) to make βt non-negative. 5. Preferably, make the drift coefficient f̄[ Lipschitz continuous so that PF-ODE has a unique solution, i.e., Lip(f̄[) ≈ |ν̇t/νt| <∞. The following two scheduling functions which are common in diffusion generative models satisfy the conditions 1, 2, 4 above (the linear schedule also satisfies the 3rd condition), Linear: νt = 1− e−β0t−β1t 2 , βt = β0 + 2β1t, (76) Cosine: νt = 1− C cos2 ( π 2 t/T + ς 1 + ς ) , βt = { π T tan ( π 2 t/T+ς 1+ς ) if 0 ≤ t ≤ T ′ Θ if T ′ < t ≤ T . (77) where ς > 0 is a small constant, C = 1/ cos2(πς/2(1 + ς)) is a constant to make ν0 = 0, and the threshold constant is Θ = βT ′ . However, these common schedules do not satisfy the 5th condition that the drift coefficient f̄[ is Lipschitz continuous. Indeed, it is easily verified that limt→0 ν̇t/νt =∞ in both cases, since ν0 = 0 but ν̇0 > 0. 
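A small sketch of the schedules in eqs. (76), (78) and (79) is given below, together with the quantity |ν̇t/νt| = βt(1 − νt)/νt that condition 5 (eq. (75)) requires to stay finite. The hyperparameter defaults are illustrative values chosen by us (for the modified tanh schedule, roughly so that ν0 ≈ 0.001 and ν1 ≈ 0.999), not necessarily the paper's settings.

```python
import numpy as np

def linear_schedule(t, b0=0.1, b1=9.95):
    """eq. (76): nu_t = 1 - exp(-b0 t - b1 t^2), beta_t = b0 + 2 b1 t."""
    return 1.0 - np.exp(-b0 * t - b1 * t ** 2), b0 + 2.0 * b1 * t

def sigmoid_schedule(t, A=20.0, k=0.5):
    """eq. (78): nu_t = sigmoid(A (t - k)), beta_t = A nu_t."""
    nu = 1.0 / (1.0 + np.exp(-A * (t - k)))
    return nu, A * nu

def modified_tanh_schedule(t, A=0.065, k=11.0):
    """eq. (79) with lambda(t) = log(1 + A exp(k t)): nu_t = tanh^2(lambda/2), beta_t = lambda_dot tanh(lambda/2)."""
    lam = np.log(1.0 + A * np.exp(k * t))
    dlam = A * k * np.exp(k * t) / (1.0 + A * np.exp(k * t))
    return np.tanh(lam / 2.0) ** 2, dlam * np.tanh(lam / 2.0)

def drift_lipschitz_proxy(nu_t, beta_t):
    """|nu_dot / nu| = beta_t (1 - nu_t) / nu_t, the quantity appearing in condition 5 / eq. (75)."""
    return beta_t * (1.0 - nu_t) / nu_t
```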
Nevertheless, t = 0 is the only singular point, and since no function value or derivative at t = 0 is evaluated by numerical methods (except by the Runge-Kutta method), this point can practically be ignored. Note that, we can also consider some other schedule functions such as the sigmoid function and the hyperbolic tangent, which satisfy the condition 2, 3, 4, 5 but do not satisfy the 1st condition rigorously (but if ν0 is less than or equal to the level of the quantization error in the data, we may consider the first condition to be essentially satisfied), Sigmoid: νt = 1 1 + e−A(t−k) , βt = Aνt, (78) Modified Tanh: νt = tanh2(λ(t)/2), βt = λ̇(t) tanh(λ(t)/2), (79) where the parameter function λ(t) has some options, such as λ(t) = log(1 + Aekt), and A > 0, k > 0 are hyperparameters. D.3 HOW TO AVOID THE TIME ORIGIN SINGULARITY IN THE RUNGE-KUTTA METHODS When using the Heun and Classical RK4 methods, the function f̄[(xt, t) is evaluated at time t = 0. However, since the function f̄[(xt, t) contains the term proportional to 1/ √ νt, it will diverge at time t = 0 if the linear eq. (76) or cosine schedule eq. (77) is used. The simplest way to avoid this is to replace the function f̄[(x0, 0) with f̄[(xε, ε) where ε > 0 is a sufficiently small constant, only when the need to evaluate the function at time t = 0 arises. The same thing could happen at t = T if the cosine schedule and DDIM were used simultaneously, but this can be handled in the same way. If we use the sigmoid eq. (78) or modified tanh schedules, eq. (79) these problems do not occur unless the hyperparameters A and k are chosen to be very extreme values. E SUPPLEMENT ON FUNDAMENTALS For convenience, let us summarize some basics behind the ideas in this paper. The contents of this section are not particularly novel, but the authors expect that this section will give a better understanding of the ideas of this paper and the continuous-time approach to diffusion generative models. E.1 TAYLOR EXPANSION AND ITÔ-TAYLOR EXPANSION E.1.1 TAYLOR EXPANSION OF DETERMINISTIC SYSTEMS 1-dimensional case Let us first consider a 1-dim deterministic system ẋ(t) = a(x(t), t), where a(·, ·) is sufficiently smooth, and let us derive the Taylor series expression of the solution of this ODE. Let ϕ(x(t), t) be a differentiable function. Its total derivative is written as dϕ = ∂ϕ ∂t dt+ ∂ϕ ∂x dx = ∂ϕ ∂t dt+ ∂ϕ ∂x dx dt dt = ( ∂ϕ ∂t + ∂ϕ ∂x a(x, t) ) dt = ( ∂ ∂t + a(x, t) ∂ ∂x ) ︸ ︷︷ ︸ =:L[ ϕdt. (80) By integrating both sides from 0 to t, we have ϕ(x(t), t) = ϕ(x(0), 0) + ∫ t 0 (L[ϕ)(x(s), s)ds. (81) We use this formula recursively to obtain the Taylor series of the above system. Let ϕ(x(t), t) = x(t), then we have x(t) = x(0) + ∫ t 0 (L[x)(x(s), s)ds = x(0) + ∫ t 0 a(x(s), s)ds. (82) Let ϕ(x(t), t) = a(x(t), t), then we have a(x(t), t) = a(x(0), 0) + ∫ t 0 (L[a)(x(s), s)ds. (83) Using the above two
1. What is the focus of the paper regarding diffusion models?
2. What are the strengths and weaknesses of the proposed approach in improving sample efficiency?
3. Do you have any concerns or questions about the experimental results and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes an approach to improving the sample efficiency of diffusion models by incorporating high-order derivatives. As high-order derivatives are often expensive to compute, the authors propose an approximation. Empirically, the proposed approach is able to generate images using a small number of sampling steps.

Strengths And Weaknesses
Strengths: This work considers an important direction for diffusion models: since diffusion models are known for their inefficiency in sampling, improving the sampling speed is of great interest to the community. Using high-order derivatives to improve the sampling speed is an interesting and promising direction.

Weaknesses: The writing needs to be improved. There are many equations with parameters/notations that are not properly defined in Section 3.4. The main algorithms (Algorithms 1, 2) are not properly justified or explained with rigorous proof. The method formulation is hard to follow. Proposition 1 is not rigorously stated. For instance, what does "in many cases" mean? What does an "arbitrary vector" mean? What would be the dimension of the vector? What would be its domain? Although in the left panel of Figure 4 the proposed approach (Taylor 2nd, 3rd) achieves better performance than the baselines, the performance degrades after 20 steps. What is the reason for this? At the same time, the DDIM performance reported in Figures 2 and 4 is much worse than the one reported in the original DDIM paper: in the original paper, DDIM achieves an FID of 13.36 on CIFAR-10 and 17.33 on CelebA 64 using 10 steps, which is better than all of the proposed methods using 10 steps. However, in both Figure 2 and Figure 4, the reported performance for DDIM is much worse. My understanding is that this can be caused by using a different noise schedule. If that is the case, the comparison in Figures 2 and 4 might not be fair to the baselines. It seems that noise scheduling affects the performance. How do you select the noise scheduling parameters? It is hard to tell the difference between the samples from DDIM and the ones from the proposed method in Figure 1. At the same time, the samples seem to have a shifted color. If this is because of not having enough sampling steps, then it would be better to visualize higher-quality samples using more steps.

Clarity, Quality, Novelty And Reproducibility
Although this paper considers a very interesting direction that could potentially have novelty, the clarity of the writing and the quality of the experiments need to be improved.
ICLR
Title Quasi-Taylor Samplers for Diffusion Generative Models based on Ideal Derivatives Abstract Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless some sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on the numerical schemes derived by the familiar Taylor expansion, which directly solves the ODE/SDE of interest. In general, it is not easy to compute the derivatives that are required in higher-order Taylor schemes, but in the case of diffusion models, this difficulty is alleviated by the trick that the authors call “ideal derivative substitution,” in which the higher-order derivatives are replaced by tractable ones. To derive ideal derivatives, the authors argue the “single point approximation,” in which the true score function is approximated by a conditional one, holds in many cases, and considered the derivatives of this approximation. Applying thus obtained new quasi-Taylor samplers to image generation tasks, the authors experimentally confirmed that the proposed samplers could synthesize plausible images in small number of NFEs, and that the performance was better or at the same level as DDIM and Runge-Kutta methods. The paper also argues the relevance of the proposed samplers to the existing ones mentioned above. 1 INTRODUCTION Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iteratively refinement using the diffusion process has been rapidly gaining attention recently as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). The diffusion-based generative models have recently been showing impressive results in many fields including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while the diffusion models have good synthesis quality, it has been said that they have a fatal drawback that they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces the synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, it has been a common research question how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular. 
From this motivation, there have already been some studies aiming at reducing the NFEs (See § 2). Among these, Probability Flow ODE (PF-ODE) (Song et al., 2020b) enable efficient and deterministic sampling, and is gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of diffusion process. However, the ODE is eventually solved by using a black-box Runge-Kutta solver in the original paper, which requires several NFEs per step and is clearly costly. Another PF-ODE solver includes DDIM (Song et al., 2020a), and is also commonly used. It is certainly efficient and can generate plausible images. However, it was not originally formulated as a PF-ODE solver, and the relationship between DDIM and PF-ODE is not straightforward. From these motivations, we provide another sampler to solve the same ODE, which performs better than or on par with DDIM. The derivation outline is simple and intuitive: (1) consider the Taylor expansion of the given system, and (2) replace the derivatives in the Taylor series with appropriate functions; that’s all. The contribution of this paper would be as follows: (1) We propose novel samplers for diffusion models based on Taylor expansion of PF-ODE. They outperformed, or were on par with RungeKutta methods. (2) To derive our algorithms, we show that the derivatives of score function can be approximated by simple functions. We call this technique the ideal derivative substitution. (3) It has been known that the 1st order term of DDIM is same as the Euler method for PF-ODE. This paper gives further explanation for higher order terms of DDIM: we show that the proposed Quasi-Taylor method and DDIM are identical at least up to 3rd order terms. (4) The same idea can be naturally extended to derive a stochastic solver for a reverse-time SDE, which we call R-SDE in this paper. 2 BACKGROUND AND RELATED WORK Diffusion Process to draw a new data from a target density: Let us first briefly summarize the framework of the diffusion-based generative models. Following Song et al. (2020b), we describe the mechanisms using the language of continuous-time diffusion process for later convenience. Let us consider “particles” {xt} moving in a d-dim space obeying the following Itô diffusion, SDE: dxt = f(xt, t)dt+ g(xt, t)dBt, (1) where Bt is the d-dim Brownian motion whose temporal increments obeys the standard Gaussian. The drift f(·, ·) is d-dim vector, and the diffusion coefficient g(·, ·) is scalar. The SDE describes the microscopic dynamics of each particle. On the other hand, the “population” of the particles obeying the above SDE, i.e. density function p(xt, t | xs, s), (t > s), follows the following PDEs, which are known as Kolmogorov’s forward and backward equations (KFE and KBE); the former is also known as the Fokker-Planck equation (FPE), see § E.2, FPE: ∂tp(xt, t | xs, s) = −∇xt · f(xt, t)p(xt, t | xs, s) + ∆xt g(xt, t) 2 2 p(xt, t | xs, s), (2) KBE: −∂sp(xt, t | xs, s) = f(xs, s) · ∇xsp(xt, t | xs, s) + g(xs, s) 2 2 ∆xsp(xt, t | xs, s), (3) where ∆x := ∇x ·∇x is Laplacian. (FPE also holds for p(xt, t); consider the expectation Ep(xs,s)[·].) These PDEs enables us to understand the macroscopic behavior of the particle ensemble. For example, if f(x, t) = −∇U(x), g(x, t) = √ 2D, where U(x) a certain potential and D a constant, then we may verify that the stationary solution of FPE is p(x) ∝ e−U(x)/D. It means that we may draw a sample x that follows the stationary density by evolving the SDE over time. 
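As a concrete toy illustration of this sampling mechanism, the following NumPy sketch (not from the paper; the double-well potential U(x) = x⁴ − 2x² and all constants are arbitrary choices) simulates the SDE dx = −U′(x)dt + √(2D) dBt with a simple Euler-Maruyama discretization; the histogram of the resulting particles approaches p(x) ∝ e^(−U(x)/D).

import numpy as np

# Minimal sketch (assumptions: toy potential U(x) = x^4 - 2x^2, scalar particles).
def grad_U(x):
    return 4.0 * x**3 - 4.0 * x           # U'(x) for the double-well potential

def sample_stationary(n_particles=10_000, n_steps=5_000, h=1e-3, D=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)  # arbitrary initialization
    for _ in range(n_steps):
        noise = rng.standard_normal(n_particles)
        x = x - h * grad_U(x) + np.sqrt(2.0 * D * h) * noise   # Euler-Maruyama step
    return x                              # samples approximately follow exp(-U(x)/D)

samples = sample_stationary()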
This technique is often referred to as the Langevin Monte Carlo method (Rossky et al., 1978; Roberts & Tweedie, 1996). Some of the diffusion generative models are based on this framework, e.g. (Song & Ermon, 2019; 2020), in which the potential gradient∇U(x) is approximated by a neural network. Another systematic approach is considering the reverse-time dynamics (Song et al., 2020b). An approach is based on KBE eq. (3). Roughly speaking, FPE gives information about the future from the initial density, while KBE gives information about what the past states were likely to be from the terminal density. Here, instead of using KBE directly, it is useful to consider a variant of it which is transformed into the form of FPE, because it has an associated SDE that enables the particle-wise backward sampling (Stratonovich, 1965; Anderson, 1982); see also § E.3.2, R-FPE: −∂sp(xs, s | xt, t) = ∇xs · f̄(xs, s)p(xs, s | xt, t) + ∆xs ḡ(xs, s) 2 2 p(xs, s | xt, t) (4) R-SDE: dxs = −f̄(xs, s)(−ds) + ḡ(xs, s)dB̄s. (5) Hereafter, let g(xt, t) = g(t) for simplicity. Then the specific forms of drift and diffusion coefficients are written as follows, R-SDE coeffs: f̄(xt, t) = f̄](xt, t) := f(xt, t)− g(t)2∇xt log p(xt, t), ḡ(t) = g(t). (6) Starting from a certain random variable xT , then by evolving the R-SDE reverse in time, we may obtain a x̂0 which follows p(x0, 0 | xT , T ) (i.e. the solution of R-FPE eq. (4)). Therefore, if the initial density p(x0, 0) of the forward dynamics eq. (2) is the true density, then we may utilize this mechanism as a generative model to draw a new sample x̂0 from it. Another approach is based on FPE eq. (2). By formally eliminating the diffusion term of the FPE for the forward process, we can derive another backward FPE (see also § E.3.1). Being diffusionfree, the backward FPE yields a deterministic ODE, which is called the Probability Flow ODE (PF-ODE) (Song et al., 2020b), and is an example of neural ODEs (Chen et al., 2018). The population density obtained by evolving this system is exactly the same as the above R-SDE. PF-ODE coeffs: f̄(xt, t) = f̄[(xt, t) := f(xt, t)− 1 2 g(t)2∇xt log p(xt, t). ḡ(t) = 0. (7) Some extensions of this framework include as follows. Dockhorn et al. (2021) introduced the velocity variable considering the Hamiltonian dynamics. Another extension is the introduction of a conditioning parameter, and guidance techniques using it (Dhariwal & Nichol, 2021; Ho & Salimans, 2021; Choi et al., 2021) to promote the dynamics to go to a specific class of images, which has achieved remarkable results in text-to-image tasks (Nichol et al., 2021; Ramesh et al., 2022). Variance-Preserving Model (VP-SDE Model): The solution of unconditioned FPE is written as the convolution with the initial density p(x0, 0) and the fundamental solution, or the heat kernel, p(xt, t | x0, 0), which is the solution of the conditional FPE under the assumption that the initial density was delta function, p(x0, 0) = δ(x0−x∗0). Although it is still intractable to solve this problem in general, a well-known exception is the (time-dependent) Ornstein-Uhlenbeck (OU) process where f(xt, t) = − 12βtxt and g(xt, t) = √ βt. βt = β(t) is a non-negative continuous function. The specific form of diffusion coefficient βt has some options: a simplest one would be the linear function, and another would be the cosine schedule proposed in (Nichol & Dhariwal, 2021); see also § D. 
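In code, the two reverse-time drifts of eqs. (6) and (7) differ only in the weight placed on the score term. The following sketch (not from the paper's code; `score` is a placeholder for an approximation of ∇x log p(x, t), e.g. −Sθ(x, t)/√νt for a trained network) writes both drifts for the VP parameterization f(x, t) = −(βt/2)x, g(t) = √βt.

import numpy as np

# Minimal sketch (assumption: VP model, so g(t)^2 = beta_t).
def f_forward(x, t, beta):
    return -0.5 * beta(t) * x                                     # forward drift f(x, t)

def drift_rsde(x, t, beta, score):
    return f_forward(x, t, beta) - beta(t) * score(x, t)          # f̄] of eq. (6)

def drift_pfode(x, t, beta, score):
    return f_forward(x, t, beta) - 0.5 * beta(t) * score(x, t)    # f̄[ of eq. (7)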
In any cases, if it is the OU process, the heat kernel is simply written as follows, p(xt, t | x0, 0) = N (xt | √ 1− σ2t x0, σ2t I), where σ2t = 1− exp ( − ∫ t 0 βt′dt ′ ) . (8) Hereafter, we denote the noise variance by νt := σ2t . (In some literature, the signal level αt :=√ 1− σ2t is used as a basic parameter instead of the variance.) This model is referred to as the variance-preserving (VP) model by Song et al. (2020b). It has good properties such as the scale of data ‖xt‖2 is almost homogeneous, which is advantageous in neural models. However, the variance exploding (VE) model (Song et al., 2020b) in which the norm increases is also practicable, and the theory can be developed in a similar manner. Training Objective: In diffusion-based generative models, one estimates the score function ∇xt log p(xt, t) = ∇xt logEp(x0,0)[p(xt, t | x0, 0)] by a neural network Sθ(xt, t). This sort of learning has been referred to as the score matching (Hyvärinen & Dayan, 2005; Vincent, 2011). However, the exact evaluation of this training target is clearly intractable because of the expectation Ep(x0,0)[·], so it has been common to consider a Variational Bayesian surrogate loss; Ho & Salimans (2021) showed that the following loss function approximates the negative ELBO, L := E[‖−√νt∇xt log p(xt, t | x0, 0)− Sθ(xt, t)‖22] = E[‖xt− √ 1−νtx0√ νt − Sθ(xt, t)‖22] (9) = E[‖w − Sθ( √ 1− νtx0 + √ νtw, t)‖22], (10) where the expectation in eq. (10) is taken w.r.t. x0 ∼ D, w ∼ N (0, I), and t ∼ Uniform([0, T ]). Some variants of the score matching objectives are also studied. For example, Chen et al. (2020) reported that the L1 loss gave better results than the L2 loss in speech synthesis. Also, Kingma et al. (2021) argued that the weighted loss with SNR-based weights improves the performance. It should be noted that the above loss function will actually be very close to the ideal score matching loss function in practice, where the probability is not conditioned on x0, i.e., Lideal = E[‖− √ νt∇xt log p(xt, t)− Sθ(xt, t)‖22]. (11) This is because there almost always exists a point x0 on the data manifold such that∇xt log p(xt, t) ≈ ∇xt log p(xt, t | x0, 0) holds with very high accuracy in very high-dim cases, because of the wellknown “log-sum-exp ≈ max” law. For more details, see § 3.3 and § A. Sampling Schemes for R-SDE and PF-ODE: Thus obtained Sθ(xt, t) is expected to finely approximate −√νt∇xt log p(xt, t), and we may use it in eq. (5). One of the simplest numerical schemes for solving SDEs is the Euler-Maruyama method (Maruyama, 1955, Theorem. 1) as follows, and many diffusion generative models are actually using it. Euler-Maruyama: xt−h ← xt − hf̄](xt, t) + √ hg(t)w, where w ∼ N (0, I) (12) where h > 0 is the step size. The error of the Euler-Maruyama method is the order of O( √ h) in general, though it is actually O(h) in our case; this is because ∇xtg(t) = 0. As a better solver for the R-SDE, the Predictor-Corrector (PC)-based sampler was proposed in (Song et al., 2020b). The PC sampler outperformed the Predictor-only strategy, but it requires many NFEs in the correction process, so we will exclude it in our discussion. Another R-SDE solver is the one proposed by Jolicoeur-Martineau et al. (2021), whose NFE per refinement step is 2. On the other hand, there are also deterministic samplers for PF-ODE eqs. 
(5), (7) as follows, Euler: xt−h ← xt − hf̄[(xt, t) (13) Runge-Kutta: xt−h ← xt − h ∑m i=1 biki, where ki = f̄[(xt − h ∑i−1 j=1 aijkj , t− hci) (14) where {aij}, {bi}, {ci} are coefficients of the Runge-Kutta (RK) method (see § E.5). The error of the Euler method is O(h), and that of the RK method is O(hp), p ≤ m in general (Press et al., 2007, § 16). Another deterministic sampler is DDIM (Song et al., 2020a, Eq. (13)), and is also understood as a PF-ODE solver (Salimans & Ho, 2022). Its NFE per step is only 1, and is capable of efficiently generate samples. DDIM: xt−h ← αt−hαt xt + ( σt−h − αt−hαt σt ) Sθ(xt, t). (15) In addition, as a concurrent work as ours, Lu et al. (2022) proposed the DPM-solver, which is based on the Taylor expansion of PF-ODE. However, as the gradient is evaluated using several different points, the NFE per step is greater than 1 in general. Liu et al. (2022) proposed a sampler based on the linear multi-step method, in which the NFE/step is reduced to 1 except initial 3 steps. Another PF-ODE solver is the DEIS (Zhang & Chen, 2022) which is based on the exponential integrator with some non-trivial approximations such as the polynomial interpolation of score function. Other techniques that aimed to make sampling faster include as follows. Song & Ermon (2020) proposed a variety of techniques to accelerate the sampling. Watson et al. (2021) proposed a DP-based optimization method to tune noise schedules for faster sampling. Luhman & Luhman (2021) and Salimans & Ho (2022) proposed distilling the pretrained teacher model to a student model that can predict teacher’s several steps in a single step, which is efficient during the sampling but extra training for distillation is required. Bao et al. (2022a;b) derived some analytic expressions of reverse dynamics to enable faster sampling. 3 PROPOSED METHOD: QUASI-TAYLOR SAMPLERS 3.1 MOTIVATION: HIGHER-ORDER STRAIGHTFORWARD SOLVERS FOR R-SDE AND PF-ODE As mentioned above, DDIM already exists as an efficient solver for PF-ODE, but it can only be considered a PF-ODE solver up to first-order terms (Song et al., 2020a; Salimans & Ho, 2022), and it would not be clear enough whether it can be considered a higher-order solver for PF-ODE. Some other techniques (Lu et al., 2022; Liu et al., 2022; Zhang & Chen, 2022) were designed as higher-order PF-ODE solvers, though their derivations are rather sophisticated and less simple. Since PF-ODE and R-SDE provide the basis for the diffusion generative models, it would be beneficial to develop samplers that directly solve them through intuitive and straightforward arguments. From these motivations, we propose a simple but efficient sampler based on the Taylor expansion, a very basic technique that is familiar to many researchers and practitioners. In general, Taylor methods are not very popular as numerical schemes because they require higher-order derivatives, which are not always tractable. However, in diffusion models, the derivatives are easily and effectively evaluated, albeit approximately. The validity of this approximation requires some consideration (see § A, § B), but once accepted, an efficient sampler can be derived simply by substituting this approximation formula into the Taylor series. This section describes the details of the idea, and derives solvers for both PF-ODE and R-SDE. Entire sampling procedures are summarized in § F. 
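For reference, the two single-NFE baseline updates reviewed in § 2 can be written compactly as follows (a sketch, not from the paper's code; `s_theta` stands for the trained network Sθ(xt, t), and `nu`, `beta` are the schedule functions).

import numpy as np

# Minimal sketch of the baseline refinement steps for the VP model.
def euler_step(x, t, h, nu, beta, s_theta):
    # one Euler step for PF-ODE, eq. (13), with f̄[ written as in eq. (20)
    f_flat = -0.5 * beta(t) * x + 0.5 * beta(t) / np.sqrt(nu(t)) * s_theta(x, t)
    return x - h * f_flat

def ddim_step(x, t, h, nu, s_theta):
    # one DDIM step, eq. (15), with alpha_t = sqrt(1 - nu_t) and sigma_t = sqrt(nu_t)
    alpha_t, alpha_s = np.sqrt(1.0 - nu(t)), np.sqrt(1.0 - nu(t - h))
    sigma_t, sigma_s = np.sqrt(nu(t)), np.sqrt(nu(t - h))
    return (alpha_s / alpha_t) * x + (sigma_s - (alpha_s / alpha_t) * sigma_t) * s_theta(x, t)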
3.2 TAYLOR SCHEME FOR ODE AND ITÔ-TAYLOR SCHEME FOR SDE Taylor Scheme for Deterministic Systems For simplicity, we consider the 1-dim case here, but we can easily generalized it to multidimensional cases. (See § E.1.1.) Given a ODE ẋt = a(xt, t), where the function a is sufficiently smooth, then we can consider the Taylor expansion of it, using a differential operator L[ := ( ∂t + a(t, xt)∂xt ) . We can write the Taylor expansion of the path xt as follows. Ignoring o(hp) terms of the series, we obtain a numerical scheme of order p. xt+h = xt + ha(xt, t) + h2 2! L[a(xt, t) + h3 3! L2[a(xt, t) + · · · . (16) Itô-Taylor Scheme for Stochastic Systems In stochastic systems, the Taylor expansion requires modifications because of the relation E[dB2t ] = dt. If xt obeys a stochastic system dxt = a(xt, t)dt+ b(xt, t)dBt, then the path is written in a stochastic version of Taylor-like series, which is often called the Itô-Taylor expansion, a.k.a. Wagner-Platen expansion (Platen & Wagner, 1982);(Kloeden et al., 1994, § 2.3.B);(Särkkä & Solin, 2019, § 8.2). The Itô-Taylor expansion is based on the following differential operators L], G], which are based on Itô’s formula (Itô, 1944). L] := ∂t + a(x, t)∂x + 1 2 b(x, t)2∂2x, G] := b(x, t)∂x (17) In (Kloeden & Platen, 1992), a number of higher order numerical schemes for SDEs based on the Itô-Taylor expansion are presented. One of the simplest of them is as follows. See also § E.1.2. Theorem 1 (Kloeden & Platen (1992, § 14.2): An Itô-Taylor scheme of weak order β = 2). Let xt obeys the above SDE, and let the differential operators L], G] be given by eq. (17). Then, the following numerical scheme weakly converges with the order of β = 2 (see § E.4). Furthermore, in a special case where G2]b ≡ 0, the strong γ = 1.5 convergence is also guaranteed (Kloeden & Platen, 1992, § 10.4). xt+h ← xt + ha+ w̃tb+ w̃2t − h 2 G]b+ h2 2 L]a+ (w̃th− z̃t)L]b+ z̃tG]a (18) where w̃t = √ hwt, z̃t = h √ hzt are correlated Gaussian random variables, and wt, zt are given by wt = u1 and zt = 12u1 + 1 2 √ 3 u2, where u1, u2 ∼ N (0, 1) (i.i.d.). The notations a, L]a, etc. are the abbreviations for a(xt, t), (L]a)(xt, t), etc. 3.3 SINGLE POINT APPROXIMATION OF THE SCORE FUNCTION Before proceeding, let us introduce the single point approximation of score function that ∇xt log p(xt, t) almost certainly has a some point x0 on the data manifold such that the following approximation holds, ∇xt log p(xt, t) = ∇xt log ∫ p(xt, t | x0, 0)p(x0, 0)dx0 ≈ ∇xt log p(xt, t | x0, 0). (19) To date, this approximation has often been understood as a tractable variational surrogate. However, the error between the integral and the single point approximation is actually very small in practical scenarios. More specifically, the following facts can be shown under some assumptions. 1. The relative L2 distance between ∇xt log p(xt, t) and ∇xt log p(xt, t | x0, 0) is bounded above by √ (1− νt)/νt for any point x0 on the “data manifold” in practical scenarios. 2. When the noise level is low νt ≈ 0, and the data space is sufficiently high-dimensional, the distant points far from xt do not contribute to the integral. If the data manifold is locally a k-dim subspace of the entire d-dim data space, where 1 k d, then the relative L2 distance is bounded above by around 2 √ k/d. Of course, the single point approximation is not always valid. In fact, the approximation tends to break down when the noise level νt is around 0.9 (SNR = (1− νt)/νt is around 0.1). 
In this region, the single point approximation can deviates from the true gradient by about 20% in some cases. Conversely, however, it would be also said that the error is as small as this level even in the worst empirical cases. For more details on this approximation, see § A. 3.4 IDEAL DERIVATIVE SUBSTITUTION In order to adopt the above Taylor schemes to our problem setting where the base SDE is eq. (5), and f̄], f̄[ are given by eqs. (6), (7), we need to consider the following differential operators. Note that the time evolves backward in time in our case, the temporal derivative should be −∂t, L[ = −∂t − ( f̄[(xt, t) · ∇xt ) , L] = −∂t − ( f̄](xt, t) · ∇xt ) + βt 2 ∆xt , G] = √ βt (1 · ∇xt) , where f̄[(xt, t) = − βt 2 xt + βt 2 √ νt Sθ(xt, t), f̄](xt, t) = − βt 2 xt + βt√ νt Sθ(xt, t). (20) It is not easy in general to evaluate expressions involving such many derivatives. Indeed, for example, L[(−f̄[) has the derivatives of the learned score function, viz. ∂tSθ(xt, t) and (• · ∇xt)Sθ(xt, t), which are costly to evaluate exactly, whether in approaches based on finite differences (as in (Lu et al., 2022)), back-propagation, or the JAX paradigm (Bradbury et al., 2018), because they eventually require extra evaluation of a deeply nested function other than Sθ(xt, t), and extra memory consumption. Fortunately, however, by using the trick which the authors call the “ideal derivative substitution", we may write all of the derivatives as a simple combination of known values, only consisting of xt,Sθ(xt, t), νt, βt and derivatives of βt, and no extra computation is needed. Since the score function has a single point approximation eq. (19) we may assume that the derivatives should ideally hold following equalities. For derivation, see § B.1. Conjecture 1 (Ideal Derivatives). Under assumptions in § A — i.e. the data space Rd is sufficiently high dimensional d 1, the data manifoldM⊂ Rd is also sufficiently high dimensional but much smaller than the entire space (1 dimM d),M is bounded,M is sufficiently smooth locally, and the variance parameter νt is close to 0 or 1; — then it is likely that the following approximations hold, where a ∈ Rd is an arbitrary vector. We call them the “ideal derivatives”. (a · ∇xt)Sθ(xt, t) = 1√ νt a, −∂tSθ(xt, t) = − βt 2 √ νt ( xt − Sθ(xt, t)√ νt ) . (21) To confirm the accuracy of this approximation, we compared empirical and ideal derivatives using MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky, 2009). As a result, it was confirmed that the approximation of spatial derivative, i.e. (a · ∇), is usually very accurate; the cosine similarity between the empirical and ideal derivatives is nearly always > 0.99 (Figure 10). On the other hand, for the time derivative ∂t, it was confirmed that it is quite accurate when the time parameter t (and the variance νt) are small, but the error increases when the time parameter t (and the variance νt) become larger (Figure 9). See § B.2 for more details. 3.5 QUASI-TAYLOR AND QUASI-ITÔ-TAYLOR SCHEMES WITH IDEAL DERIVATIVES As we can see in § B.2, the ideal derivative approximation is sometimes very accurate while sometimes not. In any case, however, the error in the ideal derivative only affects the second or higher order terms of Taylor series, and it will not be the dominant error in the whole. 
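Written out as code, the ideal derivatives of Conjecture 1 are just two short formulas (a sketch, not from the paper's code; `s` denotes the network output Sθ(xt, t)).

import numpy as np

# Minimal sketch of eq. (21): the "ideal" spatial and time derivatives of S_theta.
def ideal_spatial_derivative(a, nu_t):
    # (a . grad_x) S_theta(x_t, t) ≈ a / sqrt(nu_t)
    return a / np.sqrt(nu_t)

def ideal_time_derivative(x, s, nu_t, beta_t):
    # d/dt S_theta(x_t, t) ≈ (beta_t / (2 sqrt(nu_t))) * (x_t - s / sqrt(nu_t))
    return beta_t / (2.0 * np.sqrt(nu_t)) * (x - s / np.sqrt(nu_t))

These are exactly the substitutions used in § 3.5 and in the symbolic computations of § B.3.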
As there is an overall correlation between the true and ideal derivatives, the advantages will outweigh the disadvantages on average, and we can regularly use this approximation on a speculative basis, even though there exist some cases where the approximation is not accurate. If we accept the ideal derivative approximation, we can formally compute the symbolic expressions for the derivatives L[(−f̄[), L](−f̄]), L](g), G](−f̄]) and G](g) that appear in the Taylor and ItôTaylor series by routine calculations, which can be easily automated by computer algebra systems such as SymPy (Meurer et al., 2017) as shown in § B.3. By substituting thus obtained symbolic expressions into the above Taylor series, we can derive Taylor schemes for both PF-ODE and R-SDE as follows. Algorithm 1 (Quasi-Taylor Sampler with Ideal Derivatives for PF-ODE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ [ t,hxt + µ [ t,hSθ(xt, t)/ √ νt,where (22) ρ[t,h = 1 + βth 2 + h2 4 ( β2t 2 − β̇t ) + h 3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + · · · , (23) µ[t,h = −βth2 + h 2 4 ( β̇t − β 2 t 2νt ) + h 3 4 ( β3t (−ν 2 t+3νt−3) 12ν2t + βtβ̇t2νt − β̈t 3 ) + · · · . (24) Using terms up to O(h2), the sampler will have 2nd-order convergence (henceforth referred to as Taylor 2nd), and using terms up to O(h3), the sampler will 3rd-order convergent (similarly, Taylor 3rd). If we use up to the O(h) terms, the algorithm is same as the Euler method. Algorithm 2 (Quasi-Itô-Taylor Sampler with Ideal Derivatives for R-SDE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ ] t,hxt + µ ] t,hSθ(xt, t)/ √ νt + n ] t,h,where (25) ρ]t,h = 1 + βt 2 h+ h2 4 ( β2t 2 − β̇t ) , µ]t,h = −βth+ β̇th 2 2 , (26) n]t,h = √ βt √ hwt + h 3/2 ( − β̇t 2 √ βt (wt − zt) + β 3/2 t (νt−2) 2νt zτ ) . (27) The Gaussian variables wt and zt have dimension-wise correlations, and each dimension is sampled similarly to Theorem 1. Computation Cost: At first glance, these algorithms may appear to be very complex. However, the computational complexity hardly increases compared to the Euler or Euler-Maruyama methods, because almost all of the computational cost is accounted for by the neural network Sθ(xt, t), and the costs for scalar values ρ•t,h, µ • t,h and noise generation n ] t,h are almost negligible. It should also be noted that these scalar values can be pre-computed and stored in the memory before synthesis. Thus the computational complexity of these methods are practically equal to Euler, Euler-Maruyama, and DDIM methods. Error from the Exact Solution of PF-ODE: The numerical error of the Quasi-Taylor method from the exact solution increases depending on the following factors: (1) The truncation error of the Taylor series in each step, i.e. O(hp+1), (2) The number of the steps i.e. O(1/h), (3) The training and generalization error of the score function, i.e. ≈ L, and (4) The average error between the true and ideal derivatives of the score function =: ‖δ‖. If the factors 3 and 4 could be zero, then the numerical error is the order of O(hp). Otherwise, the expected numerical error is roughly evaluated as follows, error = O ( h−1(hL+ h2(L+ ‖δ‖) + h3(L+ ‖δ‖) + · · ·+ hp+1) ) = O ( L+ h(L+ ‖δ‖) + h2(L+ ‖δ‖) + · · ·+ hp ) . (28) That is, the error of Euler method is O(L+ h), the Heun method (2nd order Runge-Kutta) will be O(L+hL+h2), and the Taylor-2nd method is O(L+h(L+‖δ‖)+h2). 
As long as L, ‖δ‖ > 0, the predominant O(h) term will not disappear. Therefore, the overall order of the error will not decrease even if we increase the order of Taylor series greater than p ≥ 3. Nevertheless, beyond such an order evaluation, specific coefficients in higher order terms can still affect the performance, which should be validated empirically. 4 IMAGE SYNTHESIS EXPERIMENT Experimental Configuration: In this section, we conduct experiments to verify the effectiveness of the methods developed in this paper. Specifically, we compare the performance of the Euler scheme eq. (13), Taylor 2nd & Taylor 3rd (Alg. 1), DDIM (Song et al., 2020a), and the Runge Kutta methods (Heun and RK4 § E.5; these are less efficient than others because of NFEs per step) for PF-ODE, as well as the Euler-Maruyama scheme eq. (12) and Itô-Taylor (Alg. 2) for R-SDE. The datasets we used were CIFAR-10 (32× 32) (Krizhevsky, 2009) and CelebA (64× 64) (Liu et al., 2015). The network structure was not novel but was based on an existing open source implementation; we used the “NCSN++” implemented in the official PyTorch code by Song et al. (2020b). The network consisted of 4 levels of resolution, with the feature dimension of each level being 128 → 128 → 256→ 256→ 256. Each level consisted of BigGAN-type ResBlocks, and the number of ResBlocks in each level was 8 (CIFAR-10) and 4 (CelebA). The loss function we used was the unweighted L2 loss similarly to (Ho et al., 2020). The optimizer was Adam (Kingma & Ba, 2014). The machine used for training was an in-house Linux server dedicated to medium-scale machine learning training with four GPUs (NVIDIA Tesla V100). The batch size was 256. The number of training steps was 0.1 M steps, and the training took about a day for each dataset. The noising schedule was also the same as the existing one, the default configuration of VP-SDE (Song et al., 2020b): βt = 0.1 + 19.9t and νt = 1− exp(−0.1t−9.95t2) eq. (76). The integration duration was T = 1, and the step size h was constant, i.e. h = T/N where N is the number of refinement steps. As a quality assessment metric, we used the Fréchet Inception Distance (FID) (Heusel et al., 2017). To evaluate FIDs, we used the pretrained Inception v3 checkpoint (Szegedy et al., 2016), and resized all images to 299× 299× 3 by bilinear interpolation before feeding them to the Inception network. For each condition, 10,000 images were randomly generated to compute the FID score. Note that in this experiment, the computational resources for training were limited, and training was stopped before it fully converged (only 0.1 M steps, while in some other papers the number of training steps was e.g. 1.3 M steps in (Song et al., 2020b)). Therefore, it would be necessary to observe relative comparisons between samplers rather than directly comparing these FID value to those presented in other papers. Results: Figure 1 and Figure 2 show random samples for each sampler. More examples are available in § G. The deterministic samplers considered in this paper generated plausible images much faster than the vanilla Euler-Maruyama sampler. Figure 3a and Figure 3b reports the FID scores. From these figures, the following observations can be made. First, the proposed Quasi-Taylor methods have about the same or slightly better than DDIM. The reason for this is discussed in the next section § 5. We also found that the Runge-Kutta methods reduces FID in fewer steps overall. However, they also hit bottom faster. 
This may be due to the effect of the singularity at the time origin (see § D) in the final step. (This can be seen in Figure 16. In the second right column, the Runge-Kutta methods produce images similar to the other deterministic samplers, but the rightmost ones seem to be slightly noisier than the others). Even though the ideal derivatives are only approximations and contain some errors, the convergence destinations of Quasi-Taylor methods were almost the same as the Runge-Kutta methods. This suggests that the error in the ideal derivatives is actually hardly a problem, because in regions where the approximation error is large, the state xt is noisy to begin with (e.g. left 2/3 figures in Figure 16), and the approximation error is negligible compared to the noise that was originally there. The proposed stochastic sampler (Itô-Taylor) also showed sufficiently competitive results, in terms of both FID scores and visual impression. Comparison of the figures in § G (e.g. Figure 21) confirms that the Itô-Taylor method empirically reaches almost the same target as Euler-Maruyama method much more accurately, and it could be expected to be a safe alternative to Euler-Maruyama method when stochastic sampling is important. 5 DISCUSSION: RELATIONSHIP WITH DDIM In the above experiment, the performance of the proposed Quasi-Taylor methods are found to be almost equivalent to that of DDIM. In fact, despite having distinctly different derivation logics, the proposed method and DDIM actually agree, at least up to the 3rd order terms of h. Therefore, it is not surprising the results are similar; and the smaller h is, the closer the results are. This can be quickly verified by doing a Taylor expansion of the coefficients of eq. (15), i.e., αt−hαt and (σt−h − αt−h αt σt), w.r.t. h. Although it is tedious to perform this calculation by hand, the computer algebra systems e.g. SymPy immediately calculate it. For this computation, see § C. This finding that truncating DDIM at the 2nd or 3rd order of h yields exactly the same algorithms as the proposed Quasi-Taylor methods may be a useful insight for DDIM users, even if it does not lead them to switch the regular sampler from DDIM to Quasi-Taylor. That is, it offers an option of truncating the higher-order terms of DDIM. 6 CONCLUDING REMARKS This paper proposed a Taylor-expansion approach for diffusion generative models, particularly the Probability Flow ODE (PF-ODE) and the reverse-time SDE (R-SDE) solvers. The assumptions to derive our sampler were minimalistic, and the derivation process was straightforward. We just substituted the derivatives in the Taylor series by ideal ones. The obtained Quasi-Taylor and Quasi-Itô-Taylor samplers performed better than or on par with DDIM and Runge-Kutta methods. This fact implicitly supports the validity of our approximations. Conversely, if we could find some examples where the Quasi-Taylor methods, DDIM and RK methods gave decisively different results, we might be able to gain a deeper understanding of the structure of data manifold and the fundamentals of diffusion models by investigating the causes of discrepancy. Reproducibility Statement Pseudocodes of the proposed methods are available in § F, and the derivation of the proposed method is described in § B.1, § B.3. The experiment is based on open source code with minimal modifications to match the proposed method, and all the data used in this paper are publicly available. Experimental conditions are elaborated in § 4. 
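To complement the pseudocode in § F, the following is a minimal self-contained sketch of the deterministic sampler of Algorithm 1 truncated at 2nd order (not the paper's released code; it assumes the linear schedule with β0 = 0.1, β1 = 9.95 from § 4, and `s_theta` stands for a trained network).

import numpy as np

# Minimal sketch of Algorithm 1 (Quasi-Taylor, 2nd order) for the linear schedule.
beta0, beta1 = 0.1, 9.95

def beta(t):     return beta0 + 2.0 * beta1 * t
def beta_dot(t): return 2.0 * beta1
def nu(t):       return 1.0 - np.exp(-beta0 * t - beta1 * t * t)

def quasi_taylor_coeffs(t, h):
    b, bd, v = beta(t), beta_dot(t), nu(t)
    rho = 1.0 + b * h / 2.0 + (h * h / 4.0) * (b * b / 2.0 - bd)     # eq. (23) up to O(h^2)
    mu  = -b * h / 2.0 + (h * h / 4.0) * (bd - b * b / (2.0 * v))    # eq. (24) up to O(h^2)
    return rho, mu

def quasi_taylor_sample(s_theta, shape, n_steps=20, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)           # x_T ~ N(0, I)
    h = T / n_steps
    for k in range(n_steps):
        t = T - k * h                        # schedule is only evaluated at t >= h,
        rho, mu = quasi_taylor_coeffs(t, h)  # so the t = 0 singularity of § D is avoided
        x = rho * x + mu * s_theta(x, t) / np.sqrt(nu(t))            # eq. (22)
    return x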
Ethics Statement As a final note, negative aspects of generative models are generally pointed out, such as the risk of reproducing bias and discrimination in training data and the risk of being misused for deep fakes. Since this method only provides a solution to existing generative models, it does not take special measures against these problems. Maximum ethical care should be taken in the practical application of this method. A.3 COMPARISON OF THE EMPIRICAL SCORE FUNCTION AND THE SINGLE POINT APPROXIMATION Let us empirically validate the accuracy of single point approximation using real data as follows, • D = {MNIST (LeCun et al., 2010) 60,000 samples}, • D = {CIFAR-10 (Krizhevsky, 2009) 50,000 samples}. Since the true score function cannot be determined without knowing the true density (which will be possible with synthetic data, but discussing such data will not be very interesting here), the empirical score function was calculated using the real data D above as follows, True Score = ∇ log p(xt, t) = Ep(x0)[q(x0 | xt)∇ log p(xt, t | x0, 0)] ≈ 1|D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)] =: Empirical Score. (45) The evaluation of empirical score function using the entire dataset is unrealistic if the dataset D is large, but it is feasible if D is a small dataset like MNIST and CIFAR-10. In order to evaluate the accuracy of single point approximation, we evaluated following three metrics. • Relative L2 error between the empirical score function and∇ log p(xt, t | x0, 0), • Cosine similarity between the empirical score function and∇ log p(xt, t | x0, 0), • Entropy of q(x0 | xt). Figure 6 shows the relative L2 distance, for both datasets. Figure 7 similarly show the distribution (random 10,000 trials) of the cosine similarity, and Figure 8 shows the entropy. Dashed curves indicate the bounds evaluated in eq. (31) and eq. (32). These figures show that the range of intermediate region between Phase (1) and Phase (2) will not have impact in practical situations since we do not evaluate the neural network Sθ(·, ·) in this range so many times (i.e., ᾱt ∼ 10−3 to 10−1 ⇔ νt ∼ 0.999 to 0.9). Moreover, the approximation accuracy is still very high even in this region. Furthermore, although MNIST and CIFAR-10 are quite “low-dimensional” for real-world images, approximations are established with such high accuracy. Therefore, it is expected to be established with higher accuracy for more realistic images. B ON THE IDEAL DERIVATIVE APPROXIMATION Thus, we can assume that the single point approximation almost always holds practically. −Sθ(xt, t)√ νt model≈ ∇xt log p(xt, t) almost equal≈ ∇xt log p(xt, t | x(i)0 , 0) = − xt − √ 1− νtx(i)0 νt . Therefore, we may also expect that the similar approximation will be valid for their derivatives. Of course, strictly speaking, such an expectation is mathematically incorrect. For example, let g(x) = f(x) + ε sinωx, then the difference g(x) − f(x) = ε sinωx goes to zero as ε → 0, but the difference of derivatives g′(x)− f ′(x) = εω cosωx does not if ω →∞ faster than 1/ε. If the error between them in the Fourier domain is written as E(ω) = G(ω) − F (ω), then the L2 error between the derivatives is ‖g′(x) − f ′(x)‖22 = ‖ωE(ω)‖22 × const (Parseval’s theorem). In other words, the single point approximation does not necessarily imply the ideal derivative approximation. If it is to be mathematically rigorous, it must be supported by other nontrivial knowledge on the data manifold. 
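The single point approximation itself can be probed with a few lines of NumPy (a sketch, not the paper's evaluation code; the toy Gaussian "dataset" is an arbitrary stand-in for MNIST or CIFAR-10).

import numpy as np

# Minimal sketch of eq. (45): empirical score vs. single-point (conditional) score.
def empirical_score(x_t, nu_t, data):
    a = np.sqrt(1.0 - nu_t)
    diffs = x_t[None, :] - a * data                      # (N, d)
    log_w = -np.sum(diffs**2, axis=1) / (2.0 * nu_t)     # log p(x_t | x0, 0) up to a constant
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                         # posterior weights q(x0 | x_t)
    cond_scores = -diffs / nu_t                          # grad log p(x_t, t | x0, 0)
    return (w[:, None] * cond_scores).sum(axis=0)

def single_point_score(x_t, nu_t, x0):
    return -(x_t - np.sqrt(1.0 - nu_t) * x0) / nu_t

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 16))                   # toy dataset (MNIST/CIFAR-10 in the paper)
x0 = data[0]
nu_t = 0.5
x_t = np.sqrt(1.0 - nu_t) * x0 + np.sqrt(nu_t) * rng.standard_normal(16)
s_emp, s_one = empirical_score(x_t, nu_t, data), single_point_score(x_t, nu_t, x0)
cos = s_emp @ s_one / (np.linalg.norm(s_emp) * np.linalg.norm(s_one))
print("cosine similarity:", cos)

Note that this only probes the score values themselves; whether the derivatives also match is a separate question, addressed empirically in § B.2.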
This nontrivial leap is the most important “conjecture” made in this paper and its theoretical background should be more closely evaluated in the future. B.1 DERIVATION OF THE “IDEAL DERIVATIVES” Because of the discussion in § A, the true score function ∇xt log p(xt, t) is finely approximated by a single point approximation ∇xt log p(xt, t | x0, 0). Now we may also assume that the derivatives of both will also be close. In this paper, we are interested in the Taylor expansion of the following form (see also § E.1.1), ψ(xh, h) = ψ(x0, 0) + ∞∑ k=1 hk k! (∂t + a(xt, t) · ∇xt)k ψ(xt, t) ∣∣∣∣ t=0 . (46) If the function ψ(xt, t) is separable in each dimension (i.e., ∂xiψj = 0 for i 6= j), the following relation holds, (a(xt, t) · ∇xt)ψ(xt, t) = a(xt, t) ∇xt ψ(xt, t), (47) where is the element-wise product or operation. If a(xt, t) is also separable in each dimension4 the Taylor series is formally rewritten as follows, ψ(xt, t) = ψ(x0, 0) + ∞∑ k=1 tk k! ( 1∂t + a(xt, t) ∂xt )k ψ(xt, t) ∣∣∣∣ t=0 (48) where ∂xt := ∇xt is the element-wise derivative operator. This is formally the same as the 1-dim Taylor series. Therefore, it is sufficient to consider the 1-dim Taylor series first, and parallelize each dimension later. Thus the derivatives we actually need are the following two. ∂xtSθ(xt, t) = ∇xt Sθ(xt, t), ∂tSθ(xt, t) = (1∂t) Sθ(xt, t). (49) B.1.1 SPATIAL DERIVATIVE ∂xtSθ(xt, t) := ∇xt Sθ(xt, t) Let us first compute the spatial derivative of the conditional score function. (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = (∑ i ai∂xti ) xt − √ 1− νtx0√ νt 4In general, (a · ∇)2 = ( ∑ i ai∂i) 2 = ( ∑ i ai∂i)( ∑ j aj∂j) = ∑ i ai ∑ j(∂iaj + aj∂i∂j). If a is separable in each dimension, the ∂iaj(i 6= j) terms vanish, and (a · ∇)2 = ∑ i(ai∂iai + ∑ j aiaj∂i∂j). If the function ψ(xt, t) is separable in each dimension, then (a · ∇)2ψk = ∑ i(ai∂iai + ∑ j aiaj∂i∂j)ψk = (ak∂kak + a 2 k∂ 2 k)ψk. Thus we can formally write (a · ∇)2ψ = (a ∇ a + a a ∇ ∇) ψ = a (∇ a+ a ∇ ∇) ψ = a ∇ (a ∇) ψ = (a ∇ )2ψ = (a ∂x)2ψ. (Note that the operator (a · ∇) is scalar while (a ∂x) is d-dim vector.) We can similarly show (a · ∇)kψ = (a ∂x)kψ for k ≥ 3. = 1√ νt (∑ i ai∂xti ) (xt − √ 1− νtx0)1 ...(∑ i ai∂xti ) (xt − √ 1− νtx0)d = 1√ νt (∑ i ai∂xti ) (xt 1 −√1− νtx01) ...(∑ i ai∂xti ) (xt d −√1− νtx0d) = 1√ νt ( a1∂xt1 ) (xt 1 −√1− νtx01) ...( ad∂xtd ) (xt d −√1− νtx0d) = 1√ νt a1... ad = 1√ νt a = a 1√ νt 1. (50) Here, we used the notation xti to denotes the i-th component of a vector xt. Note that up to this point in the discussion, there have been no approximations, but strict ones. Now let us consider the approximation. Because of the single point approximation, we may assume that the derivative of the integrated score function will also be approximated by the derivative of the conditional score function, i.e., (a · ∇xt)(− √ νt∇xt log p(xt, t)) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)). (51) As the neural network Sθ(xt, t) is trained so that it approximates the integrated score function, we can also assume the following relation, (a · ∇xt)Sθ(xt, t) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = 1√ νt a. (52) Thus we have obtained the ideal spatial derivative of the neural network. We can also formally write the spatial derivative as follows using the above notation, a (∂xtSθ(xt, t)) = a 1√ νt 1. (53) We can also write it as ∂xtSθ(xt, t) = 1√ νt 1. (54) B.1.2 TIME DERIVATIVE −∂tSθ(xt, t) Next, let us compute −∂t(− √ νt∇xt log p(xt, t | x0, 0)). 
During the computation, x0 is replaced by the relation x0 = 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) . (55) We also use the following relations between νt, βt, which is immediately obtained from the definition of νt, ν̇t = (1− νt)βt. (56) Using the above information, we may compute the temporal derivative of the conditional score function as follows. − ∂t(− √ νt∇xt log p(xt, t | x0, 0)) = −∂t xt − √ 1− νtx0√ νt = − 1√ νt ( 1 2 ν̇t(1− νt)−1/2x0 ) − (xt − √ 1− νtx0) ( −1 2 ν̇tν −3/2 t ) = − ν̇t 2ν 3/2 t ( νt√ 1− νt x0 − (xt − √ 1− νtx0) ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt x0 ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) ) = − ν̇t 2ν 3/2 t (( −1 + 1 1− νt ) xt + 1 1− νt (νt∇xt log p(xt, t | x0, 0)) ) = − 1 2ν 3/2 t ν̇t 1− νt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − 1 2ν 3/2 t βt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) . (57) (Note that this calculation is exact, and no approximation is injected.) Because of the single point approximation, we may assume −∂t(− √ νt∇xt log p(xt, t)) ≈ −∂t(− √ νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) ≈ − βt 2 √ νt (xt +∇xt log p(xt, t)) , (58) and therefore, we can also assume that the temporal derivative of the neural network is approximated as −∂tSθ(xt, t) ≈ − βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) . (59) The “derivatives" have some good points. For example, the partial derivatives commute, ∂xt∂tSθ(xt, t) = ∂t∂xtSθ(xt, t). (60) B.2 COMPARISON OF THE EMPIRICAL SCORE DERIVATIVES AND IDEAL DERIVATIVES Let us empirically validate that idela approximation using real data similarly as above. However, since the equations will become very complicated if we evaluate the exact empirical score derivatives, we instead used finite differences as the ground truths. That is, let S(x, t) be the routine that computes the empirical score function as follows, S(x, t) = − √ νt |D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)], (61) and we evaluated the empirical score derivatives by the finite differences as follows5, Empirical t Deriv: ∂tS ≈ S(xt, t+ ε)− S(xt, t) ε (62) Empirical xt Deriv: (a · ∇xt)S ≈ S(xt + εa, t)− S(xt, t) ε , where a ∼ N (0, I). (63) where ε should be a sufficiently small value, and we used ε = 10−3 here. We compared these empirical derivatives with the ideal derivatives using MNIST and CIFAR-10. Ideal t Deriv: ∂tSθ = βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) = βt 2 √ νt ( xt − xt − √ 1− νtx0 νt ) Ideal xt Deriv: (a · ∇xt)Sθ = 1√ νt a As the ideal derivatives require the specific function forms of diffusion and variance schedules, we tested on following two noise schedules. Linear schedule We first tested on the linear schedule eq. (76), where β0 = 0.1 and β1 = 9.95. This is the same schedule as the one used in the main text. Figure 9 shows the relativeL2 error and the cosine similarity between the ideal t derivative eq. (21) and the empirical t derivative eq. (62), in which it is observed that they are very close when 0 / t / 0.5, while the approximation accuracy decreases as t increases. However, even in that case, there tends to be an overall positive correlation. It can also be observed that there is an error that seems to originate from the singularity of time origin when t ≈ 0. (See also § D.2.) For the x derivative (Figure 9), on the other hand, we can confirm that the errors between the ideal x derivative eq. (21) and empirical x derivative eq. (62) are generally very highly correlated, except around t ≈ 0.5. 
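The finite-difference comparison of eqs. (62)-(63) can be reproduced with a small helper like the following (a sketch, not the paper's code; `S` is any callable approximating −√νt ∇x log p(xt, t), e.g. a trained network or the empirical score routine of § A.3).

import numpy as np

# Minimal sketch: finite-difference derivatives of S(x, t) vs. the ideal derivatives of eq. (21).
def derivative_check(S, x, t, nu, beta, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(x.shape)
    emp_x = (S(x + eps * a, t) - S(x, t)) / eps               # empirical (a . grad_x) S, eq. (63)
    emp_t = (S(x, t + eps) - S(x, t)) / eps                   # empirical d/dt S, eq. (62)
    ideal_x = a / np.sqrt(nu(t))                              # ideal spatial derivative
    ideal_t = beta(t) / (2.0 * np.sqrt(nu(t))) * (x - S(x, t) / np.sqrt(nu(t)))  # ideal time derivative
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return cos(emp_x, ideal_x), cos(emp_t, ideal_t)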
Modified tanh schedule We also tested on another noise schedule, the modified tanh schedule eq. (79) which does not have the singularity at the time origin. The parameters A, k were determined so that ν0 = 0.001 and ν1 = 0.999. Figure 11 and Figure 12 show the results. In this case, the overall trend is similar to the linear schedule, but we can observe that the singularity of the time origin of the t derivative is eliminated. 5To verify the empirical xt derivative, let us consider a simple case of three-variable function f(x, y, z). As its total derivative is df = ∂xfdx + ∂yfdy + ∂zfdz, we have f(x + a, y + b, z + c) − f(x, y, z) = (a∂x + b∂y + c∂z)f(x, y, z) for small a, b, c. Let a = εa′, b = εb′ and c = εc′, then f(x + εa′, y + εb′, z + εc′)− f(x, y, z) = ε(a′∂x + b′∂y + c′∂z)f(x, y, z). Therefore, we can write the spatial derivative as (a′∂x + b ′∂y + c ′∂z)f(x, y, z) = limε→0 1 ε (f(x+ εa′, y + εb′, z + εc′)− f(x, y, z)). B.3 THE DERIVATIVES L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) The computation of the derivative L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) does not require any particular nontrivial process. All we have to do is rewrite a term every time we encounter a derivative of Sθ(xt, t) or νt, and the rest is at the level of elementary exercises in introductory calculus. To execute this symbolic computation, the use of computer algebra systems will be a good option. It should be noted, however, that some implementation tricks to process such custom derivatives are required (in other words, the term-rewriting system should be customized). The results are shown below. Although these expressions appear complex at first glance, the code generation system can automatically generate code for such expressions. L[(−f̄[)(xt, t) = ( β2t 4 − β̇t 2 ) xt + ( β̇t 2 √ νt − β 2 t 4ν 3/2 t ) Sθ(xt, t) (64) L](−f̄])(xt, t) = ( β2t 4 − β̇t 2 ) xt + β̇t√ νt Sθ(xt, t) (65) G](−f̄])(xt, t) = ( 1 2 − 1 νt ) β 3/2 t (66) L]g(t) = − β̇t 2 √ βt (67) G]g(t) = 0. (68) We may also compute higher order derivatives, though we do not use them in this paper except L[L[(−f̄[), L[L[(−f̄[)(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt + ( β3t (−ν2t + 3νt − 3) 8ν 5/2 t + 3βtβ̇t 4ν 3/2 t − β̈t 2 √ νt ) Sθ(xt, t) (69) L]L](−f̄])(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt − β3t + 4β̈t 4 √ νt Sθ(xt, t) L]G](−f̄])(xt, t) = √ βt ν2t ( νt(2β 2 t + 3β̇t) 2 − β2t − 3ν2t β̇t 4 ) G]L](−f̄])(xt, t) = √ βt ( β2t 4 − β̇t 2 + β̇t νt ) G]G](−f̄])(xt, t) = 0 L]L]g(t) = 2βtβ̈t − β̇2t 4β 3/2 t L]G]g(t) = 0 G]L]g(t) = 0 G]G]g(t) = 0. As we can see, no factors other than integers, xt, Sθ(xt, t), νt, βt and derivatives of βt appear. This is also true for higher order derivatives, which can be easily shown. SymPy Code Snippet for Automatic Symbolic Computation of Derivatives The following code snippet is a minimalistic example of SymPy code to compute the above derivatives using the customized derivative method. We used SymPy 1.11 to test the following code snippet. 
from sympy import Function, symbols, sqrt, simplify

x, t = symbols('x t')    # x, t
B = Function('beta')     # βt

# define customized derivatives of νt
class nu(Function):
    def fdiff(self, argindex=1):
        t, = self.args
        return (1 - nu(t)) * B(t)    # ν̇t = (1 − νt)βt

# define customized derivatives of Sθ(x, t)
class S_theta(Function):
    def fdiff(self, argindex=1):
        x, t = self.args
        if argindex == 1:    # ∂/∂x
            d = 1 / sqrt(nu(t))
        elif argindex == 2:  # ∂/∂t
            d = (x - S_theta(x, t) / sqrt(nu(t))) * B(t) / (2 * sqrt(nu(t)))
        return d

# define f̄[
class f_flat(Function):
    @classmethod
    def eval(cls, x, t):
        return -B(t) * x / 2 + S_theta(x, t) * B(t) / (2 * sqrt(nu(t)))

# define differential operator L[
class L_flat(Function):
    @classmethod
    def eval(cls, fxt):
        return -fxt.diff(t) - f_flat(x, t) * fxt.diff(x)

# show each derivative
print(f_flat(x, t))
print(simplify(L_flat(f_flat(x, t))))           # L[ f̄[(xt, t); see eq. (64)
print(simplify(L_flat(L_flat(f_flat(x, t)))))   # L[L[ f̄[(xt, t); see eq. (69)
# we can similarly define f̄], L], G] and compute other derivatives.

The result will look like

[Out 1]  −x β(t)/2 + Sθ(x, t) β(t)/(2√ν(t))

[Out 2]  −x β²(t)/4 + x β̇(t)/2 + Sθ(x, t) β²(t)/(4 ν(t)^{3/2}) − Sθ(x, t) β̇(t)/(2√ν(t))

[Out 3]  −x β³(t)/8 + 3x β(t)β̇(t)/4 − x β̈(t)/2 + Sθ(x, t) β³(t)/(8√ν(t)) − 3Sθ(x, t) β³(t)/(8 ν(t)^{3/2}) + 3Sθ(x, t) β³(t)/(8 ν(t)^{5/2}) − 3Sθ(x, t) β(t)β̇(t)/(4 ν(t)^{3/2}) + Sθ(x, t) β̈(t)/(2√ν(t))

and so on. Some additional coding techniques can further improve the readability of these expressions, but there is no need to go any deeper into such subsidiary issues here. The symbolic expressions obtained in this way can be automatically converted into executable code in practical programming languages including Python and C++ using a code generator, though the authors hand-coded the obtained expressions in Python for the experiments in this paper.

C TRUNCATED DDIM IS EQUIVALENT TO THE QUASI-TAYLOR SAMPLER

Using SymPy, we can easily compute the Taylor expansion of a given function. For example, the code sympy.series(B(t + h), h, 0, 4) yields a result like

β(t) + h β̇(t) + (h²/2) β̈(t) + (h³/6) d³β(t)/dt³ + O(h⁴).

Similarly, using the relation ν̇t = (1 − νt)βt, we can compute the Taylor expansion of νt−h with sympy.series(nu(t - h), h, 0, 3), which gives

νt−h = ν(t) + h (β(t)ν(t) − β(t)) + h² ( β²(t)ν(t)/2 − β²(t)/2 − ν(t)β̇(t)/2 + β̇(t)/2 ) + O(h³).

Using this functionality of SymPy, we can easily compute the Taylor expansion of DDIM (Song et al., 2020a). Recall that the DDIM algorithm is given by eq. (15); using our notation α = √(1 − ν) and σ = √ν, it can be written as

DDIM: xt−h ← ρDDIM(t, h) xt + µDDIM(t, h) Sθ(xt, t),
where ρDDIM(t, h) := √((1 − νt−h)/(1 − νt)) and µDDIM(t, h) := √νt−h − √((1 − νt−h)/(1 − νt)) √νt.

Then, using SymPy, the Taylor expansions of ρDDIM(t, h) and µDDIM(t, h) are computed as follows,

ρDDIM(t, h) = 1 + (βt/2) h + (h²/4)(βt²/2 − β̇t) + (h³/4)(βt³/12 − βtβ̇t/2 + β̈t/3) + o(h³),   (70)

√νt µDDIM(t, h) = −(βt/2) h + (h²/4)(β̇t − βt²/(2νt)) + (h³/4)(−βt³/12 + βt³/(4νt) − βt³/(4νt²) + βtβ̇t/(2νt) − β̈t/3) + o(h³).   (71)

Although it has been known that DDIM corresponds to the Euler method up to 1st-order terms (Song et al., 2020a; Salimans & Ho, 2022), this expansion gives a better understanding of the higher-order terms. That is, these coefficients are exactly equivalent to those of our deterministic Quasi-Taylor sampler, eq. (23) and eq. (24), up to 3rd-order terms.
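The expansion above can be reproduced directly (a sketch, not from the paper's code; it reuses the customized-derivative trick of the snippet in § B.3, and the printed output may need further simplification before it can be compared term by term with eqs. (23)-(24)).

from sympy import Function, symbols, sqrt, series, simplify

t, h = symbols('t h')
B = Function('beta')

class nu(Function):                      # same customized derivative as above: ν̇ = (1 − ν)β
    def fdiff(self, argindex=1):
        s, = self.args
        return (1 - nu(s)) * B(s)

# DDIM coefficients in the notation of Appendix C
rho_ddim = sqrt((1 - nu(t - h)) / (1 - nu(t)))
mu_ddim  = sqrt(nu(t - h)) - rho_ddim * sqrt(nu(t))

# Taylor-expand in h; the O(h), O(h^2), O(h^3) terms should match eqs. (23) and (24)
print(simplify(series(rho_ddim, h, 0, 4)))
print(simplify(series(sqrt(nu(t)) * mu_ddim, h, 0, 4)))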
This fact may suggest that the assumptions behind the DDIM derivation will be logically equivalent to our assumptions of ideal derivatives. The advantage of the proposed Quasi-Taylor method is that we can decide the hyperparameter at which order the Taylor expansion is truncated. On the other hand, DDIM automatically incorporates terms of much higher order, leaving no room for order tuning. D ON THE NOISE SCHEDULE D.1 BACKGROUND: PICARD-LINDELÖF THEOREM Let us consider a 1-dim deterministic system ẋ(t) = a(x(t), t). It is well known that this ODE has a unique solution if a(x, t) is Lipschitz continuous w.r.t. x and continuous w.r.t. t (Picard-Lindelöf Theorem). Otherwise, ODEs often behave less favorably. (Similar Lipschitz conditions are also required for SDEs.) Example 1. For example, the ODE ẋ = x2, x(0) = 1 has the solution x = 1/(1− t) when t < 1, and it blows up at t = 1. It is usually impossible to consider what happens after t > 1 in ordinary contexts. Example 2. Another well-known example is ẋ = √ x, x(0) = 0. It has a solution x = t2/4, but x ≡ 0 is also a solution. It actually has infinitely many solutions x = 0 (if t ≤ t0), x = (t− t0)2/4 (if t > t0), where t0 ≥ 0 is an arbitrary constant. Example 3. Let us consider the following ODE ẋ = − t− 1 1− e−(t−1)2 x, x(0) = 1, (72) which is a simplified model of the Linear schedule eq. (76). The exact solution is as follows, x = √ e− 1√ e(t−1)2 − 1 , (73) which diverges at t = 1. In this case, a(x, t) = −x·(t−1)/(1−e−(t−1)2) is not Lipschitz continuous, as the Taylor expansion of the denominator is 1− e−(t−1)2 = (t− 1)2 +O((t− 1)4), and a(x, t) is approximately −x/(t− 1) near t = 1. In these cases, the coefficient a(·, ·) is not Lipschitz continuous. Even these seemingly simplest ODEs behave very complexly unless the coefficients are carefully designed. In PF-ODE, the Lipschitz condition is written as follows, Lip(f̄[) = ∣∣∣∣∂xt (βt2 xt − βt2√νtSθ(xt, t) )∣∣∣∣ <∞. (74) Using the ideal derivative of Sθ(xt, t), this condition translates as Lip(f̄[) = |βt(1− 1/νt)| = ∣∣∣∣ ν̇tνt ∣∣∣∣ <∞. (75) D.2 SPECIFIC SCHEDULES Including this point, the necessary conditions for a variance schedule νt will be summarized as follows. 1. ν0 ≈ 0 so that the initial density p(x0, 0) is close to the true data density. 2. νT ≈ 1 so that the terminal density p(xT , T ) is close to the Gaussian. 3. Sufficiently smooth so that βt = − ddt log(1− νt) is well defined. • In addition, βt should also be smooth so that the Taylor schemes can be used. 4. Monotonic (s < t =⇒ νs ≤ νt) to make βt non-negative. 5. Preferably, make the drift coefficient f̄[ Lipschitz continuous so that PF-ODE has a unique solution, i.e., Lip(f̄[) ≈ |ν̇t/νt| <∞. The following two scheduling functions which are common in diffusion generative models satisfy the conditions 1, 2, 4 above (the linear schedule also satisfies the 3rd condition), Linear: νt = 1− e−β0t−β1t 2 , βt = β0 + 2β1t, (76) Cosine: νt = 1− C cos2 ( π 2 t/T + ς 1 + ς ) , βt = { π T tan ( π 2 t/T+ς 1+ς ) if 0 ≤ t ≤ T ′ Θ if T ′ < t ≤ T . (77) where ς > 0 is a small constant, C = 1/ cos2(πς/2(1 + ς)) is a constant to make ν0 = 0, and the threshold constant is Θ = βT ′ . However, these common schedules do not satisfy the 5th condition that the drift coefficient f̄[ is Lipschitz continuous. Indeed, it is easily verified that limt→0 ν̇t/νt =∞ in both cases, since ν0 = 0 but ν̇0 > 0. 
Nevertheless, t = 0 is the only singular point, and since no function value or derivative at t = 0 is evaluated by numerical methods (except by the Runge-Kutta method), this point can practically be ignored. Note that, we can also consider some other schedule functions such as the sigmoid function and the hyperbolic tangent, which satisfy the condition 2, 3, 4, 5 but do not satisfy the 1st condition rigorously (but if ν0 is less than or equal to the level of the quantization error in the data, we may consider the first condition to be essentially satisfied), Sigmoid: νt = 1 1 + e−A(t−k) , βt = Aνt, (78) Modified Tanh: νt = tanh2(λ(t)/2), βt = λ̇(t) tanh(λ(t)/2), (79) where the parameter function λ(t) has some options, such as λ(t) = log(1 + Aekt), and A > 0, k > 0 are hyperparameters. D.3 HOW TO AVOID THE TIME ORIGIN SINGULARITY IN THE RUNGE-KUTTA METHODS When using the Heun and Classical RK4 methods, the function f̄[(xt, t) is evaluated at time t = 0. However, since the function f̄[(xt, t) contains the term proportional to 1/ √ νt, it will diverge at time t = 0 if the linear eq. (76) or cosine schedule eq. (77) is used. The simplest way to avoid this is to replace the function f̄[(x0, 0) with f̄[(xε, ε) where ε > 0 is a sufficiently small constant, only when the need to evaluate the function at time t = 0 arises. The same thing could happen at t = T if the cosine schedule and DDIM were used simultaneously, but this can be handled in the same way. If we use the sigmoid eq. (78) or modified tanh schedules, eq. (79) these problems do not occur unless the hyperparameters A and k are chosen to be very extreme values. E SUPPLEMENT ON FUNDAMENTALS For convenience, let us summarize some basics behind the ideas in this paper. The contents of this section are not particularly novel, but the authors expect that this section will give a better understanding of the ideas of this paper and the continuous-time approach to diffusion generative models. E.1 TAYLOR EXPANSION AND ITÔ-TAYLOR EXPANSION E.1.1 TAYLOR EXPANSION OF DETERMINISTIC SYSTEMS 1-dimensional case Let us first consider a 1-dim deterministic system ẋ(t) = a(x(t), t), where a(·, ·) is sufficiently smooth, and let us derive the Taylor series expression of the solution of this ODE. Let ϕ(x(t), t) be a differentiable function. Its total derivative is written as dϕ = ∂ϕ ∂t dt+ ∂ϕ ∂x dx = ∂ϕ ∂t dt+ ∂ϕ ∂x dx dt dt = ( ∂ϕ ∂t + ∂ϕ ∂x a(x, t) ) dt = ( ∂ ∂t + a(x, t) ∂ ∂x ) ︸ ︷︷ ︸ =:L[ ϕdt. (80) By integrating both sides from 0 to t, we have ϕ(x(t), t) = ϕ(x(0), 0) + ∫ t 0 (L[ϕ)(x(s), s)ds. (81) We use this formula recursively to obtain the Taylor series of the above system. Let ϕ(x(t), t) = x(t), then we have x(t) = x(0) + ∫ t 0 (L[x)(x(s), s)ds = x(0) + ∫ t 0 a(x(s), s)ds. (82) Let ϕ(x(t), t) = a(x(t), t), then we have a(x(t), t) = a(x(0), 0) + ∫ t 0 (L[a)(x(s), s)ds. (83) Using the above two
1. What is the focus of the paper regarding Probability Flow ODE for diffusion models?
2. What are the strengths and weaknesses of the proposed novel solver?
3. Do you have any concerns about the "ideal derivatives" replacement?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the paper's assumptions, empirical results, or theory?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a novel solver of the Probability Flow ODE for diffusion models introduced in [1]. By taking a Taylor expansion of the ODE and including higher-order terms, the solver can take larger steps. This speeds up sampling, which is a well-known computational bottleneck in diffusion models. However, the higher order terms in the Taylor expansion are themselves expensive to compute. Thus the authors propose substituting these terms with "ideal derivatives" which involve [1] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole. Score-based generative modeling through stochastic differential equations. 2021 Strengths And Weaknesses Strengths: The work tackles a relevant problem in current score-based diffusion models: slow data generation. Weaknesses: The empirical results in the main Figure 2 are not very compelling, compared to, e.g., [2, 3, 4]. There is limited discussion on why the "ideal derivatives" are a suitable replacement for the true derivatives. Arguments (in the appendix) are mainly intuitive, with no proofs. The extra theory is dense, and it is unclear whether it is worth the empirical improvements. [2] Salimans, T. and Ho, J., 2022. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512. [3] Song, J., Meng, C. and Ermon, S., 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502. [4] Kong, Z. and Ping, W., 2021. On fast sampling of diffusion probabilistic models. arXiv preprint arXiv:2106.00132. Clarity, Quality, Novelty And Reproducibility Writing is unclear: For example, "[DDIM] is not necessarily derived directly from PF-ODEs, and its relationship to PF-ODE was revealed through a little argumentation" "Nevertheless, in diffusion models, the derivatives are expected to have good structure, and are effectively evaluated." "The Jacobian matrix is diagonal assuming that each dimension is independent of each other." <--- Is this really a valid assumption? "To date, this approximation has often been understood as a "tractable surrogate". <--- What does this mean? Citations? "50,000 images were generated for each condition to compute the FID scores." Why 50,000? The standard is 10,000 [5]. Correctness and Novelty: The proposed solver hinges on the use of "ideal derivatives", which replace the true higher order terms in the Taylor expansion of the ODE. While the idea of applying an idealized derivative substitution to a Taylor expansion of the diffusion ODE is novel, its correctness is unclear. Moreover, the general derivation of the solver requires assuming that the diffusion in each data dimension is independent. This seems like a very strong (and unrealistic) assumption to me. [5] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B. and Hochreiter, S., 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30.
ICLR
Title Quasi-Taylor Samplers for Diffusion Generative Models based on Ideal Derivatives Abstract Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless some sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on the numerical schemes derived by the familiar Taylor expansion, which directly solves the ODE/SDE of interest. In general, it is not easy to compute the derivatives that are required in higher-order Taylor schemes, but in the case of diffusion models, this difficulty is alleviated by the trick that the authors call “ideal derivative substitution,” in which the higher-order derivatives are replaced by tractable ones. To derive ideal derivatives, the authors argue the “single point approximation,” in which the true score function is approximated by a conditional one, holds in many cases, and considered the derivatives of this approximation. Applying thus obtained new quasi-Taylor samplers to image generation tasks, the authors experimentally confirmed that the proposed samplers could synthesize plausible images in small number of NFEs, and that the performance was better or at the same level as DDIM and Runge-Kutta methods. The paper also argues the relevance of the proposed samplers to the existing ones mentioned above. 1 INTRODUCTION Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iteratively refinement using the diffusion process has been rapidly gaining attention recently as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). The diffusion-based generative models have recently been showing impressive results in many fields including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while the diffusion models have good synthesis quality, it has been said that they have a fatal drawback that they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces the synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, it has been a common research question how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular. 
From this motivation, there have already been some studies aiming at reducing the NFEs (See § 2). Among these, Probability Flow ODE (PF-ODE) (Song et al., 2020b) enable efficient and deterministic sampling, and is gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of diffusion process. However, the ODE is eventually solved by using a black-box Runge-Kutta solver in the original paper, which requires several NFEs per step and is clearly costly. Another PF-ODE solver includes DDIM (Song et al., 2020a), and is also commonly used. It is certainly efficient and can generate plausible images. However, it was not originally formulated as a PF-ODE solver, and the relationship between DDIM and PF-ODE is not straightforward. From these motivations, we provide another sampler to solve the same ODE, which performs better than or on par with DDIM. The derivation outline is simple and intuitive: (1) consider the Taylor expansion of the given system, and (2) replace the derivatives in the Taylor series with appropriate functions; that’s all. The contribution of this paper would be as follows: (1) We propose novel samplers for diffusion models based on Taylor expansion of PF-ODE. They outperformed, or were on par with RungeKutta methods. (2) To derive our algorithms, we show that the derivatives of score function can be approximated by simple functions. We call this technique the ideal derivative substitution. (3) It has been known that the 1st order term of DDIM is same as the Euler method for PF-ODE. This paper gives further explanation for higher order terms of DDIM: we show that the proposed Quasi-Taylor method and DDIM are identical at least up to 3rd order terms. (4) The same idea can be naturally extended to derive a stochastic solver for a reverse-time SDE, which we call R-SDE in this paper. 2 BACKGROUND AND RELATED WORK Diffusion Process to draw a new data from a target density: Let us first briefly summarize the framework of the diffusion-based generative models. Following Song et al. (2020b), we describe the mechanisms using the language of continuous-time diffusion process for later convenience. Let us consider “particles” {xt} moving in a d-dim space obeying the following Itô diffusion, SDE: dxt = f(xt, t)dt+ g(xt, t)dBt, (1) where Bt is the d-dim Brownian motion whose temporal increments obeys the standard Gaussian. The drift f(·, ·) is d-dim vector, and the diffusion coefficient g(·, ·) is scalar. The SDE describes the microscopic dynamics of each particle. On the other hand, the “population” of the particles obeying the above SDE, i.e. density function p(xt, t | xs, s), (t > s), follows the following PDEs, which are known as Kolmogorov’s forward and backward equations (KFE and KBE); the former is also known as the Fokker-Planck equation (FPE), see § E.2, FPE: ∂tp(xt, t | xs, s) = −∇xt · f(xt, t)p(xt, t | xs, s) + ∆xt g(xt, t) 2 2 p(xt, t | xs, s), (2) KBE: −∂sp(xt, t | xs, s) = f(xs, s) · ∇xsp(xt, t | xs, s) + g(xs, s) 2 2 ∆xsp(xt, t | xs, s), (3) where ∆x := ∇x ·∇x is Laplacian. (FPE also holds for p(xt, t); consider the expectation Ep(xs,s)[·].) These PDEs enables us to understand the macroscopic behavior of the particle ensemble. For example, if f(x, t) = −∇U(x), g(x, t) = √ 2D, where U(x) a certain potential and D a constant, then we may verify that the stationary solution of FPE is p(x) ∝ e−U(x)/D. It means that we may draw a sample x that follows the stationary density by evolving the SDE over time. 
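As a concrete illustration of drawing samples by simulating such an SDE, the following minimal sketch uses a simple Euler-type discretization for a hand-picked one-dimensional double-well potential U(x) = (x^2 - 1)^2; the potential, the constant D, the step size and the particle count are illustrative assumptions only, not taken from the paper.

import numpy as np

# Illustrative potential U(x) = (x^2 - 1)^2, so that f(x) = -U'(x) and g = sqrt(2 D).
def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

D = 0.5            # diffusion constant (illustrative)
dt = 1e-3          # discretization step
n_steps = 20_000
rng = np.random.default_rng(0)

x = rng.standard_normal(10_000)          # a population of particles
for _ in range(n_steps):
    # dx = -U'(x) dt + sqrt(2 D) dB_t, discretized one step at a time
    x += -grad_U(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)

# A histogram of x now approximates the stationary density p(x) ∝ exp(-U(x)/D).

Comparing the histogram of the final particles against exp(-U(x)/D), suitably normalized, makes the stationary-density claim above easy to check numerically.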
This technique is often referred to as the Langevin Monte Carlo method (Rossky et al., 1978; Roberts & Tweedie, 1996). Some of the diffusion generative models are based on this framework, e.g. (Song & Ermon, 2019; 2020), in which the potential gradient∇U(x) is approximated by a neural network. Another systematic approach is considering the reverse-time dynamics (Song et al., 2020b). An approach is based on KBE eq. (3). Roughly speaking, FPE gives information about the future from the initial density, while KBE gives information about what the past states were likely to be from the terminal density. Here, instead of using KBE directly, it is useful to consider a variant of it which is transformed into the form of FPE, because it has an associated SDE that enables the particle-wise backward sampling (Stratonovich, 1965; Anderson, 1982); see also § E.3.2, R-FPE: −∂sp(xs, s | xt, t) = ∇xs · f̄(xs, s)p(xs, s | xt, t) + ∆xs ḡ(xs, s) 2 2 p(xs, s | xt, t) (4) R-SDE: dxs = −f̄(xs, s)(−ds) + ḡ(xs, s)dB̄s. (5) Hereafter, let g(xt, t) = g(t) for simplicity. Then the specific forms of drift and diffusion coefficients are written as follows, R-SDE coeffs: f̄(xt, t) = f̄](xt, t) := f(xt, t)− g(t)2∇xt log p(xt, t), ḡ(t) = g(t). (6) Starting from a certain random variable xT , then by evolving the R-SDE reverse in time, we may obtain a x̂0 which follows p(x0, 0 | xT , T ) (i.e. the solution of R-FPE eq. (4)). Therefore, if the initial density p(x0, 0) of the forward dynamics eq. (2) is the true density, then we may utilize this mechanism as a generative model to draw a new sample x̂0 from it. Another approach is based on FPE eq. (2). By formally eliminating the diffusion term of the FPE for the forward process, we can derive another backward FPE (see also § E.3.1). Being diffusionfree, the backward FPE yields a deterministic ODE, which is called the Probability Flow ODE (PF-ODE) (Song et al., 2020b), and is an example of neural ODEs (Chen et al., 2018). The population density obtained by evolving this system is exactly the same as the above R-SDE. PF-ODE coeffs: f̄(xt, t) = f̄[(xt, t) := f(xt, t)− 1 2 g(t)2∇xt log p(xt, t). ḡ(t) = 0. (7) Some extensions of this framework include as follows. Dockhorn et al. (2021) introduced the velocity variable considering the Hamiltonian dynamics. Another extension is the introduction of a conditioning parameter, and guidance techniques using it (Dhariwal & Nichol, 2021; Ho & Salimans, 2021; Choi et al., 2021) to promote the dynamics to go to a specific class of images, which has achieved remarkable results in text-to-image tasks (Nichol et al., 2021; Ramesh et al., 2022). Variance-Preserving Model (VP-SDE Model): The solution of unconditioned FPE is written as the convolution with the initial density p(x0, 0) and the fundamental solution, or the heat kernel, p(xt, t | x0, 0), which is the solution of the conditional FPE under the assumption that the initial density was delta function, p(x0, 0) = δ(x0−x∗0). Although it is still intractable to solve this problem in general, a well-known exception is the (time-dependent) Ornstein-Uhlenbeck (OU) process where f(xt, t) = − 12βtxt and g(xt, t) = √ βt. βt = β(t) is a non-negative continuous function. The specific form of diffusion coefficient βt has some options: a simplest one would be the linear function, and another would be the cosine schedule proposed in (Nichol & Dhariwal, 2021); see also § D. 
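For concreteness, the following small sketch evaluates the linear choice of βt with the constants adopted later in the experiments (§ 4), βt = 0.1 + 19.9t, for which the variance integrates to νt = 1 − exp(−0.1t − 9.95t²); the grid of time points is an arbitrary illustration.

import numpy as np

beta0, beta1 = 0.1, 9.95       # linear-schedule constants used in Sec. 4

def beta(t):
    # beta_t = beta0 + 2 * beta1 * t
    return beta0 + 2.0 * beta1 * t

def nu(t):
    # nu_t = 1 - exp(-integral of beta from 0 to t) = 1 - exp(-beta0 t - beta1 t^2)
    return 1.0 - np.exp(-beta0 * t - beta1 * t**2)

t = np.linspace(0.0, 1.0, 6)
print(np.round(nu(t), 4))      # runs from ~0 at t = 0 to ~1 at t = T = 1

The signal level sqrt(1 − νt) correspondingly decays from 1 at t = 0 to nearly 0 at t = T.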
In any cases, if it is the OU process, the heat kernel is simply written as follows, p(xt, t | x0, 0) = N (xt | √ 1− σ2t x0, σ2t I), where σ2t = 1− exp ( − ∫ t 0 βt′dt ′ ) . (8) Hereafter, we denote the noise variance by νt := σ2t . (In some literature, the signal level αt :=√ 1− σ2t is used as a basic parameter instead of the variance.) This model is referred to as the variance-preserving (VP) model by Song et al. (2020b). It has good properties such as the scale of data ‖xt‖2 is almost homogeneous, which is advantageous in neural models. However, the variance exploding (VE) model (Song et al., 2020b) in which the norm increases is also practicable, and the theory can be developed in a similar manner. Training Objective: In diffusion-based generative models, one estimates the score function ∇xt log p(xt, t) = ∇xt logEp(x0,0)[p(xt, t | x0, 0)] by a neural network Sθ(xt, t). This sort of learning has been referred to as the score matching (Hyvärinen & Dayan, 2005; Vincent, 2011). However, the exact evaluation of this training target is clearly intractable because of the expectation Ep(x0,0)[·], so it has been common to consider a Variational Bayesian surrogate loss; Ho & Salimans (2021) showed that the following loss function approximates the negative ELBO, L := E[‖−√νt∇xt log p(xt, t | x0, 0)− Sθ(xt, t)‖22] = E[‖xt− √ 1−νtx0√ νt − Sθ(xt, t)‖22] (9) = E[‖w − Sθ( √ 1− νtx0 + √ νtw, t)‖22], (10) where the expectation in eq. (10) is taken w.r.t. x0 ∼ D, w ∼ N (0, I), and t ∼ Uniform([0, T ]). Some variants of the score matching objectives are also studied. For example, Chen et al. (2020) reported that the L1 loss gave better results than the L2 loss in speech synthesis. Also, Kingma et al. (2021) argued that the weighted loss with SNR-based weights improves the performance. It should be noted that the above loss function will actually be very close to the ideal score matching loss function in practice, where the probability is not conditioned on x0, i.e., Lideal = E[‖− √ νt∇xt log p(xt, t)− Sθ(xt, t)‖22]. (11) This is because there almost always exists a point x0 on the data manifold such that∇xt log p(xt, t) ≈ ∇xt log p(xt, t | x0, 0) holds with very high accuracy in very high-dim cases, because of the wellknown “log-sum-exp ≈ max” law. For more details, see § 3.3 and § A. Sampling Schemes for R-SDE and PF-ODE: Thus obtained Sθ(xt, t) is expected to finely approximate −√νt∇xt log p(xt, t), and we may use it in eq. (5). One of the simplest numerical schemes for solving SDEs is the Euler-Maruyama method (Maruyama, 1955, Theorem. 1) as follows, and many diffusion generative models are actually using it. Euler-Maruyama: xt−h ← xt − hf̄](xt, t) + √ hg(t)w, where w ∼ N (0, I) (12) where h > 0 is the step size. The error of the Euler-Maruyama method is the order of O( √ h) in general, though it is actually O(h) in our case; this is because ∇xtg(t) = 0. As a better solver for the R-SDE, the Predictor-Corrector (PC)-based sampler was proposed in (Song et al., 2020b). The PC sampler outperformed the Predictor-only strategy, but it requires many NFEs in the correction process, so we will exclude it in our discussion. Another R-SDE solver is the one proposed by Jolicoeur-Martineau et al. (2021), whose NFE per refinement step is 2. On the other hand, there are also deterministic samplers for PF-ODE eqs. 
(5), (7) as follows, Euler: xt−h ← xt − hf̄[(xt, t) (13) Runge-Kutta: xt−h ← xt − h ∑m i=1 biki, where ki = f̄[(xt − h ∑i−1 j=1 aijkj , t− hci) (14) where {aij}, {bi}, {ci} are coefficients of the Runge-Kutta (RK) method (see § E.5). The error of the Euler method is O(h), and that of the RK method is O(hp), p ≤ m in general (Press et al., 2007, § 16). Another deterministic sampler is DDIM (Song et al., 2020a, Eq. (13)), and is also understood as a PF-ODE solver (Salimans & Ho, 2022). Its NFE per step is only 1, and is capable of efficiently generate samples. DDIM: xt−h ← αt−hαt xt + ( σt−h − αt−hαt σt ) Sθ(xt, t). (15) In addition, as a concurrent work as ours, Lu et al. (2022) proposed the DPM-solver, which is based on the Taylor expansion of PF-ODE. However, as the gradient is evaluated using several different points, the NFE per step is greater than 1 in general. Liu et al. (2022) proposed a sampler based on the linear multi-step method, in which the NFE/step is reduced to 1 except initial 3 steps. Another PF-ODE solver is the DEIS (Zhang & Chen, 2022) which is based on the exponential integrator with some non-trivial approximations such as the polynomial interpolation of score function. Other techniques that aimed to make sampling faster include as follows. Song & Ermon (2020) proposed a variety of techniques to accelerate the sampling. Watson et al. (2021) proposed a DP-based optimization method to tune noise schedules for faster sampling. Luhman & Luhman (2021) and Salimans & Ho (2022) proposed distilling the pretrained teacher model to a student model that can predict teacher’s several steps in a single step, which is efficient during the sampling but extra training for distillation is required. Bao et al. (2022a;b) derived some analytic expressions of reverse dynamics to enable faster sampling. 3 PROPOSED METHOD: QUASI-TAYLOR SAMPLERS 3.1 MOTIVATION: HIGHER-ORDER STRAIGHTFORWARD SOLVERS FOR R-SDE AND PF-ODE As mentioned above, DDIM already exists as an efficient solver for PF-ODE, but it can only be considered a PF-ODE solver up to first-order terms (Song et al., 2020a; Salimans & Ho, 2022), and it would not be clear enough whether it can be considered a higher-order solver for PF-ODE. Some other techniques (Lu et al., 2022; Liu et al., 2022; Zhang & Chen, 2022) were designed as higher-order PF-ODE solvers, though their derivations are rather sophisticated and less simple. Since PF-ODE and R-SDE provide the basis for the diffusion generative models, it would be beneficial to develop samplers that directly solve them through intuitive and straightforward arguments. From these motivations, we propose a simple but efficient sampler based on the Taylor expansion, a very basic technique that is familiar to many researchers and practitioners. In general, Taylor methods are not very popular as numerical schemes because they require higher-order derivatives, which are not always tractable. However, in diffusion models, the derivatives are easily and effectively evaluated, albeit approximately. The validity of this approximation requires some consideration (see § A, § B), but once accepted, an efficient sampler can be derived simply by substituting this approximation formula into the Taylor series. This section describes the details of the idea, and derives solvers for both PF-ODE and R-SDE. Entire sampling procedures are summarized in § F. 
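To make the two one-NFE-per-step baselines concrete, the sketch below spells out a single Euler step, eq. (13), and a single DDIM step, eq. (15), for the VP model, with the learned score plugged into the PF-ODE drift of eq. (7); score_model, beta and nu are placeholders for the trained network Sθ(xt, t) and the chosen schedule, so this is a structural sketch rather than the authors' implementation.

import numpy as np

def f_flat(x, t, score_model, beta, nu):
    # PF-ODE drift of the VP model: eq. (7) with f = -beta_t x / 2, g^2 = beta_t,
    # and the score approximated by -S_theta(x, t) / sqrt(nu_t).
    return -0.5 * beta(t) * x + 0.5 * beta(t) / np.sqrt(nu(t)) * score_model(x, t)

def euler_step(x, t, h, score_model, beta, nu):
    # eq. (13): x_{t-h} <- x_t - h * f_flat(x_t, t)
    return x - h * f_flat(x, t, score_model, beta, nu)

def ddim_step(x, t, h, score_model, nu):
    # eq. (15) with alpha = sqrt(1 - nu) and sigma = sqrt(nu)
    alpha_t, alpha_s = np.sqrt(1.0 - nu(t)), np.sqrt(1.0 - nu(t - h))
    sigma_t, sigma_s = np.sqrt(nu(t)), np.sqrt(nu(t - h))
    return (alpha_s / alpha_t) * x + (sigma_s - (alpha_s / alpha_t) * sigma_t) * score_model(x, t)

Both steps call score_model exactly once, which is why their per-step costs are essentially identical.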
3.2 TAYLOR SCHEME FOR ODE AND ITÔ-TAYLOR SCHEME FOR SDE Taylor Scheme for Deterministic Systems For simplicity, we consider the 1-dim case here, but we can easily generalized it to multidimensional cases. (See § E.1.1.) Given a ODE ẋt = a(xt, t), where the function a is sufficiently smooth, then we can consider the Taylor expansion of it, using a differential operator L[ := ( ∂t + a(t, xt)∂xt ) . We can write the Taylor expansion of the path xt as follows. Ignoring o(hp) terms of the series, we obtain a numerical scheme of order p. xt+h = xt + ha(xt, t) + h2 2! L[a(xt, t) + h3 3! L2[a(xt, t) + · · · . (16) Itô-Taylor Scheme for Stochastic Systems In stochastic systems, the Taylor expansion requires modifications because of the relation E[dB2t ] = dt. If xt obeys a stochastic system dxt = a(xt, t)dt+ b(xt, t)dBt, then the path is written in a stochastic version of Taylor-like series, which is often called the Itô-Taylor expansion, a.k.a. Wagner-Platen expansion (Platen & Wagner, 1982);(Kloeden et al., 1994, § 2.3.B);(Särkkä & Solin, 2019, § 8.2). The Itô-Taylor expansion is based on the following differential operators L], G], which are based on Itô’s formula (Itô, 1944). L] := ∂t + a(x, t)∂x + 1 2 b(x, t)2∂2x, G] := b(x, t)∂x (17) In (Kloeden & Platen, 1992), a number of higher order numerical schemes for SDEs based on the Itô-Taylor expansion are presented. One of the simplest of them is as follows. See also § E.1.2. Theorem 1 (Kloeden & Platen (1992, § 14.2): An Itô-Taylor scheme of weak order β = 2). Let xt obeys the above SDE, and let the differential operators L], G] be given by eq. (17). Then, the following numerical scheme weakly converges with the order of β = 2 (see § E.4). Furthermore, in a special case where G2]b ≡ 0, the strong γ = 1.5 convergence is also guaranteed (Kloeden & Platen, 1992, § 10.4). xt+h ← xt + ha+ w̃tb+ w̃2t − h 2 G]b+ h2 2 L]a+ (w̃th− z̃t)L]b+ z̃tG]a (18) where w̃t = √ hwt, z̃t = h √ hzt are correlated Gaussian random variables, and wt, zt are given by wt = u1 and zt = 12u1 + 1 2 √ 3 u2, where u1, u2 ∼ N (0, 1) (i.i.d.). The notations a, L]a, etc. are the abbreviations for a(xt, t), (L]a)(xt, t), etc. 3.3 SINGLE POINT APPROXIMATION OF THE SCORE FUNCTION Before proceeding, let us introduce the single point approximation of score function that ∇xt log p(xt, t) almost certainly has a some point x0 on the data manifold such that the following approximation holds, ∇xt log p(xt, t) = ∇xt log ∫ p(xt, t | x0, 0)p(x0, 0)dx0 ≈ ∇xt log p(xt, t | x0, 0). (19) To date, this approximation has often been understood as a tractable variational surrogate. However, the error between the integral and the single point approximation is actually very small in practical scenarios. More specifically, the following facts can be shown under some assumptions. 1. The relative L2 distance between ∇xt log p(xt, t) and ∇xt log p(xt, t | x0, 0) is bounded above by √ (1− νt)/νt for any point x0 on the “data manifold” in practical scenarios. 2. When the noise level is low νt ≈ 0, and the data space is sufficiently high-dimensional, the distant points far from xt do not contribute to the integral. If the data manifold is locally a k-dim subspace of the entire d-dim data space, where 1 k d, then the relative L2 distance is bounded above by around 2 √ k/d. Of course, the single point approximation is not always valid. In fact, the approximation tends to break down when the noise level νt is around 0.9 (SNR = (1− νt)/νt is around 0.1). 
In this region, the single point approximation can deviates from the true gradient by about 20% in some cases. Conversely, however, it would be also said that the error is as small as this level even in the worst empirical cases. For more details on this approximation, see § A. 3.4 IDEAL DERIVATIVE SUBSTITUTION In order to adopt the above Taylor schemes to our problem setting where the base SDE is eq. (5), and f̄], f̄[ are given by eqs. (6), (7), we need to consider the following differential operators. Note that the time evolves backward in time in our case, the temporal derivative should be −∂t, L[ = −∂t − ( f̄[(xt, t) · ∇xt ) , L] = −∂t − ( f̄](xt, t) · ∇xt ) + βt 2 ∆xt , G] = √ βt (1 · ∇xt) , where f̄[(xt, t) = − βt 2 xt + βt 2 √ νt Sθ(xt, t), f̄](xt, t) = − βt 2 xt + βt√ νt Sθ(xt, t). (20) It is not easy in general to evaluate expressions involving such many derivatives. Indeed, for example, L[(−f̄[) has the derivatives of the learned score function, viz. ∂tSθ(xt, t) and (• · ∇xt)Sθ(xt, t), which are costly to evaluate exactly, whether in approaches based on finite differences (as in (Lu et al., 2022)), back-propagation, or the JAX paradigm (Bradbury et al., 2018), because they eventually require extra evaluation of a deeply nested function other than Sθ(xt, t), and extra memory consumption. Fortunately, however, by using the trick which the authors call the “ideal derivative substitution", we may write all of the derivatives as a simple combination of known values, only consisting of xt,Sθ(xt, t), νt, βt and derivatives of βt, and no extra computation is needed. Since the score function has a single point approximation eq. (19) we may assume that the derivatives should ideally hold following equalities. For derivation, see § B.1. Conjecture 1 (Ideal Derivatives). Under assumptions in § A — i.e. the data space Rd is sufficiently high dimensional d 1, the data manifoldM⊂ Rd is also sufficiently high dimensional but much smaller than the entire space (1 dimM d),M is bounded,M is sufficiently smooth locally, and the variance parameter νt is close to 0 or 1; — then it is likely that the following approximations hold, where a ∈ Rd is an arbitrary vector. We call them the “ideal derivatives”. (a · ∇xt)Sθ(xt, t) = 1√ νt a, −∂tSθ(xt, t) = − βt 2 √ νt ( xt − Sθ(xt, t)√ νt ) . (21) To confirm the accuracy of this approximation, we compared empirical and ideal derivatives using MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky, 2009). As a result, it was confirmed that the approximation of spatial derivative, i.e. (a · ∇), is usually very accurate; the cosine similarity between the empirical and ideal derivatives is nearly always > 0.99 (Figure 10). On the other hand, for the time derivative ∂t, it was confirmed that it is quite accurate when the time parameter t (and the variance νt) are small, but the error increases when the time parameter t (and the variance νt) become larger (Figure 9). See § B.2 for more details. 3.5 QUASI-TAYLOR AND QUASI-ITÔ-TAYLOR SCHEMES WITH IDEAL DERIVATIVES As we can see in § B.2, the ideal derivative approximation is sometimes very accurate while sometimes not. In any case, however, the error in the ideal derivative only affects the second or higher order terms of Taylor series, and it will not be the dominant error in the whole. 
As there is an overall correlation between the true and ideal derivatives, the advantages will outweigh the disadvantages on average, and we can regularly use this approximation on a speculative basis, even though there exist some cases where the approximation is not accurate. If we accept the ideal derivative approximation, we can formally compute the symbolic expressions for the derivatives L[(−f̄[), L](−f̄]), L](g), G](−f̄]) and G](g) that appear in the Taylor and ItôTaylor series by routine calculations, which can be easily automated by computer algebra systems such as SymPy (Meurer et al., 2017) as shown in § B.3. By substituting thus obtained symbolic expressions into the above Taylor series, we can derive Taylor schemes for both PF-ODE and R-SDE as follows. Algorithm 1 (Quasi-Taylor Sampler with Ideal Derivatives for PF-ODE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ [ t,hxt + µ [ t,hSθ(xt, t)/ √ νt,where (22) ρ[t,h = 1 + βth 2 + h2 4 ( β2t 2 − β̇t ) + h 3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + · · · , (23) µ[t,h = −βth2 + h 2 4 ( β̇t − β 2 t 2νt ) + h 3 4 ( β3t (−ν 2 t+3νt−3) 12ν2t + βtβ̇t2νt − β̈t 3 ) + · · · . (24) Using terms up to O(h2), the sampler will have 2nd-order convergence (henceforth referred to as Taylor 2nd), and using terms up to O(h3), the sampler will 3rd-order convergent (similarly, Taylor 3rd). If we use up to the O(h) terms, the algorithm is same as the Euler method. Algorithm 2 (Quasi-Itô-Taylor Sampler with Ideal Derivatives for R-SDE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ ] t,hxt + µ ] t,hSθ(xt, t)/ √ νt + n ] t,h,where (25) ρ]t,h = 1 + βt 2 h+ h2 4 ( β2t 2 − β̇t ) , µ]t,h = −βth+ β̇th 2 2 , (26) n]t,h = √ βt √ hwt + h 3/2 ( − β̇t 2 √ βt (wt − zt) + β 3/2 t (νt−2) 2νt zτ ) . (27) The Gaussian variables wt and zt have dimension-wise correlations, and each dimension is sampled similarly to Theorem 1. Computation Cost: At first glance, these algorithms may appear to be very complex. However, the computational complexity hardly increases compared to the Euler or Euler-Maruyama methods, because almost all of the computational cost is accounted for by the neural network Sθ(xt, t), and the costs for scalar values ρ•t,h, µ • t,h and noise generation n ] t,h are almost negligible. It should also be noted that these scalar values can be pre-computed and stored in the memory before synthesis. Thus the computational complexity of these methods are practically equal to Euler, Euler-Maruyama, and DDIM methods. Error from the Exact Solution of PF-ODE: The numerical error of the Quasi-Taylor method from the exact solution increases depending on the following factors: (1) The truncation error of the Taylor series in each step, i.e. O(hp+1), (2) The number of the steps i.e. O(1/h), (3) The training and generalization error of the score function, i.e. ≈ L, and (4) The average error between the true and ideal derivatives of the score function =: ‖δ‖. If the factors 3 and 4 could be zero, then the numerical error is the order of O(hp). Otherwise, the expected numerical error is roughly evaluated as follows, error = O ( h−1(hL+ h2(L+ ‖δ‖) + h3(L+ ‖δ‖) + · · ·+ hp+1) ) = O ( L+ h(L+ ‖δ‖) + h2(L+ ‖δ‖) + · · ·+ hp ) . (28) That is, the error of Euler method is O(L+ h), the Heun method (2nd order Runge-Kutta) will be O(L+hL+h2), and the Taylor-2nd method is O(L+h(L+‖δ‖)+h2). 
As long as L, ‖δ‖ > 0, the predominant O(h) term will not disappear. Therefore, the overall order of the error will not decrease even if we increase the order of Taylor series greater than p ≥ 3. Nevertheless, beyond such an order evaluation, specific coefficients in higher order terms can still affect the performance, which should be validated empirically. 4 IMAGE SYNTHESIS EXPERIMENT Experimental Configuration: In this section, we conduct experiments to verify the effectiveness of the methods developed in this paper. Specifically, we compare the performance of the Euler scheme eq. (13), Taylor 2nd & Taylor 3rd (Alg. 1), DDIM (Song et al., 2020a), and the Runge Kutta methods (Heun and RK4 § E.5; these are less efficient than others because of NFEs per step) for PF-ODE, as well as the Euler-Maruyama scheme eq. (12) and Itô-Taylor (Alg. 2) for R-SDE. The datasets we used were CIFAR-10 (32× 32) (Krizhevsky, 2009) and CelebA (64× 64) (Liu et al., 2015). The network structure was not novel but was based on an existing open source implementation; we used the “NCSN++” implemented in the official PyTorch code by Song et al. (2020b). The network consisted of 4 levels of resolution, with the feature dimension of each level being 128 → 128 → 256→ 256→ 256. Each level consisted of BigGAN-type ResBlocks, and the number of ResBlocks in each level was 8 (CIFAR-10) and 4 (CelebA). The loss function we used was the unweighted L2 loss similarly to (Ho et al., 2020). The optimizer was Adam (Kingma & Ba, 2014). The machine used for training was an in-house Linux server dedicated to medium-scale machine learning training with four GPUs (NVIDIA Tesla V100). The batch size was 256. The number of training steps was 0.1 M steps, and the training took about a day for each dataset. The noising schedule was also the same as the existing one, the default configuration of VP-SDE (Song et al., 2020b): βt = 0.1 + 19.9t and νt = 1− exp(−0.1t−9.95t2) eq. (76). The integration duration was T = 1, and the step size h was constant, i.e. h = T/N where N is the number of refinement steps. As a quality assessment metric, we used the Fréchet Inception Distance (FID) (Heusel et al., 2017). To evaluate FIDs, we used the pretrained Inception v3 checkpoint (Szegedy et al., 2016), and resized all images to 299× 299× 3 by bilinear interpolation before feeding them to the Inception network. For each condition, 10,000 images were randomly generated to compute the FID score. Note that in this experiment, the computational resources for training were limited, and training was stopped before it fully converged (only 0.1 M steps, while in some other papers the number of training steps was e.g. 1.3 M steps in (Song et al., 2020b)). Therefore, it would be necessary to observe relative comparisons between samplers rather than directly comparing these FID value to those presented in other papers. Results: Figure 1 and Figure 2 show random samples for each sampler. More examples are available in § G. The deterministic samplers considered in this paper generated plausible images much faster than the vanilla Euler-Maruyama sampler. Figure 3a and Figure 3b reports the FID scores. From these figures, the following observations can be made. First, the proposed Quasi-Taylor methods have about the same or slightly better than DDIM. The reason for this is discussed in the next section § 5. We also found that the Runge-Kutta methods reduces FID in fewer steps overall. However, they also hit bottom faster. 
This may be due to the effect of the singularity at the time origin (see § D) in the final step. (This can be seen in Figure 16. In the second right column, the Runge-Kutta methods produce images similar to the other deterministic samplers, but the rightmost ones seem to be slightly noisier than the others). Even though the ideal derivatives are only approximations and contain some errors, the convergence destinations of Quasi-Taylor methods were almost the same as the Runge-Kutta methods. This suggests that the error in the ideal derivatives is actually hardly a problem, because in regions where the approximation error is large, the state xt is noisy to begin with (e.g. left 2/3 figures in Figure 16), and the approximation error is negligible compared to the noise that was originally there. The proposed stochastic sampler (Itô-Taylor) also showed sufficiently competitive results, in terms of both FID scores and visual impression. Comparison of the figures in § G (e.g. Figure 21) confirms that the Itô-Taylor method empirically reaches almost the same target as Euler-Maruyama method much more accurately, and it could be expected to be a safe alternative to Euler-Maruyama method when stochastic sampling is important. 5 DISCUSSION: RELATIONSHIP WITH DDIM In the above experiment, the performance of the proposed Quasi-Taylor methods are found to be almost equivalent to that of DDIM. In fact, despite having distinctly different derivation logics, the proposed method and DDIM actually agree, at least up to the 3rd order terms of h. Therefore, it is not surprising the results are similar; and the smaller h is, the closer the results are. This can be quickly verified by doing a Taylor expansion of the coefficients of eq. (15), i.e., αt−hαt and (σt−h − αt−h αt σt), w.r.t. h. Although it is tedious to perform this calculation by hand, the computer algebra systems e.g. SymPy immediately calculate it. For this computation, see § C. This finding that truncating DDIM at the 2nd or 3rd order of h yields exactly the same algorithms as the proposed Quasi-Taylor methods may be a useful insight for DDIM users, even if it does not lead them to switch the regular sampler from DDIM to Quasi-Taylor. That is, it offers an option of truncating the higher-order terms of DDIM. 6 CONCLUDING REMARKS This paper proposed a Taylor-expansion approach for diffusion generative models, particularly the Probability Flow ODE (PF-ODE) and the reverse-time SDE (R-SDE) solvers. The assumptions to derive our sampler were minimalistic, and the derivation process was straightforward. We just substituted the derivatives in the Taylor series by ideal ones. The obtained Quasi-Taylor and Quasi-Itô-Taylor samplers performed better than or on par with DDIM and Runge-Kutta methods. This fact implicitly supports the validity of our approximations. Conversely, if we could find some examples where the Quasi-Taylor methods, DDIM and RK methods gave decisively different results, we might be able to gain a deeper understanding of the structure of data manifold and the fundamentals of diffusion models by investigating the causes of discrepancy. Reproducibility Statement Pseudocodes of the proposed methods are available in § F, and the derivation of the proposed method is described in § B.1, § B.3. The experiment is based on open source code with minimal modifications to match the proposed method, and all the data used in this paper are publicly available. Experimental conditions are elaborated in § 4. 
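To complement the pseudocode referenced above (§ F), here is a minimal sketch of one refinement step of the deterministic Quasi-Taylor sampler (Algorithm 1), truncated at the h² terms of eqs. (23) and (24); beta, beta_dot, nu and score_model are placeholders for the schedule, its derivative and the trained network, so this illustrates the update rule rather than reproducing the exact experimental code.

import numpy as np

def quasi_taylor2_step(x, t, h, score_model, beta, beta_dot, nu):
    # One step of Algorithm 1 with the series of eqs. (23)-(24) truncated at O(h^2).
    b, bd, v = beta(t), beta_dot(t), nu(t)
    rho = 1.0 + 0.5 * b * h + (h**2 / 4.0) * (0.5 * b**2 - bd)        # eq. (23)
    mu  = -0.5 * b * h + (h**2 / 4.0) * (bd - 0.5 * b**2 / v)         # eq. (24)
    return rho * x + mu * score_model(x, t) / np.sqrt(v)              # eq. (22)

# A full run starts from x_T ~ N(0, I) and iterates this step N = T / h times;
# since rho and mu depend only on t and h, they can be precomputed for all steps.

Adding the h³ terms of eqs. (23)-(24) gives the Taylor-3rd variant, and the stochastic Quasi-Itô-Taylor sampler of Algorithm 2 follows the same pattern with the extra noise term of eq. (27).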
Ethics Statement As a final note, negative aspects of generative models are generally pointed out, such as the risk of reproducing bias and discrimination in training data and the risk of being misused for deep fakes. Since this method only provides a solution to existing generative models, it does not take special measures against these problems. Maximum ethical care should be taken in the practical application of this method. A.3 COMPARISON OF THE EMPIRICAL SCORE FUNCTION AND THE SINGLE POINT APPROXIMATION Let us empirically validate the accuracy of single point approximation using real data as follows, • D = {MNIST (LeCun et al., 2010) 60,000 samples}, • D = {CIFAR-10 (Krizhevsky, 2009) 50,000 samples}. Since the true score function cannot be determined without knowing the true density (which will be possible with synthetic data, but discussing such data will not be very interesting here), the empirical score function was calculated using the real data D above as follows, True Score = ∇ log p(xt, t) = Ep(x0)[q(x0 | xt)∇ log p(xt, t | x0, 0)] ≈ 1|D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)] =: Empirical Score. (45) The evaluation of empirical score function using the entire dataset is unrealistic if the dataset D is large, but it is feasible if D is a small dataset like MNIST and CIFAR-10. In order to evaluate the accuracy of single point approximation, we evaluated following three metrics. • Relative L2 error between the empirical score function and∇ log p(xt, t | x0, 0), • Cosine similarity between the empirical score function and∇ log p(xt, t | x0, 0), • Entropy of q(x0 | xt). Figure 6 shows the relative L2 distance, for both datasets. Figure 7 similarly show the distribution (random 10,000 trials) of the cosine similarity, and Figure 8 shows the entropy. Dashed curves indicate the bounds evaluated in eq. (31) and eq. (32). These figures show that the range of intermediate region between Phase (1) and Phase (2) will not have impact in practical situations since we do not evaluate the neural network Sθ(·, ·) in this range so many times (i.e., ᾱt ∼ 10−3 to 10−1 ⇔ νt ∼ 0.999 to 0.9). Moreover, the approximation accuracy is still very high even in this region. Furthermore, although MNIST and CIFAR-10 are quite “low-dimensional” for real-world images, approximations are established with such high accuracy. Therefore, it is expected to be established with higher accuracy for more realistic images. B ON THE IDEAL DERIVATIVE APPROXIMATION Thus, we can assume that the single point approximation almost always holds practically. −Sθ(xt, t)√ νt model≈ ∇xt log p(xt, t) almost equal≈ ∇xt log p(xt, t | x(i)0 , 0) = − xt − √ 1− νtx(i)0 νt . Therefore, we may also expect that the similar approximation will be valid for their derivatives. Of course, strictly speaking, such an expectation is mathematically incorrect. For example, let g(x) = f(x) + ε sinωx, then the difference g(x) − f(x) = ε sinωx goes to zero as ε → 0, but the difference of derivatives g′(x)− f ′(x) = εω cosωx does not if ω →∞ faster than 1/ε. If the error between them in the Fourier domain is written as E(ω) = G(ω) − F (ω), then the L2 error between the derivatives is ‖g′(x) − f ′(x)‖22 = ‖ωE(ω)‖22 × const (Parseval’s theorem). In other words, the single point approximation does not necessarily imply the ideal derivative approximation. If it is to be mathematically rigorous, it must be supported by other nontrivial knowledge on the data manifold. 
This nontrivial leap is the most important “conjecture” made in this paper and its theoretical background should be more closely evaluated in the future. B.1 DERIVATION OF THE “IDEAL DERIVATIVES” Because of the discussion in § A, the true score function ∇xt log p(xt, t) is finely approximated by a single point approximation ∇xt log p(xt, t | x0, 0). Now we may also assume that the derivatives of both will also be close. In this paper, we are interested in the Taylor expansion of the following form (see also § E.1.1), ψ(xh, h) = ψ(x0, 0) + ∞∑ k=1 hk k! (∂t + a(xt, t) · ∇xt)k ψ(xt, t) ∣∣∣∣ t=0 . (46) If the function ψ(xt, t) is separable in each dimension (i.e., ∂xiψj = 0 for i 6= j), the following relation holds, (a(xt, t) · ∇xt)ψ(xt, t) = a(xt, t) ∇xt ψ(xt, t), (47) where is the element-wise product or operation. If a(xt, t) is also separable in each dimension4 the Taylor series is formally rewritten as follows, ψ(xt, t) = ψ(x0, 0) + ∞∑ k=1 tk k! ( 1∂t + a(xt, t) ∂xt )k ψ(xt, t) ∣∣∣∣ t=0 (48) where ∂xt := ∇xt is the element-wise derivative operator. This is formally the same as the 1-dim Taylor series. Therefore, it is sufficient to consider the 1-dim Taylor series first, and parallelize each dimension later. Thus the derivatives we actually need are the following two. ∂xtSθ(xt, t) = ∇xt Sθ(xt, t), ∂tSθ(xt, t) = (1∂t) Sθ(xt, t). (49) B.1.1 SPATIAL DERIVATIVE ∂xtSθ(xt, t) := ∇xt Sθ(xt, t) Let us first compute the spatial derivative of the conditional score function. (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = (∑ i ai∂xti ) xt − √ 1− νtx0√ νt 4In general, (a · ∇)2 = ( ∑ i ai∂i) 2 = ( ∑ i ai∂i)( ∑ j aj∂j) = ∑ i ai ∑ j(∂iaj + aj∂i∂j). If a is separable in each dimension, the ∂iaj(i 6= j) terms vanish, and (a · ∇)2 = ∑ i(ai∂iai + ∑ j aiaj∂i∂j). If the function ψ(xt, t) is separable in each dimension, then (a · ∇)2ψk = ∑ i(ai∂iai + ∑ j aiaj∂i∂j)ψk = (ak∂kak + a 2 k∂ 2 k)ψk. Thus we can formally write (a · ∇)2ψ = (a ∇ a + a a ∇ ∇) ψ = a (∇ a+ a ∇ ∇) ψ = a ∇ (a ∇) ψ = (a ∇ )2ψ = (a ∂x)2ψ. (Note that the operator (a · ∇) is scalar while (a ∂x) is d-dim vector.) We can similarly show (a · ∇)kψ = (a ∂x)kψ for k ≥ 3. = 1√ νt (∑ i ai∂xti ) (xt − √ 1− νtx0)1 ...(∑ i ai∂xti ) (xt − √ 1− νtx0)d = 1√ νt (∑ i ai∂xti ) (xt 1 −√1− νtx01) ...(∑ i ai∂xti ) (xt d −√1− νtx0d) = 1√ νt ( a1∂xt1 ) (xt 1 −√1− νtx01) ...( ad∂xtd ) (xt d −√1− νtx0d) = 1√ νt a1... ad = 1√ νt a = a 1√ νt 1. (50) Here, we used the notation xti to denotes the i-th component of a vector xt. Note that up to this point in the discussion, there have been no approximations, but strict ones. Now let us consider the approximation. Because of the single point approximation, we may assume that the derivative of the integrated score function will also be approximated by the derivative of the conditional score function, i.e., (a · ∇xt)(− √ νt∇xt log p(xt, t)) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)). (51) As the neural network Sθ(xt, t) is trained so that it approximates the integrated score function, we can also assume the following relation, (a · ∇xt)Sθ(xt, t) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = 1√ νt a. (52) Thus we have obtained the ideal spatial derivative of the neural network. We can also formally write the spatial derivative as follows using the above notation, a (∂xtSθ(xt, t)) = a 1√ νt 1. (53) We can also write it as ∂xtSθ(xt, t) = 1√ νt 1. (54) B.1.2 TIME DERIVATIVE −∂tSθ(xt, t) Next, let us compute −∂t(− √ νt∇xt log p(xt, t | x0, 0)). 
During the computation, x0 is replaced by the relation x0 = 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) . (55) We also use the following relations between νt, βt, which is immediately obtained from the definition of νt, ν̇t = (1− νt)βt. (56) Using the above information, we may compute the temporal derivative of the conditional score function as follows. − ∂t(− √ νt∇xt log p(xt, t | x0, 0)) = −∂t xt − √ 1− νtx0√ νt = − 1√ νt ( 1 2 ν̇t(1− νt)−1/2x0 ) − (xt − √ 1− νtx0) ( −1 2 ν̇tν −3/2 t ) = − ν̇t 2ν 3/2 t ( νt√ 1− νt x0 − (xt − √ 1− νtx0) ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt x0 ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) ) = − ν̇t 2ν 3/2 t (( −1 + 1 1− νt ) xt + 1 1− νt (νt∇xt log p(xt, t | x0, 0)) ) = − 1 2ν 3/2 t ν̇t 1− νt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − 1 2ν 3/2 t βt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) . (57) (Note that this calculation is exact, and no approximation is injected.) Because of the single point approximation, we may assume −∂t(− √ νt∇xt log p(xt, t)) ≈ −∂t(− √ νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) ≈ − βt 2 √ νt (xt +∇xt log p(xt, t)) , (58) and therefore, we can also assume that the temporal derivative of the neural network is approximated as −∂tSθ(xt, t) ≈ − βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) . (59) The “derivatives" have some good points. For example, the partial derivatives commute, ∂xt∂tSθ(xt, t) = ∂t∂xtSθ(xt, t). (60) B.2 COMPARISON OF THE EMPIRICAL SCORE DERIVATIVES AND IDEAL DERIVATIVES Let us empirically validate that idela approximation using real data similarly as above. However, since the equations will become very complicated if we evaluate the exact empirical score derivatives, we instead used finite differences as the ground truths. That is, let S(x, t) be the routine that computes the empirical score function as follows, S(x, t) = − √ νt |D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)], (61) and we evaluated the empirical score derivatives by the finite differences as follows5, Empirical t Deriv: ∂tS ≈ S(xt, t+ ε)− S(xt, t) ε (62) Empirical xt Deriv: (a · ∇xt)S ≈ S(xt + εa, t)− S(xt, t) ε , where a ∼ N (0, I). (63) where ε should be a sufficiently small value, and we used ε = 10−3 here. We compared these empirical derivatives with the ideal derivatives using MNIST and CIFAR-10. Ideal t Deriv: ∂tSθ = βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) = βt 2 √ νt ( xt − xt − √ 1− νtx0 νt ) Ideal xt Deriv: (a · ∇xt)Sθ = 1√ νt a As the ideal derivatives require the specific function forms of diffusion and variance schedules, we tested on following two noise schedules. Linear schedule We first tested on the linear schedule eq. (76), where β0 = 0.1 and β1 = 9.95. This is the same schedule as the one used in the main text. Figure 9 shows the relativeL2 error and the cosine similarity between the ideal t derivative eq. (21) and the empirical t derivative eq. (62), in which it is observed that they are very close when 0 / t / 0.5, while the approximation accuracy decreases as t increases. However, even in that case, there tends to be an overall positive correlation. It can also be observed that there is an error that seems to originate from the singularity of time origin when t ≈ 0. (See also § D.2.) For the x derivative (Figure 9), on the other hand, we can confirm that the errors between the ideal x derivative eq. (21) and empirical x derivative eq. (62) are generally very highly correlated, except around t ≈ 0.5. 
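For readers who wish to reproduce this comparison, the sketch below mirrors eqs. (61)-(63): it forms the empirical score over a dataset, takes finite differences in t and along a random direction a, and measures cosine similarity against the ideal derivatives of eq. (21). The tiny random "dataset" and the linear-schedule constants are placeholders standing in for the MNIST/CIFAR-10 setup of the figures.

import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 0.1, 9.95
beta = lambda t: beta0 + 2 * beta1 * t
nu = lambda t: 1 - np.exp(-beta0 * t - beta1 * t**2)

D = rng.standard_normal((16, 8))   # toy dataset (placeholder for MNIST/CIFAR-10), shape (N, d)

def S_emp(x, t):
    # eq. (61): -sqrt(nu_t) * sum_{x0 in D} q(x0 | x_t) * grad log p(x_t, t | x0, 0)
    v = nu(t)
    resid = x - np.sqrt(1 - v) * D
    grads = -resid / v
    logq = -np.sum(resid**2, axis=1) / (2 * v)   # uniform empirical prior over D
    q = np.exp(logq - logq.max()); q /= q.sum()
    return -np.sqrt(v) * (q[:, None] * grads).sum(axis=0)

t, eps = 0.3, 1e-3
x = np.sqrt(1 - nu(t)) * D[0] + np.sqrt(nu(t)) * rng.standard_normal(8)
a = rng.standard_normal(8)

emp_dt = (S_emp(x, t + eps) - S_emp(x, t)) / eps                                 # eq. (62)
emp_dx = (S_emp(x + eps * a, t) - S_emp(x, t)) / eps                             # eq. (63)
idl_dt = beta(t) / (2 * np.sqrt(nu(t))) * (x - S_emp(x, t) / np.sqrt(nu(t)))     # eq. (21)
idl_dx = a / np.sqrt(nu(t))                                                      # eq. (21)

cos = lambda u, w: u @ w / (np.linalg.norm(u) * np.linalg.norm(w))
print(cos(emp_dt, idl_dt), cos(emp_dx, idl_dx))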
Modified tanh schedule We also tested on another noise schedule, the modified tanh schedule eq. (79) which does not have the singularity at the time origin. The parameters A, k were determined so that ν0 = 0.001 and ν1 = 0.999. Figure 11 and Figure 12 show the results. In this case, the overall trend is similar to the linear schedule, but we can observe that the singularity of the time origin of the t derivative is eliminated. 5To verify the empirical xt derivative, let us consider a simple case of three-variable function f(x, y, z). As its total derivative is df = ∂xfdx + ∂yfdy + ∂zfdz, we have f(x + a, y + b, z + c) − f(x, y, z) = (a∂x + b∂y + c∂z)f(x, y, z) for small a, b, c. Let a = εa′, b = εb′ and c = εc′, then f(x + εa′, y + εb′, z + εc′)− f(x, y, z) = ε(a′∂x + b′∂y + c′∂z)f(x, y, z). Therefore, we can write the spatial derivative as (a′∂x + b ′∂y + c ′∂z)f(x, y, z) = limε→0 1 ε (f(x+ εa′, y + εb′, z + εc′)− f(x, y, z)). B.3 THE DERIVATIVES L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) The computation of the derivative L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) does not require any particular nontrivial process. All we have to do is rewrite a term every time we encounter a derivative of Sθ(xt, t) or νt, and the rest is at the level of elementary exercises in introductory calculus. To execute this symbolic computation, the use of computer algebra systems will be a good option. It should be noted, however, that some implementation tricks to process such custom derivatives are required (in other words, the term-rewriting system should be customized). The results are shown below. Although these expressions appear complex at first glance, the code generation system can automatically generate code for such expressions. L[(−f̄[)(xt, t) = ( β2t 4 − β̇t 2 ) xt + ( β̇t 2 √ νt − β 2 t 4ν 3/2 t ) Sθ(xt, t) (64) L](−f̄])(xt, t) = ( β2t 4 − β̇t 2 ) xt + β̇t√ νt Sθ(xt, t) (65) G](−f̄])(xt, t) = ( 1 2 − 1 νt ) β 3/2 t (66) L]g(t) = − β̇t 2 √ βt (67) G]g(t) = 0. (68) We may also compute higher order derivatives, though we do not use them in this paper except L[L[(−f̄[), L[L[(−f̄[)(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt + ( β3t (−ν2t + 3νt − 3) 8ν 5/2 t + 3βtβ̇t 4ν 3/2 t − β̈t 2 √ νt ) Sθ(xt, t) (69) L]L](−f̄])(xt, t) = ( β3t 8 − 3βtβ̇t 4 + β̈t 2 ) xt − β3t + 4β̈t 4 √ νt Sθ(xt, t) L]G](−f̄])(xt, t) = √ βt ν2t ( νt(2β 2 t + 3β̇t) 2 − β2t − 3ν2t β̇t 4 ) G]L](−f̄])(xt, t) = √ βt ( β2t 4 − β̇t 2 + β̇t νt ) G]G](−f̄])(xt, t) = 0 L]L]g(t) = 2βtβ̈t − β̇2t 4β 3/2 t L]G]g(t) = 0 G]L]g(t) = 0 G]G]g(t) = 0. As we can see, no factors other than integers, xt, Sθ(xt, t), νt, βt and derivatives of βt appear. This is also true for higher order derivatives, which can be easily shown. SymPy Code Snippet for Automatic Symbolic Computation of Derivatives The following code snippet is a minimalistic example of SymPy code to compute the above derivatives using the customized derivative method. We used SymPy 1.11 to test the following code snippet. 
from sympy import Function, symbols, sqrt, simplify x, t = symbols(’x t’) # x, t B = Function(’beta’) # βt # define customized derivatives of νt class nu(Function): def fdiff(self, argindex=1): t, = self.args return (1 - nu(t)) * B(t) # ν̇t = (1− νt)βt # define customized derivatives of Sθ(x, t) class S_theta(Function): def fdiff(self, argindex=1): x, t = self.args if argindex == 1: # ∂/∂x d = 1 / sqrt(nu(t)) elif argindex == 2: # ∂/∂t d = (x - S_theta(x, t)/sqrt(nu(t))) * B(t) / (2 * sqrt(nu(t))) return d # define f̄[ class f_flat(Function): @classmethod def eval(cls, x, t): return - B(t) * x / 2 + S_theta(x, t) * B(t) / (2 * sqrt(nu(t))) # define differential operator L[ class L_flat(Function): @classmethod def eval(cls, fxt): return -fxt.diff(t) - f_flat(x, t) * fxt.diff(x) # show each derivative print(f_flat(x, t)) print(simplify(L_flat(f_flat(x,t)))) # L[ f̄[(xt, t); see eq. (64) print(simplify(L_flat(L_flat(f_flat(x,t))))) # L[L[ f̄[(xt, t); see eq. (69), # we can similarly define f̄], L], G] and compute other derivatives. The result will look like [Out 1] − xβ(t) 2 + Sθ(x, t)β(t) 2 √ ν(t) [Out 2] − xβ 2(t) 4 + x ddtβ(t) 2 + Sθ(x, t)β 2(t) 4ν 3 2 (t) − Sθ(x, t) d dtβ(t) 2 √ ν(t) [Out 3] − xβ 3(t) 8 + 3xβ(t) ddtβ(t) 4 − x d2 dt2 β(t) 2 + Sθ(x, t)β 3(t) 8 √ ν(t) − 3Sθ(x, t)β 3(t) 8ν 3 2 (t) + 3Sθ(x, t)β 3(t) 8ν 5 2 (t) − 3Sθ(x, t)β(t) d dtβ(t) 4ν 3 2 (t) + Sθ(x, t) d2 dt2 β(t) 2 √ ν(t) and so on. Some additional coding techniques can further improve the readability of these expressions, but there will be no need to go any deeper into such subsidiary issues here. Thus obtained symbolic expressions can be automatically converted into executable code in practical programming languages including Python and C++ using a code generator, though the authors hand-coded the obtained expressions in Python for the experiments in this paper. C TRUNCATED DDIM IS EQUIVALENT TO THE QUASI-TAYLOR SAMPLER Using SymPy, we can easily compute the Taylor expansion of a given function. For example, the following code sympy.series(B(t+h), h, 0, 4) yields the result like β(t) + h d dξ1 β(ξ1) ∣∣∣∣ ξ1=t + h2 d 2 dξ21 β(ξ1) ∣∣∣ ξ1=t 2 + h3 d 3 dξ31 β(ξ1) ∣∣∣ ξ1=t 6 +O ( h4 ) . Similarly, using the relation ν̇t = (1− νt)βt, we can easily compute the Taylor expansion of νt−h as follows. sympy.series(nu(t-h), h, 0, 3) νt−h = ν(t)+h (β(t)ν(t)− β(t))+h2 β2(t)ν(t) 2 − β 2(t) 2 − ν(t) ddξ1 β(ξ1) ∣∣∣ ξ1=t 2 + d dξ1 β(ξ1) ∣∣∣ ξ1=t 2 +O (h3) Using this functionality of SymPy, we can easily compute the Taylor expansion of the DDIM (Song et al., 2020a). Let us recall that the DDIM algorithm is given by eq. (15), and using our notation α = √ 1− ν and σ = √ν, it can be written as follows, DDIM: xt−h ← √ 1− νt−h 1− νt︸ ︷︷ ︸ =:ρDDIMt,h xt + (√ νt−h − √ 1− νt−h 1− νt νt ) ︸ ︷︷ ︸ =:µDDIMt,h Sθ(xt, t). Then using SymPy, the Taylor expansion of ρDDIMt,h and µ DDIM t,h are computed as follows, ρDDIMt,h = 1 + βt 2 h− h 2 4 ( β2t 2 − β̇t ) + h3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + o(h3), (70) √ νtµ DDIM t,h = − βt 2 h+ h2 4 ( β̇t − β2t 2νt ) + h3 4 ( −β 3 t 12 + β3t 4νt − β 3 t 4ν2t + βtβ̇t 2νt − β̈t 3 ) + o(h3). (71) Although it has been known that DDIM corresponds to the Euler method up to 1st order terms (Song et al., 2020a; Salimans & Ho, 2022), this expansion gives better understanding of higher order terms. That is, these are exactly equivalent to our deterministic Quasi-Taylor sampler eq. (23) and eq. (24) up to 3rd-order terms. 
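The concrete SymPy calls behind eqs. (70)-(71) can be sketched as follows; this assumes the symbol t and the customized classes nu and B from the snippet in § B.3 above are already in scope, and leaves the final regrouping into the exact form of eqs. (70)-(71) to the reader.

from sympy import symbols, sqrt, series, simplify

h = symbols('h', positive=True)

# DDIM coefficients in our notation (alpha = sqrt(1 - nu), sigma = sqrt(nu)); see eq. (15)
rho_ddim = sqrt((1 - nu(t - h)) / (1 - nu(t)))
mu_ddim = sqrt(nu(t - h)) - rho_ddim * sqrt(nu(t))

# Taylor expansions in the step size h; compare with eq. (70) and eq. (71)
print(simplify(series(rho_ddim, h, 0, 4).removeO()))
print(simplify(series(sqrt(nu(t)) * mu_ddim, h, 0, 4).removeO()))

Truncating the printed series after the h² or h³ terms reproduces the Quasi-Taylor coefficients of eqs. (23)-(24), which is exactly the equivalence stated above.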
This fact may suggest that the assumptions behind the DDIM derivation are logically equivalent to our assumptions of ideal derivatives. The advantage of the proposed Quasi-Taylor method is that the order at which the Taylor expansion is truncated can be chosen as a hyperparameter. DDIM, on the other hand, automatically incorporates terms of much higher order, leaving no room for order tuning.

D ON THE NOISE SCHEDULE

D.1 BACKGROUND: PICARD-LINDELÖF THEOREM

Let us consider a 1-dim deterministic system ẋ(t) = a(x(t), t). It is well known that this ODE has a unique solution if a(x, t) is Lipschitz continuous w.r.t. x and continuous w.r.t. t (Picard-Lindelöf theorem). Otherwise, ODEs often behave less favorably. (Similar Lipschitz conditions are also required for SDEs.)

Example 1. The ODE ẋ = x², x(0) = 1 has the solution x = 1/(1 − t) when t < 1, and it blows up at t = 1. It is usually impossible to consider what happens after t > 1 in ordinary contexts.

Example 2. Another well-known example is ẋ = √x, x(0) = 0. It has a solution x = t²/4, but x ≡ 0 is also a solution. It actually has infinitely many solutions x = 0 (if t ≤ t0), x = (t − t0)²/4 (if t > t0), where t0 ≥ 0 is an arbitrary constant.

Example 3. Let us consider the following ODE,

ẋ = −((t − 1)/(1 − e^{−(t−1)²})) x,  x(0) = 1,   (72)

which is a simplified model of the linear schedule eq. (76). The exact solution is

x = √(e − 1)/√(e^{(t−1)²} − 1),   (73)

which diverges at t = 1. In this case, a(x, t) = −x·(t−1)/(1 − e^{−(t−1)²}) is not Lipschitz continuous, as the Taylor expansion of the denominator is 1 − e^{−(t−1)²} = (t − 1)² + O((t − 1)⁴), so a(x, t) is approximately −x/(t − 1) near t = 1.

In these cases, the coefficient a(·, ·) is not Lipschitz continuous. Even these seemingly simplest ODEs behave very complexly unless the coefficients are carefully designed. In PF-ODE, the Lipschitz condition is written as

Lip(f̄[) = |∂xt ((βt/2) xt − (βt/(2√νt)) Sθ(xt, t))| < ∞.   (74)

Using the ideal derivative of Sθ(xt, t), this condition translates into

Lip(f̄[) = |βt (1 − 1/νt)| = |ν̇t/νt| < ∞.   (75)

D.2 SPECIFIC SCHEDULES

Including this point, the necessary conditions for a variance schedule νt can be summarized as follows.
1. ν0 ≈ 0, so that the initial density p(x0, 0) is close to the true data density.
2. νT ≈ 1, so that the terminal density p(xT, T) is close to the Gaussian.
3. Sufficiently smooth, so that βt = −(d/dt) log(1 − νt) is well defined.
   • In addition, βt should also be smooth so that the Taylor schemes can be used.
4. Monotonic (s < t =⇒ νs ≤ νt), to make βt non-negative.
5. Preferably, the drift coefficient f̄[ should be Lipschitz continuous so that PF-ODE has a unique solution, i.e., Lip(f̄[) ≈ |ν̇t/νt| < ∞.

The following two scheduling functions, which are common in diffusion generative models, satisfy conditions 1, 2, 4 above (the linear schedule also satisfies the 3rd condition),

Linear: νt = 1 − e^{−β0 t − β1 t²},  βt = β0 + 2β1 t,   (76)
Cosine: νt = 1 − C cos²((π/2)·(t/T + ς)/(1 + ς)),  βt = (π/T) tan((π/2)·(t/T + ς)/(1 + ς)) if 0 ≤ t ≤ T′, and βt = Θ if T′ < t ≤ T,   (77)

where ς > 0 is a small constant, C = 1/cos²(πς/(2(1 + ς))) is a constant chosen to make ν0 = 0, and the threshold constant is Θ = β_{T′}. However, these common schedules do not satisfy the 5th condition that the drift coefficient f̄[ be Lipschitz continuous. Indeed, it is easily verified that lim_{t→0} ν̇t/νt = ∞ in both cases, since ν0 = 0 but ν̇0 > 0.
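To make the last point concrete, the following small sketch (ours; the hyperparameter values are only illustrative) implements the linear schedule eq. (76) and the modified tanh schedule of eq. (79) below with λ(t) = log(1 + Ae^{kt}), and prints |ν̇t/νt| = βt(1 − νt)/νt near t = 0: the ratio blows up for the linear schedule (ν0 = 0) but stays bounded for the modified tanh schedule (ν0 > 0).

import numpy as np

# Linear schedule, eq. (76): nu_t = 1 - exp(-b0*t - b1*t^2), beta_t = b0 + 2*b1*t
b0, b1 = 0.1, 9.95

def nu_linear(t):
    return 1.0 - np.exp(-b0 * t - b1 * t**2)

def beta_linear(t):
    return b0 + 2.0 * b1 * t

# Modified tanh schedule, eq. (79), with lambda(t) = log(1 + A*exp(k*t));
# A, k picked so that roughly nu_0 ~ 0.001 and nu_1 ~ 0.999 (illustrative values)
A, k = 0.065, 11.0

def lam(t):
    return np.log(1.0 + A * np.exp(k * t))

def dlam(t):
    return k * A * np.exp(k * t) / (1.0 + A * np.exp(k * t))

def nu_tanh(t):
    return np.tanh(lam(t) / 2.0) ** 2

def beta_tanh(t):
    return dlam(t) * np.tanh(lam(t) / 2.0)

# |nu_dot / nu| = beta_t * (1 - nu_t) / nu_t, cf. eq. (75)
for t in (1e-4, 1e-2, 0.5):
    lin = beta_linear(t) * (1.0 - nu_linear(t)) / nu_linear(t)
    tnh = beta_tanh(t) * (1.0 - nu_tanh(t)) / nu_tanh(t)
    print(f"t = {t:.0e}: linear {lin:.3g}, modified tanh {tnh:.3g}")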
Nevertheless, t = 0 is the only singular point, and since no function value or derivative at t = 0 is evaluated by the numerical methods (except by the Runge-Kutta methods), this point can practically be ignored. Note that we can also consider other schedule functions, such as the sigmoid function and the hyperbolic tangent, which satisfy conditions 2, 3, 4, 5 but do not rigorously satisfy the 1st condition (although, if ν0 is less than or equal to the level of the quantization error in the data, we may consider the first condition to be essentially satisfied),

Sigmoid: νt = 1/(1 + e^{−A(t−k)}),  βt = A νt,   (78)
Modified Tanh: νt = tanh²(λ(t)/2),  βt = λ̇(t) tanh(λ(t)/2),   (79)

where the parameter function λ(t) has some options, such as λ(t) = log(1 + Ae^{kt}), and A > 0, k > 0 are hyperparameters.

D.3 HOW TO AVOID THE TIME ORIGIN SINGULARITY IN THE RUNGE-KUTTA METHODS

When using the Heun and classical RK4 methods, the function f̄[(xt, t) is evaluated at time t = 0. However, since f̄[(xt, t) contains a term proportional to 1/√νt, it diverges at t = 0 if the linear schedule eq. (76) or the cosine schedule eq. (77) is used. The simplest way to avoid this is to replace the function value f̄[(x0, 0) with f̄[(xε, ε), where ε > 0 is a sufficiently small constant, whenever the need to evaluate the function at time t = 0 arises. The same thing could happen at t = T if the cosine schedule and DDIM were used simultaneously, but this can be handled in the same way.

E SUPPLEMENT ON FUNDAMENTALS

For convenience, let us summarize some basics behind the ideas in this paper. The contents of this section are not particularly novel, but the authors expect that this section will give a better understanding of the ideas of this paper and of the continuous-time approach to diffusion generative models.

E.1 TAYLOR EXPANSION AND ITÔ-TAYLOR EXPANSION

E.1.1 TAYLOR EXPANSION OF DETERMINISTIC SYSTEMS

1-dimensional case  Let us first consider a 1-dim deterministic system ẋ(t) = a(x(t), t), where a(·, ·) is sufficiently smooth, and let us derive the Taylor series expression of the solution of this ODE. Let ϕ(x(t), t) be a differentiable function. Its total derivative is written as

dϕ = (∂ϕ/∂t) dt + (∂ϕ/∂x) dx = (∂ϕ/∂t) dt + (∂ϕ/∂x)(dx/dt) dt = (∂ϕ/∂t + (∂ϕ/∂x) a(x, t)) dt = (∂/∂t + a(x, t) ∂/∂x) ϕ dt =: (L[ϕ) dt.   (80)

By integrating both sides from 0 to t, we have

ϕ(x(t), t) = ϕ(x(0), 0) + ∫_0^t (L[ϕ)(x(s), s) ds.   (81)

We use this formula recursively to obtain the Taylor series of the above system. Let ϕ(x(t), t) = x(t); then we have

x(t) = x(0) + ∫_0^t (L[x)(x(s), s) ds = x(0) + ∫_0^t a(x(s), s) ds.   (82)

Let ϕ(x(t), t) = a(x(t), t); then we have

a(x(t), t) = a(x(0), 0) + ∫_0^t (L[a)(x(s), s) ds.   (83)

Using the above two
1. What is the main contribution of the paper regarding the acceleration of denoising diffusion models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical foundation and experimental results?
3. Do you have any concerns or questions about the paper's discussion of the Backward Kolmogorov Equation and its connection to time-reversal dynamics?
4. How does the paper's method compare to other approaches in the literature that accelerate diffusion models, such as knowledge distillation methods like [3] or improved samplers like [4, 5]?
5. What are the limitations of the paper's method, especially regarding its deterioration at higher step numbers, and how could these limitations be addressed in future research?
Summary Of The Paper
In this paper, the authors deal with the acceleration of denoising diffusion models. In particular, they propose improved higher-order samplers in order to reduce the number of steps required at sampling time. The acceleration proposed in the paper is described for both Ordinary Differential Equation (ODE) and Stochastic Differential Equation (SDE) flows. It is based on higher-order integrators [1]. In order to efficiently compute the derivatives, the authors identify the score ∇ log p(x_t) with the conditional score ∇ log p(x_t | x_0), which has a tractable expression. Doing so, they are able to compute efficient approximations of the derivatives. The theoretical and methodological study is complemented with experiments on CelebA 64x64 and CIFAR10.
[1] Kloeden, Platen, Schurz - Numerical Solution of SDE through computer experiments

Strengths And Weaknesses
STRENGTHS:
One of the strengths of this paper is that it introduces a novel way to sample from diffusion models using the theory of higher-order integrators. To the best of my knowledge this approach is novel. I also found the use of the approximation of the score by the conditional score to be quite interesting. The paper is well-written and the ODE, SDE and reverse ODE, SDE are clearly introduced.
WEAKNESSES:
I think the authors make a misleading claim regarding the Backward Kolmogorov Equation (BKE) (by the way, maybe I missed it in the main text but FPE and BKE are never defined; I just assumed that FPE was the Fokker-Planck Equation and BKE the Backward Kolmogorov Equation). I could not find any reference to the Backward Kolmogorov Equation in [1], but maybe I missed something here. However, I disagree that the reverse-time dynamics is somehow associated with the BKE. In fact, the FPE and BKE are dual of each other, but there is no connection here with the time-reversal. The time-reversal dynamics also satisfies FPE and BKE evolutions, but these are not related to the BKE of the forward process (even though one can use the BKE to establish the time-reversal SDE, see [1]).
More importantly, perhaps, I don't really understand why the authors say that the derivatives of the learned score functions cannot be computed. It is easy to perform autodifferentiation w.r.t. the (one-dimensional) time variable, and one can use a vector-Jacobian product to compute the term a · ∇s_θ. I might be missing something, but it seems that these issues can be alleviated by the careful use of automatic differentiation.
Proposition 1 is very poorly worded. What are these "many cases"? I understand that here the authors are trying to justify their method, but this is too hand-wavy. As of now the statement is too imprecise. What are the edge cases? How could we extend this proposition to hold in a more general setting?
The deterioration of the method noted by the authors ("the proposed Quasi-Taylor methods tend to give good results around N = 16, 20 and the FID deteriorates from there") is quite worrisome. This deterioration, which to me is a key limitation of the work, should have been investigated in greater depth.
The FID results provided by the authors are quite surprising. For CelebA, an FID of 20 or so is very large. In the original DDIM paper [2], the authors reported much better results, with an FID around 17 even for 10 steps and an FID around 6 for 100 steps. Can the authors explain this striking discrepancy between the announced numbers and the numbers reported in [2]?
I also think that the authors do not discuss an important part of the literature dealing with the acceleration of diffusion models. I understand that the authors are not going to compare themselves with knowledge-distillation approaches like the one of [3], because these approaches are quite different from the one considered in this paper, which is focused on better samplers. But improved samplers have already been proposed in the literature, like [4, 5] for instance. Comparisons with these methods are important and omitted here.
[1] Song, Sohl-Dickstein, Kingma, Kumar, Ermon, Poole - Score-Based Generative Modeling through Stochastic Differential Equations
[2] Song, Meng, Ermon - Denoising Diffusion Implicit Models
[3] Luhman, Luhman - Knowledge distillation in iterative generative models for improved sampling speed
[4] Liu, Ren, Lin, Zhao - Pseudo Numerical Methods for Diffusion Models on Manifolds
[5] Zhang, Chen - Fast Sampling of Diffusion Models with Exponential Integrator

Clarity, Quality, Novelty And Reproducibility
The presentation of the paper is quite clear (with the exception of the discussion of the BKE, which I found misleading, as emphasized before). The methodological contribution of the paper is quite interesting, but I did not find the theory and experiments of the paper to be compelling. The work is quite novel, and I think that the approximation of ∇ log p(x_t) by the conditional score ∇ log p(x_t | x_0) could be useful. Experimental details to reproduce the introduced method are provided.
ICLR
Title
Quasi-Taylor Samplers for Diffusion Generative Models based on Ideal Derivatives
Abstract
Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless some sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on the numerical schemes derived from the familiar Taylor expansion, which directly solve the ODE/SDE of interest. In general, it is not easy to compute the derivatives that are required in higher-order Taylor schemes, but in the case of diffusion models, this difficulty is alleviated by the trick that the authors call "ideal derivative substitution," in which the higher-order derivatives are replaced by tractable ones. To derive the ideal derivatives, the authors argue that the "single point approximation," in which the true score function is approximated by a conditional one, holds in many cases, and consider the derivatives of this approximation. Applying the thus-obtained quasi-Taylor samplers to image generation tasks, the authors experimentally confirmed that the proposed samplers could synthesize plausible images in a small number of NFEs, and that the performance was better than or at the same level as DDIM and Runge-Kutta methods. The paper also discusses the relevance of the proposed samplers to the existing ones mentioned above.
1 INTRODUCTION
Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied, including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iterative refinement using the diffusion process has been rapidly gaining attention recently as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). The diffusion-based generative models have recently been showing impressive results in many fields including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while the diffusion models have good synthesis quality, it has been said that they have a fatal drawback: they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces the synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, it has been a common research question how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular.
From this motivation, there have already been some studies aiming at reducing the NFEs (See § 2). Among these, Probability Flow ODE (PF-ODE) (Song et al., 2020b) enable efficient and deterministic sampling, and is gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of diffusion process. However, the ODE is eventually solved by using a black-box Runge-Kutta solver in the original paper, which requires several NFEs per step and is clearly costly. Another PF-ODE solver includes DDIM (Song et al., 2020a), and is also commonly used. It is certainly efficient and can generate plausible images. However, it was not originally formulated as a PF-ODE solver, and the relationship between DDIM and PF-ODE is not straightforward. From these motivations, we provide another sampler to solve the same ODE, which performs better than or on par with DDIM. The derivation outline is simple and intuitive: (1) consider the Taylor expansion of the given system, and (2) replace the derivatives in the Taylor series with appropriate functions; that’s all. The contribution of this paper would be as follows: (1) We propose novel samplers for diffusion models based on Taylor expansion of PF-ODE. They outperformed, or were on par with RungeKutta methods. (2) To derive our algorithms, we show that the derivatives of score function can be approximated by simple functions. We call this technique the ideal derivative substitution. (3) It has been known that the 1st order term of DDIM is same as the Euler method for PF-ODE. This paper gives further explanation for higher order terms of DDIM: we show that the proposed Quasi-Taylor method and DDIM are identical at least up to 3rd order terms. (4) The same idea can be naturally extended to derive a stochastic solver for a reverse-time SDE, which we call R-SDE in this paper. 2 BACKGROUND AND RELATED WORK Diffusion Process to draw a new data from a target density: Let us first briefly summarize the framework of the diffusion-based generative models. Following Song et al. (2020b), we describe the mechanisms using the language of continuous-time diffusion process for later convenience. Let us consider “particles” {xt} moving in a d-dim space obeying the following Itô diffusion, SDE: dxt = f(xt, t)dt+ g(xt, t)dBt, (1) where Bt is the d-dim Brownian motion whose temporal increments obeys the standard Gaussian. The drift f(·, ·) is d-dim vector, and the diffusion coefficient g(·, ·) is scalar. The SDE describes the microscopic dynamics of each particle. On the other hand, the “population” of the particles obeying the above SDE, i.e. density function p(xt, t | xs, s), (t > s), follows the following PDEs, which are known as Kolmogorov’s forward and backward equations (KFE and KBE); the former is also known as the Fokker-Planck equation (FPE), see § E.2, FPE: ∂tp(xt, t | xs, s) = −∇xt · f(xt, t)p(xt, t | xs, s) + ∆xt g(xt, t) 2 2 p(xt, t | xs, s), (2) KBE: −∂sp(xt, t | xs, s) = f(xs, s) · ∇xsp(xt, t | xs, s) + g(xs, s) 2 2 ∆xsp(xt, t | xs, s), (3) where ∆x := ∇x ·∇x is Laplacian. (FPE also holds for p(xt, t); consider the expectation Ep(xs,s)[·].) These PDEs enables us to understand the macroscopic behavior of the particle ensemble. For example, if f(x, t) = −∇U(x), g(x, t) = √ 2D, where U(x) a certain potential and D a constant, then we may verify that the stationary solution of FPE is p(x) ∝ e−U(x)/D. It means that we may draw a sample x that follows the stationary density by evolving the SDE over time. 
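As a minimal numerical illustration of this sampling mechanism (our sketch, not taken from the paper), the following simulates the SDE with f(x, t) = −∇U(x) and g = √(2D) for a double-well potential U(x) = x⁴ − 2x², and checks that the long-run samples concentrate around the minima x = ±1, as the stationary density p(x) ∝ e^{−U(x)/D} predicts.

import numpy as np

rng = np.random.default_rng(0)
D_coef = 0.5                                  # diffusion constant D
grad_U = lambda x: 4.0 * x**3 - 4.0 * x       # U(x) = x^4 - 2 x^2

dt, n_steps = 1e-3, 100_000
x = np.zeros(64)                              # 64 independent chains
for _ in range(n_steps):
    x += -grad_U(x) * dt + np.sqrt(2.0 * D_coef * dt) * rng.standard_normal(x.shape)

# Histogram of the final samples: the mass should pile up near x = -1 and x = +1
print(np.histogram(x, bins=np.linspace(-2.0, 2.0, 9))[0])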
This technique is often referred to as the Langevin Monte Carlo method (Rossky et al., 1978; Roberts & Tweedie, 1996). Some of the diffusion generative models are based on this framework, e.g. (Song & Ermon, 2019; 2020), in which the potential gradient∇U(x) is approximated by a neural network. Another systematic approach is considering the reverse-time dynamics (Song et al., 2020b). An approach is based on KBE eq. (3). Roughly speaking, FPE gives information about the future from the initial density, while KBE gives information about what the past states were likely to be from the terminal density. Here, instead of using KBE directly, it is useful to consider a variant of it which is transformed into the form of FPE, because it has an associated SDE that enables the particle-wise backward sampling (Stratonovich, 1965; Anderson, 1982); see also § E.3.2, R-FPE: −∂sp(xs, s | xt, t) = ∇xs · f̄(xs, s)p(xs, s | xt, t) + ∆xs ḡ(xs, s) 2 2 p(xs, s | xt, t) (4) R-SDE: dxs = −f̄(xs, s)(−ds) + ḡ(xs, s)dB̄s. (5) Hereafter, let g(xt, t) = g(t) for simplicity. Then the specific forms of drift and diffusion coefficients are written as follows, R-SDE coeffs: f̄(xt, t) = f̄](xt, t) := f(xt, t)− g(t)2∇xt log p(xt, t), ḡ(t) = g(t). (6) Starting from a certain random variable xT , then by evolving the R-SDE reverse in time, we may obtain a x̂0 which follows p(x0, 0 | xT , T ) (i.e. the solution of R-FPE eq. (4)). Therefore, if the initial density p(x0, 0) of the forward dynamics eq. (2) is the true density, then we may utilize this mechanism as a generative model to draw a new sample x̂0 from it. Another approach is based on FPE eq. (2). By formally eliminating the diffusion term of the FPE for the forward process, we can derive another backward FPE (see also § E.3.1). Being diffusionfree, the backward FPE yields a deterministic ODE, which is called the Probability Flow ODE (PF-ODE) (Song et al., 2020b), and is an example of neural ODEs (Chen et al., 2018). The population density obtained by evolving this system is exactly the same as the above R-SDE. PF-ODE coeffs: f̄(xt, t) = f̄[(xt, t) := f(xt, t)− 1 2 g(t)2∇xt log p(xt, t). ḡ(t) = 0. (7) Some extensions of this framework include as follows. Dockhorn et al. (2021) introduced the velocity variable considering the Hamiltonian dynamics. Another extension is the introduction of a conditioning parameter, and guidance techniques using it (Dhariwal & Nichol, 2021; Ho & Salimans, 2021; Choi et al., 2021) to promote the dynamics to go to a specific class of images, which has achieved remarkable results in text-to-image tasks (Nichol et al., 2021; Ramesh et al., 2022). Variance-Preserving Model (VP-SDE Model): The solution of unconditioned FPE is written as the convolution with the initial density p(x0, 0) and the fundamental solution, or the heat kernel, p(xt, t | x0, 0), which is the solution of the conditional FPE under the assumption that the initial density was delta function, p(x0, 0) = δ(x0−x∗0). Although it is still intractable to solve this problem in general, a well-known exception is the (time-dependent) Ornstein-Uhlenbeck (OU) process where f(xt, t) = − 12βtxt and g(xt, t) = √ βt. βt = β(t) is a non-negative continuous function. The specific form of diffusion coefficient βt has some options: a simplest one would be the linear function, and another would be the cosine schedule proposed in (Nichol & Dhariwal, 2021); see also § D. 
In any cases, if it is the OU process, the heat kernel is simply written as follows, p(xt, t | x0, 0) = N (xt | √ 1− σ2t x0, σ2t I), where σ2t = 1− exp ( − ∫ t 0 βt′dt ′ ) . (8) Hereafter, we denote the noise variance by νt := σ2t . (In some literature, the signal level αt :=√ 1− σ2t is used as a basic parameter instead of the variance.) This model is referred to as the variance-preserving (VP) model by Song et al. (2020b). It has good properties such as the scale of data ‖xt‖2 is almost homogeneous, which is advantageous in neural models. However, the variance exploding (VE) model (Song et al., 2020b) in which the norm increases is also practicable, and the theory can be developed in a similar manner. Training Objective: In diffusion-based generative models, one estimates the score function ∇xt log p(xt, t) = ∇xt logEp(x0,0)[p(xt, t | x0, 0)] by a neural network Sθ(xt, t). This sort of learning has been referred to as the score matching (Hyvärinen & Dayan, 2005; Vincent, 2011). However, the exact evaluation of this training target is clearly intractable because of the expectation Ep(x0,0)[·], so it has been common to consider a Variational Bayesian surrogate loss; Ho & Salimans (2021) showed that the following loss function approximates the negative ELBO, L := E[‖−√νt∇xt log p(xt, t | x0, 0)− Sθ(xt, t)‖22] = E[‖xt− √ 1−νtx0√ νt − Sθ(xt, t)‖22] (9) = E[‖w − Sθ( √ 1− νtx0 + √ νtw, t)‖22], (10) where the expectation in eq. (10) is taken w.r.t. x0 ∼ D, w ∼ N (0, I), and t ∼ Uniform([0, T ]). Some variants of the score matching objectives are also studied. For example, Chen et al. (2020) reported that the L1 loss gave better results than the L2 loss in speech synthesis. Also, Kingma et al. (2021) argued that the weighted loss with SNR-based weights improves the performance. It should be noted that the above loss function will actually be very close to the ideal score matching loss function in practice, where the probability is not conditioned on x0, i.e., Lideal = E[‖− √ νt∇xt log p(xt, t)− Sθ(xt, t)‖22]. (11) This is because there almost always exists a point x0 on the data manifold such that∇xt log p(xt, t) ≈ ∇xt log p(xt, t | x0, 0) holds with very high accuracy in very high-dim cases, because of the wellknown “log-sum-exp ≈ max” law. For more details, see § 3.3 and § A. Sampling Schemes for R-SDE and PF-ODE: Thus obtained Sθ(xt, t) is expected to finely approximate −√νt∇xt log p(xt, t), and we may use it in eq. (5). One of the simplest numerical schemes for solving SDEs is the Euler-Maruyama method (Maruyama, 1955, Theorem. 1) as follows, and many diffusion generative models are actually using it. Euler-Maruyama: xt−h ← xt − hf̄](xt, t) + √ hg(t)w, where w ∼ N (0, I) (12) where h > 0 is the step size. The error of the Euler-Maruyama method is the order of O( √ h) in general, though it is actually O(h) in our case; this is because ∇xtg(t) = 0. As a better solver for the R-SDE, the Predictor-Corrector (PC)-based sampler was proposed in (Song et al., 2020b). The PC sampler outperformed the Predictor-only strategy, but it requires many NFEs in the correction process, so we will exclude it in our discussion. Another R-SDE solver is the one proposed by Jolicoeur-Martineau et al. (2021), whose NFE per refinement step is 2. On the other hand, there are also deterministic samplers for PF-ODE eqs. 
(5), (7) as follows, Euler: xt−h ← xt − hf̄[(xt, t) (13) Runge-Kutta: xt−h ← xt − h ∑m i=1 biki, where ki = f̄[(xt − h ∑i−1 j=1 aijkj , t− hci) (14) where {aij}, {bi}, {ci} are coefficients of the Runge-Kutta (RK) method (see § E.5). The error of the Euler method is O(h), and that of the RK method is O(hp), p ≤ m in general (Press et al., 2007, § 16). Another deterministic sampler is DDIM (Song et al., 2020a, Eq. (13)), and is also understood as a PF-ODE solver (Salimans & Ho, 2022). Its NFE per step is only 1, and is capable of efficiently generate samples. DDIM: xt−h ← αt−hαt xt + ( σt−h − αt−hαt σt ) Sθ(xt, t). (15) In addition, as a concurrent work as ours, Lu et al. (2022) proposed the DPM-solver, which is based on the Taylor expansion of PF-ODE. However, as the gradient is evaluated using several different points, the NFE per step is greater than 1 in general. Liu et al. (2022) proposed a sampler based on the linear multi-step method, in which the NFE/step is reduced to 1 except initial 3 steps. Another PF-ODE solver is the DEIS (Zhang & Chen, 2022) which is based on the exponential integrator with some non-trivial approximations such as the polynomial interpolation of score function. Other techniques that aimed to make sampling faster include as follows. Song & Ermon (2020) proposed a variety of techniques to accelerate the sampling. Watson et al. (2021) proposed a DP-based optimization method to tune noise schedules for faster sampling. Luhman & Luhman (2021) and Salimans & Ho (2022) proposed distilling the pretrained teacher model to a student model that can predict teacher’s several steps in a single step, which is efficient during the sampling but extra training for distillation is required. Bao et al. (2022a;b) derived some analytic expressions of reverse dynamics to enable faster sampling. 3 PROPOSED METHOD: QUASI-TAYLOR SAMPLERS 3.1 MOTIVATION: HIGHER-ORDER STRAIGHTFORWARD SOLVERS FOR R-SDE AND PF-ODE As mentioned above, DDIM already exists as an efficient solver for PF-ODE, but it can only be considered a PF-ODE solver up to first-order terms (Song et al., 2020a; Salimans & Ho, 2022), and it would not be clear enough whether it can be considered a higher-order solver for PF-ODE. Some other techniques (Lu et al., 2022; Liu et al., 2022; Zhang & Chen, 2022) were designed as higher-order PF-ODE solvers, though their derivations are rather sophisticated and less simple. Since PF-ODE and R-SDE provide the basis for the diffusion generative models, it would be beneficial to develop samplers that directly solve them through intuitive and straightforward arguments. From these motivations, we propose a simple but efficient sampler based on the Taylor expansion, a very basic technique that is familiar to many researchers and practitioners. In general, Taylor methods are not very popular as numerical schemes because they require higher-order derivatives, which are not always tractable. However, in diffusion models, the derivatives are easily and effectively evaluated, albeit approximately. The validity of this approximation requires some consideration (see § A, § B), but once accepted, an efficient sampler can be derived simply by substituting this approximation formula into the Taylor series. This section describes the details of the idea, and derives solvers for both PF-ODE and R-SDE. Entire sampling procedures are summarized in § F. 
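For later reference, the following is a minimal sketch (ours) of a single refinement step of the two deterministic baselines reviewed in § 2, the Euler update eq. (13) and the DDIM update eq. (15). Here score_fn stands for a trained network Sθ(x, t), and beta and nu are the schedule functions; these names are assumptions of this sketch, not APIs from the paper's implementation.

import numpy as np

def euler_step(x, t, h, score_fn, beta, nu):
    # PF-ODE drift of eq. (7) for the VP model: f = -(beta/2) x, g = sqrt(beta),
    # with the score replaced by -score_fn(x, t) / sqrt(nu(t))
    f_flat = -0.5 * beta(t) * x + 0.5 * beta(t) / np.sqrt(nu(t)) * score_fn(x, t)
    return x - h * f_flat                      # eq. (13)

def ddim_step(x, t, h, score_fn, nu):
    rho = np.sqrt((1.0 - nu(t - h)) / (1.0 - nu(t)))
    mu = np.sqrt(nu(t - h)) - rho * np.sqrt(nu(t))
    return rho * x + mu * score_fn(x, t)       # eq. (15)

A full sampler starts from xT ∼ N(0, I) and applies one of these steps repeatedly for t = T, T − h, …, h.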
3.2 TAYLOR SCHEME FOR ODE AND ITÔ-TAYLOR SCHEME FOR SDE Taylor Scheme for Deterministic Systems For simplicity, we consider the 1-dim case here, but we can easily generalized it to multidimensional cases. (See § E.1.1.) Given a ODE ẋt = a(xt, t), where the function a is sufficiently smooth, then we can consider the Taylor expansion of it, using a differential operator L[ := ( ∂t + a(t, xt)∂xt ) . We can write the Taylor expansion of the path xt as follows. Ignoring o(hp) terms of the series, we obtain a numerical scheme of order p. xt+h = xt + ha(xt, t) + h2 2! L[a(xt, t) + h3 3! L2[a(xt, t) + · · · . (16) Itô-Taylor Scheme for Stochastic Systems In stochastic systems, the Taylor expansion requires modifications because of the relation E[dB2t ] = dt. If xt obeys a stochastic system dxt = a(xt, t)dt+ b(xt, t)dBt, then the path is written in a stochastic version of Taylor-like series, which is often called the Itô-Taylor expansion, a.k.a. Wagner-Platen expansion (Platen & Wagner, 1982);(Kloeden et al., 1994, § 2.3.B);(Särkkä & Solin, 2019, § 8.2). The Itô-Taylor expansion is based on the following differential operators L], G], which are based on Itô’s formula (Itô, 1944). L] := ∂t + a(x, t)∂x + 1 2 b(x, t)2∂2x, G] := b(x, t)∂x (17) In (Kloeden & Platen, 1992), a number of higher order numerical schemes for SDEs based on the Itô-Taylor expansion are presented. One of the simplest of them is as follows. See also § E.1.2. Theorem 1 (Kloeden & Platen (1992, § 14.2): An Itô-Taylor scheme of weak order β = 2). Let xt obeys the above SDE, and let the differential operators L], G] be given by eq. (17). Then, the following numerical scheme weakly converges with the order of β = 2 (see § E.4). Furthermore, in a special case where G2]b ≡ 0, the strong γ = 1.5 convergence is also guaranteed (Kloeden & Platen, 1992, § 10.4). xt+h ← xt + ha+ w̃tb+ w̃2t − h 2 G]b+ h2 2 L]a+ (w̃th− z̃t)L]b+ z̃tG]a (18) where w̃t = √ hwt, z̃t = h √ hzt are correlated Gaussian random variables, and wt, zt are given by wt = u1 and zt = 12u1 + 1 2 √ 3 u2, where u1, u2 ∼ N (0, 1) (i.i.d.). The notations a, L]a, etc. are the abbreviations for a(xt, t), (L]a)(xt, t), etc. 3.3 SINGLE POINT APPROXIMATION OF THE SCORE FUNCTION Before proceeding, let us introduce the single point approximation of score function that ∇xt log p(xt, t) almost certainly has a some point x0 on the data manifold such that the following approximation holds, ∇xt log p(xt, t) = ∇xt log ∫ p(xt, t | x0, 0)p(x0, 0)dx0 ≈ ∇xt log p(xt, t | x0, 0). (19) To date, this approximation has often been understood as a tractable variational surrogate. However, the error between the integral and the single point approximation is actually very small in practical scenarios. More specifically, the following facts can be shown under some assumptions. 1. The relative L2 distance between ∇xt log p(xt, t) and ∇xt log p(xt, t | x0, 0) is bounded above by √ (1− νt)/νt for any point x0 on the “data manifold” in practical scenarios. 2. When the noise level is low νt ≈ 0, and the data space is sufficiently high-dimensional, the distant points far from xt do not contribute to the integral. If the data manifold is locally a k-dim subspace of the entire d-dim data space, where 1 k d, then the relative L2 distance is bounded above by around 2 √ k/d. Of course, the single point approximation is not always valid. In fact, the approximation tends to break down when the noise level νt is around 0.9 (SNR = (1− νt)/νt is around 0.1). 
In this region, the single point approximation can deviates from the true gradient by about 20% in some cases. Conversely, however, it would be also said that the error is as small as this level even in the worst empirical cases. For more details on this approximation, see § A. 3.4 IDEAL DERIVATIVE SUBSTITUTION In order to adopt the above Taylor schemes to our problem setting where the base SDE is eq. (5), and f̄], f̄[ are given by eqs. (6), (7), we need to consider the following differential operators. Note that the time evolves backward in time in our case, the temporal derivative should be −∂t, L[ = −∂t − ( f̄[(xt, t) · ∇xt ) , L] = −∂t − ( f̄](xt, t) · ∇xt ) + βt 2 ∆xt , G] = √ βt (1 · ∇xt) , where f̄[(xt, t) = − βt 2 xt + βt 2 √ νt Sθ(xt, t), f̄](xt, t) = − βt 2 xt + βt√ νt Sθ(xt, t). (20) It is not easy in general to evaluate expressions involving such many derivatives. Indeed, for example, L[(−f̄[) has the derivatives of the learned score function, viz. ∂tSθ(xt, t) and (• · ∇xt)Sθ(xt, t), which are costly to evaluate exactly, whether in approaches based on finite differences (as in (Lu et al., 2022)), back-propagation, or the JAX paradigm (Bradbury et al., 2018), because they eventually require extra evaluation of a deeply nested function other than Sθ(xt, t), and extra memory consumption. Fortunately, however, by using the trick which the authors call the “ideal derivative substitution", we may write all of the derivatives as a simple combination of known values, only consisting of xt,Sθ(xt, t), νt, βt and derivatives of βt, and no extra computation is needed. Since the score function has a single point approximation eq. (19) we may assume that the derivatives should ideally hold following equalities. For derivation, see § B.1. Conjecture 1 (Ideal Derivatives). Under assumptions in § A — i.e. the data space Rd is sufficiently high dimensional d 1, the data manifoldM⊂ Rd is also sufficiently high dimensional but much smaller than the entire space (1 dimM d),M is bounded,M is sufficiently smooth locally, and the variance parameter νt is close to 0 or 1; — then it is likely that the following approximations hold, where a ∈ Rd is an arbitrary vector. We call them the “ideal derivatives”. (a · ∇xt)Sθ(xt, t) = 1√ νt a, −∂tSθ(xt, t) = − βt 2 √ νt ( xt − Sθ(xt, t)√ νt ) . (21) To confirm the accuracy of this approximation, we compared empirical and ideal derivatives using MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky, 2009). As a result, it was confirmed that the approximation of spatial derivative, i.e. (a · ∇), is usually very accurate; the cosine similarity between the empirical and ideal derivatives is nearly always > 0.99 (Figure 10). On the other hand, for the time derivative ∂t, it was confirmed that it is quite accurate when the time parameter t (and the variance νt) are small, but the error increases when the time parameter t (and the variance νt) become larger (Figure 9). See § B.2 for more details. 3.5 QUASI-TAYLOR AND QUASI-ITÔ-TAYLOR SCHEMES WITH IDEAL DERIVATIVES As we can see in § B.2, the ideal derivative approximation is sometimes very accurate while sometimes not. In any case, however, the error in the ideal derivative only affects the second or higher order terms of Taylor series, and it will not be the dominant error in the whole. 
As there is an overall correlation between the true and ideal derivatives, the advantages will outweigh the disadvantages on average, and we can regularly use this approximation on a speculative basis, even though there exist some cases where the approximation is not accurate. If we accept the ideal derivative approximation, we can formally compute the symbolic expressions for the derivatives L[(−f̄[), L](−f̄]), L](g), G](−f̄]) and G](g) that appear in the Taylor and ItôTaylor series by routine calculations, which can be easily automated by computer algebra systems such as SymPy (Meurer et al., 2017) as shown in § B.3. By substituting thus obtained symbolic expressions into the above Taylor series, we can derive Taylor schemes for both PF-ODE and R-SDE as follows. Algorithm 1 (Quasi-Taylor Sampler with Ideal Derivatives for PF-ODE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ [ t,hxt + µ [ t,hSθ(xt, t)/ √ νt,where (22) ρ[t,h = 1 + βth 2 + h2 4 ( β2t 2 − β̇t ) + h 3 4 ( β3t 12 − βtβ̇t 2 + β̈t 3 ) + · · · , (23) µ[t,h = −βth2 + h 2 4 ( β̇t − β 2 t 2νt ) + h 3 4 ( β3t (−ν 2 t+3νt−3) 12ν2t + βtβ̇t2νt − β̈t 3 ) + · · · . (24) Using terms up to O(h2), the sampler will have 2nd-order convergence (henceforth referred to as Taylor 2nd), and using terms up to O(h3), the sampler will 3rd-order convergent (similarly, Taylor 3rd). If we use up to the O(h) terms, the algorithm is same as the Euler method. Algorithm 2 (Quasi-Itô-Taylor Sampler with Ideal Derivatives for R-SDE). Starting from a Gaussian noise xT ∼ N (0, I), iterate the following refinement steps until x0 is obtained. xt−h = ρ ] t,hxt + µ ] t,hSθ(xt, t)/ √ νt + n ] t,h,where (25) ρ]t,h = 1 + βt 2 h+ h2 4 ( β2t 2 − β̇t ) , µ]t,h = −βth+ β̇th 2 2 , (26) n]t,h = √ βt √ hwt + h 3/2 ( − β̇t 2 √ βt (wt − zt) + β 3/2 t (νt−2) 2νt zτ ) . (27) The Gaussian variables wt and zt have dimension-wise correlations, and each dimension is sampled similarly to Theorem 1. Computation Cost: At first glance, these algorithms may appear to be very complex. However, the computational complexity hardly increases compared to the Euler or Euler-Maruyama methods, because almost all of the computational cost is accounted for by the neural network Sθ(xt, t), and the costs for scalar values ρ•t,h, µ • t,h and noise generation n ] t,h are almost negligible. It should also be noted that these scalar values can be pre-computed and stored in the memory before synthesis. Thus the computational complexity of these methods are practically equal to Euler, Euler-Maruyama, and DDIM methods. Error from the Exact Solution of PF-ODE: The numerical error of the Quasi-Taylor method from the exact solution increases depending on the following factors: (1) The truncation error of the Taylor series in each step, i.e. O(hp+1), (2) The number of the steps i.e. O(1/h), (3) The training and generalization error of the score function, i.e. ≈ L, and (4) The average error between the true and ideal derivatives of the score function =: ‖δ‖. If the factors 3 and 4 could be zero, then the numerical error is the order of O(hp). Otherwise, the expected numerical error is roughly evaluated as follows, error = O ( h−1(hL+ h2(L+ ‖δ‖) + h3(L+ ‖δ‖) + · · ·+ hp+1) ) = O ( L+ h(L+ ‖δ‖) + h2(L+ ‖δ‖) + · · ·+ hp ) . (28) That is, the error of Euler method is O(L+ h), the Heun method (2nd order Runge-Kutta) will be O(L+hL+h2), and the Taylor-2nd method is O(L+h(L+‖δ‖)+h2). 
As long as L, ‖δ‖ > 0, the predominant O(h) term will not disappear. Therefore, the overall order of the error will not decrease even if we increase the order of Taylor series greater than p ≥ 3. Nevertheless, beyond such an order evaluation, specific coefficients in higher order terms can still affect the performance, which should be validated empirically. 4 IMAGE SYNTHESIS EXPERIMENT Experimental Configuration: In this section, we conduct experiments to verify the effectiveness of the methods developed in this paper. Specifically, we compare the performance of the Euler scheme eq. (13), Taylor 2nd & Taylor 3rd (Alg. 1), DDIM (Song et al., 2020a), and the Runge Kutta methods (Heun and RK4 § E.5; these are less efficient than others because of NFEs per step) for PF-ODE, as well as the Euler-Maruyama scheme eq. (12) and Itô-Taylor (Alg. 2) for R-SDE. The datasets we used were CIFAR-10 (32× 32) (Krizhevsky, 2009) and CelebA (64× 64) (Liu et al., 2015). The network structure was not novel but was based on an existing open source implementation; we used the “NCSN++” implemented in the official PyTorch code by Song et al. (2020b). The network consisted of 4 levels of resolution, with the feature dimension of each level being 128 → 128 → 256→ 256→ 256. Each level consisted of BigGAN-type ResBlocks, and the number of ResBlocks in each level was 8 (CIFAR-10) and 4 (CelebA). The loss function we used was the unweighted L2 loss similarly to (Ho et al., 2020). The optimizer was Adam (Kingma & Ba, 2014). The machine used for training was an in-house Linux server dedicated to medium-scale machine learning training with four GPUs (NVIDIA Tesla V100). The batch size was 256. The number of training steps was 0.1 M steps, and the training took about a day for each dataset. The noising schedule was also the same as the existing one, the default configuration of VP-SDE (Song et al., 2020b): βt = 0.1 + 19.9t and νt = 1− exp(−0.1t−9.95t2) eq. (76). The integration duration was T = 1, and the step size h was constant, i.e. h = T/N where N is the number of refinement steps. As a quality assessment metric, we used the Fréchet Inception Distance (FID) (Heusel et al., 2017). To evaluate FIDs, we used the pretrained Inception v3 checkpoint (Szegedy et al., 2016), and resized all images to 299× 299× 3 by bilinear interpolation before feeding them to the Inception network. For each condition, 10,000 images were randomly generated to compute the FID score. Note that in this experiment, the computational resources for training were limited, and training was stopped before it fully converged (only 0.1 M steps, while in some other papers the number of training steps was e.g. 1.3 M steps in (Song et al., 2020b)). Therefore, it would be necessary to observe relative comparisons between samplers rather than directly comparing these FID value to those presented in other papers. Results: Figure 1 and Figure 2 show random samples for each sampler. More examples are available in § G. The deterministic samplers considered in this paper generated plausible images much faster than the vanilla Euler-Maruyama sampler. Figure 3a and Figure 3b reports the FID scores. From these figures, the following observations can be made. First, the proposed Quasi-Taylor methods have about the same or slightly better than DDIM. The reason for this is discussed in the next section § 5. We also found that the Runge-Kutta methods reduces FID in fewer steps overall. However, they also hit bottom faster. 
This may be due to the effect of the singularity at the time origin (see § D) in the final step. (This can be seen in Figure 16. In the second right column, the Runge-Kutta methods produce images similar to the other deterministic samplers, but the rightmost ones seem to be slightly noisier than the others). Even though the ideal derivatives are only approximations and contain some errors, the convergence destinations of Quasi-Taylor methods were almost the same as the Runge-Kutta methods. This suggests that the error in the ideal derivatives is actually hardly a problem, because in regions where the approximation error is large, the state xt is noisy to begin with (e.g. left 2/3 figures in Figure 16), and the approximation error is negligible compared to the noise that was originally there. The proposed stochastic sampler (Itô-Taylor) also showed sufficiently competitive results, in terms of both FID scores and visual impression. Comparison of the figures in § G (e.g. Figure 21) confirms that the Itô-Taylor method empirically reaches almost the same target as Euler-Maruyama method much more accurately, and it could be expected to be a safe alternative to Euler-Maruyama method when stochastic sampling is important. 5 DISCUSSION: RELATIONSHIP WITH DDIM In the above experiment, the performance of the proposed Quasi-Taylor methods are found to be almost equivalent to that of DDIM. In fact, despite having distinctly different derivation logics, the proposed method and DDIM actually agree, at least up to the 3rd order terms of h. Therefore, it is not surprising the results are similar; and the smaller h is, the closer the results are. This can be quickly verified by doing a Taylor expansion of the coefficients of eq. (15), i.e., αt−hαt and (σt−h − αt−h αt σt), w.r.t. h. Although it is tedious to perform this calculation by hand, the computer algebra systems e.g. SymPy immediately calculate it. For this computation, see § C. This finding that truncating DDIM at the 2nd or 3rd order of h yields exactly the same algorithms as the proposed Quasi-Taylor methods may be a useful insight for DDIM users, even if it does not lead them to switch the regular sampler from DDIM to Quasi-Taylor. That is, it offers an option of truncating the higher-order terms of DDIM. 6 CONCLUDING REMARKS This paper proposed a Taylor-expansion approach for diffusion generative models, particularly the Probability Flow ODE (PF-ODE) and the reverse-time SDE (R-SDE) solvers. The assumptions to derive our sampler were minimalistic, and the derivation process was straightforward. We just substituted the derivatives in the Taylor series by ideal ones. The obtained Quasi-Taylor and Quasi-Itô-Taylor samplers performed better than or on par with DDIM and Runge-Kutta methods. This fact implicitly supports the validity of our approximations. Conversely, if we could find some examples where the Quasi-Taylor methods, DDIM and RK methods gave decisively different results, we might be able to gain a deeper understanding of the structure of data manifold and the fundamentals of diffusion models by investigating the causes of discrepancy. Reproducibility Statement Pseudocodes of the proposed methods are available in § F, and the derivation of the proposed method is described in § B.1, § B.3. The experiment is based on open source code with minimal modifications to match the proposed method, and all the data used in this paper are publicly available. Experimental conditions are elaborated in § 4. 
Ethics Statement As a final note, negative aspects of generative models are generally pointed out, such as the risk of reproducing bias and discrimination in training data and the risk of being misused for deep fakes. Since this method only provides a solution to existing generative models, it does not take special measures against these problems. Maximum ethical care should be taken in the practical application of this method. A.3 COMPARISON OF THE EMPIRICAL SCORE FUNCTION AND THE SINGLE POINT APPROXIMATION Let us empirically validate the accuracy of single point approximation using real data as follows, • D = {MNIST (LeCun et al., 2010) 60,000 samples}, • D = {CIFAR-10 (Krizhevsky, 2009) 50,000 samples}. Since the true score function cannot be determined without knowing the true density (which will be possible with synthetic data, but discussing such data will not be very interesting here), the empirical score function was calculated using the real data D above as follows, True Score = ∇ log p(xt, t) = Ep(x0)[q(x0 | xt)∇ log p(xt, t | x0, 0)] ≈ 1|D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)] =: Empirical Score. (45) The evaluation of empirical score function using the entire dataset is unrealistic if the dataset D is large, but it is feasible if D is a small dataset like MNIST and CIFAR-10. In order to evaluate the accuracy of single point approximation, we evaluated following three metrics. • Relative L2 error between the empirical score function and∇ log p(xt, t | x0, 0), • Cosine similarity between the empirical score function and∇ log p(xt, t | x0, 0), • Entropy of q(x0 | xt). Figure 6 shows the relative L2 distance, for both datasets. Figure 7 similarly show the distribution (random 10,000 trials) of the cosine similarity, and Figure 8 shows the entropy. Dashed curves indicate the bounds evaluated in eq. (31) and eq. (32). These figures show that the range of intermediate region between Phase (1) and Phase (2) will not have impact in practical situations since we do not evaluate the neural network Sθ(·, ·) in this range so many times (i.e., ᾱt ∼ 10−3 to 10−1 ⇔ νt ∼ 0.999 to 0.9). Moreover, the approximation accuracy is still very high even in this region. Furthermore, although MNIST and CIFAR-10 are quite “low-dimensional” for real-world images, approximations are established with such high accuracy. Therefore, it is expected to be established with higher accuracy for more realistic images. B ON THE IDEAL DERIVATIVE APPROXIMATION Thus, we can assume that the single point approximation almost always holds practically. −Sθ(xt, t)√ νt model≈ ∇xt log p(xt, t) almost equal≈ ∇xt log p(xt, t | x(i)0 , 0) = − xt − √ 1− νtx(i)0 νt . Therefore, we may also expect that the similar approximation will be valid for their derivatives. Of course, strictly speaking, such an expectation is mathematically incorrect. For example, let g(x) = f(x) + ε sinωx, then the difference g(x) − f(x) = ε sinωx goes to zero as ε → 0, but the difference of derivatives g′(x)− f ′(x) = εω cosωx does not if ω →∞ faster than 1/ε. If the error between them in the Fourier domain is written as E(ω) = G(ω) − F (ω), then the L2 error between the derivatives is ‖g′(x) − f ′(x)‖22 = ‖ωE(ω)‖22 × const (Parseval’s theorem). In other words, the single point approximation does not necessarily imply the ideal derivative approximation. If it is to be mathematically rigorous, it must be supported by other nontrivial knowledge on the data manifold. 
This nontrivial leap is the most important “conjecture” made in this paper and its theoretical background should be more closely evaluated in the future. B.1 DERIVATION OF THE “IDEAL DERIVATIVES” Because of the discussion in § A, the true score function ∇xt log p(xt, t) is finely approximated by a single point approximation ∇xt log p(xt, t | x0, 0). Now we may also assume that the derivatives of both will also be close. In this paper, we are interested in the Taylor expansion of the following form (see also § E.1.1), ψ(xh, h) = ψ(x0, 0) + ∞∑ k=1 hk k! (∂t + a(xt, t) · ∇xt)k ψ(xt, t) ∣∣∣∣ t=0 . (46) If the function ψ(xt, t) is separable in each dimension (i.e., ∂xiψj = 0 for i 6= j), the following relation holds, (a(xt, t) · ∇xt)ψ(xt, t) = a(xt, t) ∇xt ψ(xt, t), (47) where is the element-wise product or operation. If a(xt, t) is also separable in each dimension4 the Taylor series is formally rewritten as follows, ψ(xt, t) = ψ(x0, 0) + ∞∑ k=1 tk k! ( 1∂t + a(xt, t) ∂xt )k ψ(xt, t) ∣∣∣∣ t=0 (48) where ∂xt := ∇xt is the element-wise derivative operator. This is formally the same as the 1-dim Taylor series. Therefore, it is sufficient to consider the 1-dim Taylor series first, and parallelize each dimension later. Thus the derivatives we actually need are the following two. ∂xtSθ(xt, t) = ∇xt Sθ(xt, t), ∂tSθ(xt, t) = (1∂t) Sθ(xt, t). (49) B.1.1 SPATIAL DERIVATIVE ∂xtSθ(xt, t) := ∇xt Sθ(xt, t) Let us first compute the spatial derivative of the conditional score function. (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = (∑ i ai∂xti ) xt − √ 1− νtx0√ νt 4In general, (a · ∇)2 = ( ∑ i ai∂i) 2 = ( ∑ i ai∂i)( ∑ j aj∂j) = ∑ i ai ∑ j(∂iaj + aj∂i∂j). If a is separable in each dimension, the ∂iaj(i 6= j) terms vanish, and (a · ∇)2 = ∑ i(ai∂iai + ∑ j aiaj∂i∂j). If the function ψ(xt, t) is separable in each dimension, then (a · ∇)2ψk = ∑ i(ai∂iai + ∑ j aiaj∂i∂j)ψk = (ak∂kak + a 2 k∂ 2 k)ψk. Thus we can formally write (a · ∇)2ψ = (a ∇ a + a a ∇ ∇) ψ = a (∇ a+ a ∇ ∇) ψ = a ∇ (a ∇) ψ = (a ∇ )2ψ = (a ∂x)2ψ. (Note that the operator (a · ∇) is scalar while (a ∂x) is d-dim vector.) We can similarly show (a · ∇)kψ = (a ∂x)kψ for k ≥ 3. = 1√ νt (∑ i ai∂xti ) (xt − √ 1− νtx0)1 ...(∑ i ai∂xti ) (xt − √ 1− νtx0)d = 1√ νt (∑ i ai∂xti ) (xt 1 −√1− νtx01) ...(∑ i ai∂xti ) (xt d −√1− νtx0d) = 1√ νt ( a1∂xt1 ) (xt 1 −√1− νtx01) ...( ad∂xtd ) (xt d −√1− νtx0d) = 1√ νt a1... ad = 1√ νt a = a 1√ νt 1. (50) Here, we used the notation xti to denotes the i-th component of a vector xt. Note that up to this point in the discussion, there have been no approximations, but strict ones. Now let us consider the approximation. Because of the single point approximation, we may assume that the derivative of the integrated score function will also be approximated by the derivative of the conditional score function, i.e., (a · ∇xt)(− √ νt∇xt log p(xt, t)) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)). (51) As the neural network Sθ(xt, t) is trained so that it approximates the integrated score function, we can also assume the following relation, (a · ∇xt)Sθ(xt, t) ≈ (a · ∇xt)(− √ νt∇xt log p(xt, t | x0, 0)) = 1√ νt a. (52) Thus we have obtained the ideal spatial derivative of the neural network. We can also formally write the spatial derivative as follows using the above notation, a (∂xtSθ(xt, t)) = a 1√ νt 1. (53) We can also write it as ∂xtSθ(xt, t) = 1√ νt 1. (54) B.1.2 TIME DERIVATIVE −∂tSθ(xt, t) Next, let us compute −∂t(− √ νt∇xt log p(xt, t | x0, 0)). 
During the computation, x0 is replaced by the relation x0 = 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) . (55) We also use the following relations between νt, βt, which is immediately obtained from the definition of νt, ν̇t = (1− νt)βt. (56) Using the above information, we may compute the temporal derivative of the conditional score function as follows. − ∂t(− √ νt∇xt log p(xt, t | x0, 0)) = −∂t xt − √ 1− νtx0√ νt = − 1√ νt ( 1 2 ν̇t(1− νt)−1/2x0 ) − (xt − √ 1− νtx0) ( −1 2 ν̇tν −3/2 t ) = − ν̇t 2ν 3/2 t ( νt√ 1− νt x0 − (xt − √ 1− νtx0) ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt x0 ) = − ν̇t 2ν 3/2 t ( −xt + 1√ 1− νt 1√ 1− νt (xt + νt∇xt log p(xt, t | x0, 0)) ) = − ν̇t 2ν 3/2 t (( −1 + 1 1− νt ) xt + 1 1− νt (νt∇xt log p(xt, t | x0, 0)) ) = − 1 2ν 3/2 t ν̇t 1− νt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − 1 2ν 3/2 t βt (νtxt + νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) . (57) (Note that this calculation is exact, and no approximation is injected.) Because of the single point approximation, we may assume −∂t(− √ νt∇xt log p(xt, t)) ≈ −∂t(− √ νt∇xt log p(xt, t | x0, 0)) = − βt 2 √ νt (xt +∇xt log p(xt, t | x0, 0)) ≈ − βt 2 √ νt (xt +∇xt log p(xt, t)) , (58) and therefore, we can also assume that the temporal derivative of the neural network is approximated as −∂tSθ(xt, t) ≈ − βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) . (59) The “derivatives" have some good points. For example, the partial derivatives commute, ∂xt∂tSθ(xt, t) = ∂t∂xtSθ(xt, t). (60) B.2 COMPARISON OF THE EMPIRICAL SCORE DERIVATIVES AND IDEAL DERIVATIVES Let us empirically validate that idela approximation using real data similarly as above. However, since the equations will become very complicated if we evaluate the exact empirical score derivatives, we instead used finite differences as the ground truths. That is, let S(x, t) be the routine that computes the empirical score function as follows, S(x, t) = − √ νt |D| ∑ x0∈D [q(x0 | xt)∇ log p(xt, t | x0, 0)], (61) and we evaluated the empirical score derivatives by the finite differences as follows5, Empirical t Deriv: ∂tS ≈ S(xt, t+ ε)− S(xt, t) ε (62) Empirical xt Deriv: (a · ∇xt)S ≈ S(xt + εa, t)− S(xt, t) ε , where a ∼ N (0, I). (63) where ε should be a sufficiently small value, and we used ε = 10−3 here. We compared these empirical derivatives with the ideal derivatives using MNIST and CIFAR-10. Ideal t Deriv: ∂tSθ = βt 2 √ νt ( xt − 1√ νt Sθ(xt, t) ) = βt 2 √ νt ( xt − xt − √ 1− νtx0 νt ) Ideal xt Deriv: (a · ∇xt)Sθ = 1√ νt a As the ideal derivatives require the specific function forms of diffusion and variance schedules, we tested on following two noise schedules. Linear schedule We first tested on the linear schedule eq. (76), where β0 = 0.1 and β1 = 9.95. This is the same schedule as the one used in the main text. Figure 9 shows the relativeL2 error and the cosine similarity between the ideal t derivative eq. (21) and the empirical t derivative eq. (62), in which it is observed that they are very close when 0 / t / 0.5, while the approximation accuracy decreases as t increases. However, even in that case, there tends to be an overall positive correlation. It can also be observed that there is an error that seems to originate from the singularity of time origin when t ≈ 0. (See also § D.2.) For the x derivative (Figure 9), on the other hand, we can confirm that the errors between the ideal x derivative eq. (21) and empirical x derivative eq. (62) are generally very highly correlated, except around t ≈ 0.5. 
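The following sketch (ours, with a toy random "dataset" in place of MNIST or CIFAR-10) shows how such a comparison can be set up: it evaluates the empirical score eq. (61), with softmax posterior weights playing the role of q(x0 | xt), forms the finite differences eqs. (62)-(63), and compares them with the ideal derivatives eq. (21) by cosine similarity.

import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(-1.0, 1.0, size=(256, 16))    # toy stand-in for the dataset
b0, b1 = 0.1, 9.95                            # linear schedule, eq. (76)
nu = lambda t: 1.0 - np.exp(-b0 * t - b1 * t**2)
beta = lambda t: b0 + 2.0 * b1 * t

def S_emp(x, t):
    # Empirical -sqrt(nu_t) * score, eq. (61); q(x0 | xt) realized as softmax weights
    v = nu(t)
    logw = -np.sum((x - np.sqrt(1.0 - v) * D) ** 2, axis=1) / (2.0 * v)
    w = np.exp(logw - logw.max()); w /= w.sum()
    cond = -(x - np.sqrt(1.0 - v) * D) / v     # conditional scores, grad log p(xt | x0, 0)
    return -np.sqrt(v) * (w[:, None] * cond).sum(axis=0)

t, eps = 0.3, 1e-3
x = np.sqrt(1.0 - nu(t)) * D[0] + np.sqrt(nu(t)) * rng.standard_normal(16)
a = rng.standard_normal(16)

dt_emp = (S_emp(x, t + eps) - S_emp(x, t)) / eps            # eq. (62)
dx_emp = (S_emp(x + eps * a, t) - S_emp(x, t)) / eps        # eq. (63)

S = S_emp(x, t)
dt_ideal = beta(t) / (2.0 * np.sqrt(nu(t))) * (x - S / np.sqrt(nu(t)))  # ideal t derivative, eq. (21)
dx_ideal = a / np.sqrt(nu(t))                                           # ideal x derivative, eq. (21)

cos = lambda u, v: float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print("t-derivative cosine similarity:", cos(dt_emp, dt_ideal))
print("x-derivative cosine similarity:", cos(dx_emp, dx_ideal))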
Modified tanh schedule. We also tested on another noise schedule, the modified tanh schedule eq. (79), which does not have the singularity at the time origin. The parameters A, k were determined so that ν_0 = 0.001 and ν_1 = 0.999. Figure 11 and Figure 12 show the results. In this case, the overall trend is similar to the linear schedule, but we can observe that the singularity of the t derivative at the time origin is eliminated.

⁵ To verify the empirical x_t derivative, let us consider the simple case of a three-variable function f(x, y, z). As its total derivative is df = ∂_x f dx + ∂_y f dy + ∂_z f dz, we have f(x + a, y + b, z + c) − f(x, y, z) ≈ (a∂_x + b∂_y + c∂_z) f(x, y, z) for small a, b, c. Let a = εa′, b = εb′ and c = εc′; then f(x + εa′, y + εb′, z + εc′) − f(x, y, z) ≈ ε (a′∂_x + b′∂_y + c′∂_z) f(x, y, z). Therefore, we can write the spatial derivative as (a′∂_x + b′∂_y + c′∂_z) f(x, y, z) = lim_{ε→0} (1/ε) (f(x + εa′, y + εb′, z + εc′) − f(x, y, z)).

B.3 THE DERIVATIVES L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g)

The computation of the derivatives L[(−f̄[), L](−f̄]), L](g), G](−f̄]), G](g) does not require any particular nontrivial process. All we have to do is rewrite a term every time we encounter a derivative of S_θ(x_t, t) or ν_t, and the rest is at the level of elementary exercises in introductory calculus. To execute this symbolic computation, the use of computer algebra systems is a good option. It should be noted, however, that some implementation tricks to process such custom derivatives are required (in other words, the term-rewriting system should be customized). The results are shown below. Although these expressions appear complex at first glance, a code generation system can automatically generate code for such expressions.

L[(−f̄[)(x_t, t) = ( β_t²/4 − β̇_t/2 ) x_t + ( β̇_t/(2√ν_t) − β_t²/(4ν_t^{3/2}) ) S_θ(x_t, t) (64)
L](−f̄])(x_t, t) = ( β_t²/4 − β̇_t/2 ) x_t + ( β̇_t/√ν_t ) S_θ(x_t, t) (65)
G](−f̄])(x_t, t) = ( 1/2 − 1/ν_t ) β_t^{3/2} (66)
L]g(t) = −β̇_t/(2√β_t) (67)
G]g(t) = 0. (68)

We may also compute higher order derivatives, though we do not use them in this paper except L[L[(−f̄[),

L[L[(−f̄[)(x_t, t) = ( β_t³/8 − 3β_tβ̇_t/4 + β̈_t/2 ) x_t + ( β_t³(−ν_t² + 3ν_t − 3)/(8ν_t^{5/2}) + 3β_tβ̇_t/(4ν_t^{3/2}) − β̈_t/(2√ν_t) ) S_θ(x_t, t) (69)
L]L](−f̄])(x_t, t) = ( β_t³/8 − 3β_tβ̇_t/4 + β̈_t/2 ) x_t − ( (β_t³ + 4β̈_t)/(4√ν_t) ) S_θ(x_t, t)
L]G](−f̄])(x_t, t) = (√β_t/ν_t²) ( ν_t(2β_t² + 3β̇_t)/2 − β_t² − 3ν_t²β̇_t/4 )
G]L](−f̄])(x_t, t) = √β_t ( β_t²/4 − β̇_t/2 + β̇_t/ν_t )
G]G](−f̄])(x_t, t) = 0
L]L]g(t) = (2β_tβ̈_t − β̇_t²)/(4β_t^{3/2})
L]G]g(t) = 0
G]L]g(t) = 0
G]G]g(t) = 0.

As we can see, no factors other than integers, x_t, S_θ(x_t, t), ν_t, β_t and derivatives of β_t appear. This is also true for higher order derivatives, which can be easily shown.

SymPy Code Snippet for Automatic Symbolic Computation of Derivatives

The following code snippet is a minimalistic example of SymPy code to compute the above derivatives using the customized derivative method. We used SymPy 1.11 to test the following code snippet.
from sympy import Function, symbols, sqrt, simplify

x, t = symbols('x t')     # x, t
B = Function('beta')      # beta_t

# define customized derivatives of nu_t
class nu(Function):
    def fdiff(self, argindex=1):
        t, = self.args
        return (1 - nu(t)) * B(t)   # nu'_t = (1 - nu_t) beta_t

# define customized derivatives of S_theta(x, t)
class S_theta(Function):
    def fdiff(self, argindex=1):
        x, t = self.args
        if argindex == 1:    # d/dx
            d = 1 / sqrt(nu(t))
        elif argindex == 2:  # d/dt
            d = (x - S_theta(x, t) / sqrt(nu(t))) * B(t) / (2 * sqrt(nu(t)))
        return d

# define f̄[
class f_flat(Function):
    @classmethod
    def eval(cls, x, t):
        return -B(t) * x / 2 + S_theta(x, t) * B(t) / (2 * sqrt(nu(t)))

# define differential operator L[
class L_flat(Function):
    @classmethod
    def eval(cls, fxt):
        return -fxt.diff(t) - f_flat(x, t) * fxt.diff(x)

# show each derivative
print(f_flat(x, t))
print(simplify(L_flat(f_flat(x, t))))           # L[ f̄[(x_t, t); see eq. (64)
print(simplify(L_flat(L_flat(f_flat(x, t)))))   # L[L[ f̄[(x_t, t); see eq. (69)
# we can similarly define f̄], L], G] and compute other derivatives.

The result will look like (writing β̇, β̈ for the derivatives that SymPy prints as evaluated Subs expressions)

[Out 1] −x β(t)/2 + S_θ(x, t) β(t)/(2√ν(t))
[Out 2] −x β²(t)/4 + x β̇(t)/2 + S_θ(x, t) β²(t)/(4 ν^{3/2}(t)) − S_θ(x, t) β̇(t)/(2√ν(t))
[Out 3] −x β³(t)/8 + 3x β(t)β̇(t)/4 − x β̈(t)/2 + S_θ(x, t) β³(t)/(8√ν(t)) − 3 S_θ(x, t) β³(t)/(8 ν^{3/2}(t)) + 3 S_θ(x, t) β³(t)/(8 ν^{5/2}(t)) − 3 S_θ(x, t) β(t)β̇(t)/(4 ν^{3/2}(t)) + S_θ(x, t) β̈(t)/(2√ν(t))

and so on. Some additional coding techniques can further improve the readability of these expressions, but there is no need to go any deeper into such subsidiary issues here. The symbolic expressions thus obtained can be automatically converted into executable code in practical programming languages including Python and C++ using a code generator, though the authors hand-coded the obtained expressions in Python for the experiments in this paper.

C TRUNCATED DDIM IS EQUIVALENT TO THE QUASI-TAYLOR SAMPLER

Using SymPy, we can easily compute the Taylor expansion of a given function. For example, the following code

sympy.series(B(t+h), h, 0, 4)

yields a result like

β(t) + h β̇(t) + (h²/2) β̈(t) + (h³/6) β⃛(t) + O(h⁴).

Similarly, using the relation ν̇_t = (1 − ν_t)β_t, we can easily compute the Taylor expansion of ν_{t−h} as follows.

sympy.series(nu(t-h), h, 0, 3)

ν_{t−h} = ν(t) + h (β(t)ν(t) − β(t)) + h² ( β²(t)ν(t)/2 − β²(t)/2 − ν(t)β̇(t)/2 + β̇(t)/2 ) + O(h³)

Using this functionality of SymPy, we can easily compute the Taylor expansion of the DDIM (Song et al., 2020a). Let us recall that the DDIM algorithm is given by eq. (15); using our notation α = √(1 − ν) and σ = √ν, it can be written as follows,

DDIM: x_{t−h} ← ρ^{DDIM}_{t,h} x_t + μ^{DDIM}_{t,h} S_θ(x_t, t),  where  ρ^{DDIM}_{t,h} := √((1 − ν_{t−h})/(1 − ν_t))  and  μ^{DDIM}_{t,h} := √ν_{t−h} − √((1 − ν_{t−h}) ν_t/(1 − ν_t)).

Then using SymPy, the Taylor expansions of ρ^{DDIM}_{t,h} and μ^{DDIM}_{t,h} are computed as follows,

ρ^{DDIM}_{t,h} = 1 + (β_t/2) h − (h²/4)( β_t²/2 − β̇_t ) + (h³/4)( β_t³/12 − β_tβ̇_t/2 + β̈_t/3 ) + o(h³), (70)
√ν_t μ^{DDIM}_{t,h} = −(β_t/2) h + (h²/4)( β̇_t − β_t²/(2ν_t) ) + (h³/4)( −β_t³/12 + β_t³/(4ν_t) − β_t³/(4ν_t²) + β_tβ̇_t/(2ν_t) − β̈_t/3 ) + o(h³). (71)

Although it has been known that DDIM corresponds to the Euler method up to 1st order terms (Song et al., 2020a; Salimans & Ho, 2022), this expansion gives a better understanding of the higher order terms. That is, these are exactly equivalent to our deterministic Quasi-Taylor sampler eq. (23) and eq. (24) up to 3rd-order terms.
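As a cross-check, the expansion of the DDIM coefficients can be reproduced with the same custom-derivative machinery; the following is a small self-contained SymPy sketch along those lines (the variable names mirror the snippet above; this is an illustration, not the authors' code):

from sympy import Function, symbols, sqrt, series, simplify

t, h = symbols('t h', positive=True)
B = Function('beta')                     # beta_t, kept fully symbolic

class nu(Function):                      # custom derivative: nu'_t = (1 - nu_t) beta_t
    def fdiff(self, argindex=1):
        s, = self.args
        return (1 - nu(s)) * B(s)

# DDIM coefficients of x_t and of S_theta(x_t, t) in the alpha = sqrt(1 - nu), sigma = sqrt(nu) notation
rho = sqrt((1 - nu(t - h)) / (1 - nu(t)))
mu = sqrt(nu(t - h)) - rho * sqrt(nu(t))

# Expand in the step size h; compare with eq. (70) and eq. (71)
print(simplify(series(rho, h, 0, 3).removeO()))
print(simplify(series(sqrt(nu(t)) * mu, h, 0, 3).removeO()))

The order argument of series makes it easy to inspect as many terms as desired, which is how the term-by-term comparison in this section can be checked symbolically.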
This fact may suggest that the assumptions behind the DDIM derivation are logically equivalent to our assumptions of ideal derivatives. The advantage of the proposed Quasi-Taylor method is that we can decide, as a hyperparameter, at which order the Taylor expansion is truncated. On the other hand, DDIM automatically incorporates terms of much higher order, leaving no room for order tuning.

D ON THE NOISE SCHEDULE

D.1 BACKGROUND: PICARD-LINDELÖF THEOREM

Let us consider a 1-dim deterministic system ẋ(t) = a(x(t), t). It is well known that this ODE has a unique solution if a(x, t) is Lipschitz continuous w.r.t. x and continuous w.r.t. t (Picard-Lindelöf theorem). Otherwise, ODEs often behave less favorably. (Similar Lipschitz conditions are also required for SDEs.)

Example 1. The ODE ẋ = x², x(0) = 1 has the solution x = 1/(1 − t) when t < 1, and it blows up at t = 1. It is usually impossible to consider what happens after t > 1 in ordinary contexts.

Example 2. Another well-known example is ẋ = √x, x(0) = 0. It has a solution x = t²/4, but x ≡ 0 is also a solution. It actually has infinitely many solutions x = 0 (if t ≤ t_0), x = (t − t_0)²/4 (if t > t_0), where t_0 ≥ 0 is an arbitrary constant.

Example 3. Let us consider the following ODE,

ẋ = −(t − 1)/(1 − e^{−(t−1)²}) x,  x(0) = 1, (72)

which is a simplified model of the linear schedule eq. (76). The exact solution is as follows,

x = √(e − 1) / √(e^{(t−1)²} − 1), (73)

which diverges at t = 1. In this case, a(x, t) = −x (t − 1)/(1 − e^{−(t−1)²}) is not Lipschitz continuous, as the Taylor expansion of the denominator is 1 − e^{−(t−1)²} = (t − 1)² + O((t − 1)⁴), and a(x, t) is approximately −x/(t − 1) near t = 1.

In these cases, the coefficient a(·, ·) is not Lipschitz continuous. Even these seemingly simplest ODEs behave very complexly unless the coefficients are carefully designed. In the PF-ODE, the Lipschitz condition is written as follows,

Lip(f̄[) = | ∂_{x_t} ( (β_t/2) x_t − (β_t/(2√ν_t)) S_θ(x_t, t) ) | < ∞. (74)

Using the ideal derivative of S_θ(x_t, t), this condition translates to

Lip(f̄[) = | β_t (1 − 1/ν_t) | = | ν̇_t/ν_t | < ∞. (75)

D.2 SPECIFIC SCHEDULES

Including this point, the necessary conditions for a variance schedule ν_t can be summarized as follows.
1. ν_0 ≈ 0 so that the initial density p(x_0, 0) is close to the true data density.
2. ν_T ≈ 1 so that the terminal density p(x_T, T) is close to the Gaussian.
3. Sufficiently smooth so that β_t = −(d/dt) log(1 − ν_t) is well defined.
   • In addition, β_t should also be smooth so that the Taylor schemes can be used.
4. Monotonic (s < t ⟹ ν_s ≤ ν_t) to make β_t non-negative.
5. Preferably, the drift coefficient f̄[ should be Lipschitz continuous so that the PF-ODE has a unique solution, i.e., Lip(f̄[) ≈ |ν̇_t/ν_t| < ∞.

The following two scheduling functions, which are common in diffusion generative models, satisfy conditions 1, 2, and 4 above (the linear schedule also satisfies the 3rd condition),

Linear: ν_t = 1 − e^{−β_0 t − β_1 t²},  β_t = β_0 + 2β_1 t, (76)
Cosine: ν_t = 1 − C cos²( (π/2) (t/T + ς)/(1 + ς) ),  β_t = { (π/T) tan( (π/2) (t/T + ς)/(1 + ς) ) if 0 ≤ t ≤ T′;  Θ if T′ < t ≤ T }, (77)

where ς > 0 is a small constant, C = 1/cos²(πς/(2(1 + ς))) is a constant that makes ν_0 = 0, and the threshold constant is Θ = β_{T′}. However, these common schedules do not satisfy the 5th condition that the drift coefficient f̄[ is Lipschitz continuous. Indeed, it is easily verified that lim_{t→0} ν̇_t/ν_t = ∞ in both cases, since ν_0 = 0 but ν̇_0 > 0.
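This blow-up is easy to verify numerically; below is a small sketch (the horizon T and the constant ς for the cosine schedule are illustrative assumptions) that evaluates the Lipschitz proxy |ν̇_t/ν_t| near t = 0 for the two schedules using finite differences:

import numpy as np

beta0, beta1 = 0.1, 9.95                # linear schedule constants, eq. (76)
T, varsigma = 1.0, 0.008                # illustrative cosine-schedule constants, eq. (77)
C = 1.0 / np.cos(0.5 * np.pi * varsigma / (1.0 + varsigma)) ** 2

def nu_linear(t):
    return 1.0 - np.exp(-beta0 * t - beta1 * t ** 2)

def nu_cosine(t):
    return 1.0 - C * np.cos(0.5 * np.pi * (t / T + varsigma) / (1.0 + varsigma)) ** 2

def lipschitz_proxy(nu, t, eps=1e-6):
    """|nu'_t / nu_t| estimated with a central finite difference."""
    nu_dot = (nu(t + eps) - nu(t - eps)) / (2.0 * eps)
    return abs(nu_dot / nu(t))

for t in [1e-4, 1e-3, 1e-2, 0.1, 0.5]:
    print(f"t={t:8.4f}  linear: {lipschitz_proxy(nu_linear, t):12.1f}"
          f"  cosine: {lipschitz_proxy(nu_cosine, t):12.1f}")

The ratio grows without bound as t approaches 0 for both schedules, while it stays moderate for larger t, which is exactly the singular behavior discussed above.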
Nevertheless, t = 0 is the only singular point, and since no function value or derivative at t = 0 is evaluated by the numerical methods (except by the Runge-Kutta methods), this point can practically be ignored. Note that we can also consider some other schedule functions, such as the sigmoid function and the hyperbolic tangent, which satisfy conditions 2, 3, 4, 5 but do not satisfy the 1st condition rigorously (however, if ν_0 is less than or equal to the level of the quantization error in the data, we may consider the first condition to be essentially satisfied),

Sigmoid: ν_t = 1/(1 + e^{−A(t−k)}),  β_t = Aν_t, (78)
Modified Tanh: ν_t = tanh²(λ(t)/2),  β_t = λ̇(t) tanh(λ(t)/2), (79)

where the parameter function λ(t) has some options, such as λ(t) = log(1 + Ae^{kt}), and A > 0, k > 0 are hyperparameters.

D.3 HOW TO AVOID THE TIME ORIGIN SINGULARITY IN THE RUNGE-KUTTA METHODS

When using the Heun and classical RK4 methods, the function f̄[(x_t, t) is evaluated at time t = 0. However, since the function f̄[(x_t, t) contains a term proportional to 1/√ν_t, it will diverge at time t = 0 if the linear eq. (76) or cosine schedule eq. (77) is used. The simplest way to avoid this is to replace the function f̄[(x_0, 0) with f̄[(x_ε, ε), where ε > 0 is a sufficiently small constant, only when the need to evaluate the function at time t = 0 arises. The same thing could happen at t = T if the cosine schedule and DDIM were used simultaneously, but this can be handled in the same way. If we use the sigmoid eq. (78) or modified tanh eq. (79) schedules, these problems do not occur unless the hyperparameters A and k are chosen to be very extreme values.

E SUPPLEMENT ON FUNDAMENTALS

For convenience, let us summarize some basics behind the ideas in this paper. The contents of this section are not particularly novel, but the authors expect that this section will give a better understanding of the ideas of this paper and the continuous-time approach to diffusion generative models.

E.1 TAYLOR EXPANSION AND ITÔ-TAYLOR EXPANSION

E.1.1 TAYLOR EXPANSION OF DETERMINISTIC SYSTEMS

1-dimensional case. Let us first consider a 1-dim deterministic system ẋ(t) = a(x(t), t), where a(·, ·) is sufficiently smooth, and let us derive the Taylor series expression of the solution of this ODE. Let ϕ(x(t), t) be a differentiable function. Its total derivative is written as

dϕ = (∂ϕ/∂t) dt + (∂ϕ/∂x) dx = (∂ϕ/∂t) dt + (∂ϕ/∂x)(dx/dt) dt = ( ∂/∂t + a(x, t) ∂/∂x ) ϕ dt =: (L[ϕ) dt. (80)

By integrating both sides from 0 to t, we have

ϕ(x(t), t) = ϕ(x(0), 0) + ∫_0^t (L[ϕ)(x(s), s) ds. (81)

We use this formula recursively to obtain the Taylor series of the above system. Let ϕ(x(t), t) = x(t); then we have

x(t) = x(0) + ∫_0^t (L[x)(x(s), s) ds = x(0) + ∫_0^t a(x(s), s) ds. (82)

Let ϕ(x(t), t) = a(x(t), t); then we have

a(x(t), t) = a(x(0), 0) + ∫_0^t (L[a)(x(s), s) ds. (83)

Using the above two
1. What is the focus and contribution of the paper regarding solving SDE and ODE? 2. What are the strengths and weaknesses of the proposed approach, particularly in its approximation and flexibility? 3. Do you have any concerns or suggestions regarding the experiments and related works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This work proposes high order methods for solving both the SDE and the ODE, by approximating higher order gradients in the Taylor expansion. It seems that this method can be extended to arbitrary order by computing some constants. Experiments are conducted on CIFAR-10 and CelebA 64.

Strengths And Weaknesses
Strength: This work proposes high order methods for solving both the SDE and the ODE. It seems that this method can be extended to arbitrary order without much extra cost.
Weakness: The approximation in Eq.(19) and Eq.(21) looks rough. Is there any bound on the approximation error? It seems this method needs to use a specific noise schedule function in Eq.(28), which makes it less flexible. Would this method work with other noise schedule functions? There are only experiments on CIFAR-10 and CelebA 64. Besides, the diffusion model used is also weak, with an FID around 10 on CIFAR-10. Missing related works on speeding up diffusion models, such as [1, 2, 3].
[1] Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models
[2] Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
[3] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps

Clarity, Quality, Novelty And Reproducibility
The paper is written clearly. Since the idea of approximating higher order gradients already exists, e.g., in [3], this work is relatively less novel.
ICLR
Title
Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates

Abstract
Learning neural networks with gradient descent over a long sequence of tasks is problematic, as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, which both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires calculating the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work.

1 INTRODUCTION
The main goal of machine learning is the ability to generalize from the given training data to unseen examples. However, in practice the achievable degree of generalization is limited. While in the ideal case an end-to-end system learns complex functions from minimum input, it is often necessary to introduce a certain amount of prior knowledge. Such prior knowledge operates as an inductive bias and therefore has a constraining effect on the hypothesis space, i.e., the set of all possible functions that can be learned by the learning algorithm (Mitchell, 1980). While this sounds counter-intuitive, such a reduction of the hypothesis space may lead to better generalization properties in practice (Mitchell, 1980). Hence, instead of eliminating the bias to increase generalization (as suggested by Hessel et al. (2019)), a promising direction of research tries to identify and introduce the right form of it.
We can achieve this by limiting the functions that can be expressed by the learning algorithm or by introducing bias into the learning algorithm itself. Simple examples include the choice of linear activations to only allow approximations of linear functions, or the addition of a regularization term to the objective function. Similarly, we can also improve generalization by training on different tasks (Baxter, 2000) from a task family at the same time or by introducing auxiliary tasks (Jaderberg et al., 2017). This is commonly known as multitask learning and has been shown to not only improve generalization properties but also to be more sample-efficient (Baxter, 2000). Due to the limited availability of data for training we need a well-tuned inductive bias. Hence, such choices are crucial for the final real-world performance of any machine learning algorithm.

While multitask learning is a great tool to improve generalization and to reduce the amount of samples that are necessary to learn a family of tasks, it is still limited in its scalability. Both the number of tasks that can be learned and the amount of data required to learn them are strongly limiting factors. Consider, for instance, a reinforcement learning setup where an agent learns different tasks from interacting with an environment. In practice we are limited in storing the data for all relevant tasks required to train a model on all tasks jointly. However, learning those tasks sequentially is also not an option, as gradient descent and its variants (which are the dominant learning approaches for neural networks) do not consider the importance of individual parameters for earlier tasks. This destructive learning is commonly termed catastrophic forgetting (McCloskey & Cohen, 1989). While in the context of fine-tuning and pre-training (Erhan et al., 2009) this does not pose a problem (as the goal is not to reuse the previous parameter state, but rather to optimize the learning process for some target task), it becomes important in multitask problems where we wish to maximize generalization and sample-efficiency. It is also critical in the continual learning framework, where the parameters of a neural network are optimized over multiple datasets (representing different tasks) provided sequentially, which are not available at a later time. The goal is hence to retain all (or most) of the important parameters for previous tasks and to be able to build up on this knowledge for an arbitrary number of future tasks. Thus, the scalability of learning would only be limited by the capacity of the neural network, but not by the properties of the training method.

The Bayesian framework (Kirkpatrick et al., 2017; Ritter et al., 2018) is a promising approach to address catastrophic forgetting. The information about former tasks is condensed in a prior, which not only preserves the knowledge about tasks but also introduces an inductive bias based on the learned tasks. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is a simple yet efficient way to reduce catastrophic forgetting. EWC approximates the prior with a Gaussian centered around the optimized network parameters for previous tasks, where the diagonal precision is given by the diagonal approximation of the Fisher Information Matrix (FIM).
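To make the EWC recipe concrete before the formal recap in Section 2.2, the following is a minimal PyTorch sketch of its two ingredients, a diagonal FIM estimate and the quadratic penalty (an illustration with assumed helper names, not the implementation evaluated later in this paper; it uses the empirical Fisher, i.e., squared gradients of the log-likelihood at the given labels):

import torch
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    """Diagonal FIM estimate: average squared gradient of the log-likelihood."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_samples = 0
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        log_lik = F.log_softmax(model(x), dim=1).gather(1, y.unsqueeze(1)).sum()
        log_lik.backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
        n_samples += x.size(0)
    return {n: f / n_samples for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher, lam):
    """Quadratic penalty (lam/2) * sum_i F_i (theta_i - theta*_i)^2 around the old solution."""
    reg = 0.0
    for n, p in model.named_parameters():
        reg = reg + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * reg

During training on a new task, the total loss is then the task loss plus one such penalty term per previously learned task.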
This approach has two significant downsides: i) each new task adds a new regularization term that penalizes changes of parameters that are relevant to previous tasks; and ii) the diagonal approximation of the FIM assumes independent network parameters, which leads to information loss with a growing number of tasks. Ritter et al. (2018) extend EWC but still approximate the prior from previous tasks using a Gaussian. They devise a block-diagonal approximation for the prior from the older tasks by defining a quadratic approximation whose solution requires calculating the Hessian. The Hessian is in turn approximated by the block-diagonal Kronecker-factored approximation.

In this work we propose an alternative way of calculating the Hessian, based on well-established Hessian-free methods (Schraudolph, 2002; Pearlmutter, 1994) to estimate curvature information of the network parameters. In contrast to Ritter et al. (2018), we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the network parameters when we train the network over a long sequence of tasks. We evaluate our algorithm on permuted MNIST (Kirkpatrick et al., 2017), disjoint MNIST (Ritter et al., 2018) and single-headed disjoint MNIST (Farquhar & Gal, 2019), and compare with state-of-the-art approaches. Our results show that we consistently outperform EWC across all tasks and that we are on par with Ritter et al. (2018) on the disjoint tasks, while our method has significantly lower space complexity compared to both EWC and the Kronecker-factored approximation.

The remainder of this paper is structured as follows. Section 2 provides background on continual learning, EWC, and the Kronecker-factored Laplace approximation. Section 3 describes our method in detail. Section 4 shows the efficiency of our approach and compares it against the state of the art on a variety of well-known task sequences. Section 5 discusses related work. Section 6 concludes.

2 BACKGROUND

2.1 CONTINUAL LEARNING AND CATASTROPHIC FORGETTING

In the continual learning framework the parameters θ ∈ R^n of a neural network are optimized over multiple datasets D_1, . . . , D_t, . . . , D_T. These individual datasets become available to the training algorithm one after another and usually cannot be revisited at a later time. The goal is to achieve a high accuracy/performance on the current task (represented by the current dataset D_t) while still preserving (all or most of) the performance on all the previously visited tasks. However, this is usually challenging for neural network models, as commonly used gradient-based optimization methods cannot distinguish between important and unimportant parameters for previous tasks. As a consequence, parameters that are relevant for previous tasks are modified (heavily), which leads to performance degradation when the network is used on any of those previous tasks (Rusu et al., 2016). Hence, to address catastrophic forgetting in neural networks we need to retain the parameters that are important for previous tasks while still allowing the network to learn new tasks. However, at the same time we also want the space complexity of the network to be independent of the number of tasks that were observed so far (and that are about to come).
This means that learning a new task while retaining high performance on all prior tasks should be possible without adding new parameters or regularization terms for each new task, at least as long as sufficient capacity is available. As a plus, we want to foster some degree of parameter sharing to enable positive transfer effects, e.g., improved sample-efficiency due to the fact that past experience can be reused.

2.2 ELASTIC WEIGHT CONSOLIDATION (EWC)

EWC (Kirkpatrick et al., 2017) is a simple yet efficient approach that meets most of the above-mentioned requirements. The key idea is to add a penalty when parameters that are important for previous tasks are about to be changed, while parameters that are less relevant for previous tasks do not receive a penalty. EWC uses a quadratic penalty term that is derived from a Bayesian formulation of the problem (where the information of all previous tasks is condensed in the prior) as follows:

p(θ | D_{1:t+1}) = p(D_{t+1} | θ) p(θ | D_{1:t}) / p(D_{t+1}), (1)

where p(θ | D_{1:t+1}) and p(θ | D_{1:t}) are the posterior and prior distributions over the parameters θ of the network, and D_1, . . . , D_t, D_{t+1} are the datasets corresponding to the respective tasks. If we want to learn a new task we update the posterior by conditioning it on the newly available data D_{t+1}. However, we have to address two problems that stem from Equation 1. First, maintaining the full posterior over all previous datasets is usually intractable (Ritter et al., 2018; Opper & Winther, 1998) and we instead need to approximate it. Second, without storing the information from all previous tasks there is no easy solution to update the posterior. The first problem can be addressed by approximating the posterior with a Gaussian (MacKay, 1992):

p(θ | D_{1:t}) ∼ N(µ_t, Σ_t). (2)

With two tasks A and B and their datasets D_A and D_B, for the posterior p(θ | D_A) the mean µ_A is given by the solution for the previous task θ*_A, and the precision Σ^{−1}_A, i.e., the inverse of the covariance, by the diagonal of the Fisher information matrix (FIM) F. Learning tasks A and B consecutively then results in the following objective function:

L(θ) = L_B(θ) + (λ/2) (θ − θ*_A)^T F (θ − θ*_A), (3)

where L_B(θ) is the loss depending on the current data D_B, and λ is a hyperparameter that controls the influence of the regularization term. At this point we only need to store the previous weights and the diagonal approximation of the FIM for the previous task. For another task C we store a separate FIM for that new task together with the solution θ*_B for task B, and add another regularization term:

L(θ) = L_C(θ) + (λ/2) (θ − θ*_A)^T F_A (θ − θ*_A) + (λ/2) (θ − θ*_B)^T F_B (θ − θ*_B). (4)

2.3 KRONECKER-FACTORED LAPLACE APPROXIMATION

The diagonal approximation of the FIM assumes the parameters to be independent, which is rarely the case in practice. Ritter et al. (2018) address this shortcoming by adopting the Bayesian online learning approach (Opper & Winther, 1998). As the prior p(θ | D_{1:t}) preserves all the information about the previous tasks, recursively using the previous posterior as the next prior makes it possible to find a MAP estimate θ* = arg max_θ p(θ | D_1, . . . , D_{t+1}) sequentially. Because the posterior conditioned on all previous tasks is intractable, a parameterization of the posterior p(θ | D_{t+1}, w(t)) with parameters w(t) is introduced. Updating this parametric approximate posterior requires two steps:

1.
Update Step: in an update step the old approximate posterior p(θ | w(t)) is used to perform an update using Bayes' rule (see Ritter et al. (2018) for a detailed analysis):

p(θ | D_{t+1}, w(t)) = p(D_{t+1} | θ) p(θ | w(t)) / ∫ dθ′ p(D_{t+1} | θ′) p(θ′ | w(t)) (5)

2. Projection Step: in a projection step the new posterior p(θ | D_{t+1}, w(t)) is projected onto the same parametric family as p(θ | w(t)) (as they are usually not from the same parametric family):

q(θ | w(t+1)) ≈ p(θ | D_{t+1}, w(t)). (6)

Similar to EWC, the update step can be approximated by a Gaussian approximate posterior:

L(θ) = L_{t+1}(θ) + (1/2) (θ − µ_t)^T Σ^{−1}_t (θ − µ_t). (7)

As before, the mean µ_t is given by the solution for the previous task θ*_t. Accordingly, the parameters w(t) are given by w(t) = {µ_t, Σ^{−1}_t}. The core improvement that this framework offers is encapsulated in the projection step: instead of adding a new regularization term for each new task, Σ^{−1}_t is projected to Σ^{−1}_{t+1}, which then maintains information about all tasks up to task t + 1. Ritter et al. (2018) realize this by computing the Hessian around the most recent solution θ*_{t+1} and adding it to the Hessians from all previous solutions:

Σ^{−1}_{t+1} = H_{t+1}(θ*_{t+1}) + Σ^{−1}_t,  where  H_{t+1}(θ*_{t+1}) = − ∂² p(D_{t+1} | θ) / ∂θ ∂θ |_{θ=θ*_{t+1}}. (8)

This way, information about previous tasks can be preserved while still limiting the storage requirements to a constant number of parameters. However, in practice this approach needs to store a set of parameters per task.

3 HESSIAN-FREE CURVATURE ESTIMATION

Previous approaches identify the most important parameters for each previous task and then prevent the modification of those parameters during the training of a new task. EWC uses the diagonal of the FIM, while Ritter et al. (2018) use a Hessian approximated with the block-diagonal Kronecker-factored approximation. We address the same problem but approach it differently.

We build upon the intuition of meta-learning in general and of the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) in particular. MAML identifies model parameters that (upon modification) lead to faster learning for all tasks in a given task distribution. By defining a meta-learning objective and using available data for all tasks in the task distribution, it learns network weights that will lead to faster learning and generalization in new tasks if used as a starting point for the optimization. In our case, apart from the fact that we assume no access to samples from previous tasks, we invert the intuition behind MAML: we identify model parameters that are sensitive to changes in each task, but instead of tuning these parameters to be a good starting point for the fine-tuning of all tasks, we penalize large changes to them, as this would deteriorate the performance of previous tasks.

In order to identify the important network parameters, i.e., parameters that upon being changed lead to a big change in the loss, we also use the Hessian matrix, but in contrast to the Kronecker-factored Laplace approximation we exploit the fact that most regions of the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as this subset already holds enough relevant information. We then use a Hessian-vector product to sample from this subset. In essence, we need to estimate directions with high curvature, as along those directions we find the important weights of the network. However, any computation involving the exact Hessian for larger networks is infeasible in practice.
Hence, it is key to find a good approximation of the Hessian while still preserving enough curvature information to determine which parameters are crucial for the previous tasks. Fortunately, as most regions in the loss surface are flat, it is sufficient to only extract information about the few regions that exhibit a high curvature. Thus, instead of computing the full Hessian we compute a Hessian-vector product, which is similar to sampling the curvature in the direction of a given vector. There are two important questions to answer here: (i) how to efficiently calculate the Hessian-vector product, and (ii) how to choose a suitable vector/direction.

An efficient Hessian-vector-product calculation was initially presented in Pearlmutter (1994) and has subsequently been used for several Hessian-free (also called truncated-Newton) optimization methods (Schraudolph, 2002; Martens, 2010). The key idea is that the Hessian is not calculated explicitly. Instead, for a given vector v the Hessian-vector product Hv is directly computed using finite differences (Martens, 2010) at the cost of a forward and a backward pass through the network (e.g., using algorithms such as back-propagation). The Hessian-vector product is then calculated by (see Pearlmutter (1994) for the implementation details):

Hv = lim_{ε→0} (∇f(θ + εv) − ∇f(θ)) / ε = ∂/∂ε ∇f(θ + εv) |_{ε=0}. (9)

Given that the Hessian-vector product can be computed as described above, the second question is how to choose the vector v that defines the direction in which we sample the curvature. Inspired by Stochastic Meta-Descent (Bray et al., 2004a;b), which uses the combination of the momentum and a Hessian-vector product to estimate gradient directions with low curvature, our first choice for the vector v is the momentum. In our case the momentum is calculated as the exponentially weighted moving average of the past gradients:

v_{t+1} = α v_t + (1 − α) ∇f(θ), (10)

where α controls the discount of older observations. The momentum is a sensible choice for the vector, as it holds information about the parameters that have been changed the most during training. The assumption is then that exactly these parameters will be among the most important ones for the most recent task. As such, if the parameters for the previous task θ*_{t−1} are at an optimum, any change to important parameters results in a performance drop.

An alternative to the momentum is the eigenvector corresponding to the largest eigenvalue. This eigenvector represents the direction of highest curvature, and therefore by definition includes the most important parameters for the most recent task. A simple way to compute this eigenvector is to use the power method (Wilkinson, 1965), which entails computing Hessian-vector products.

Both versions result in a vector which maintains critical information about second-order interactions. From this vector we construct a positive semidefinite matrix by placing its absolute values as the entries of a diagonal matrix. Let h_t be the resulting vector of the Hessian-vector product Hv for task t; then our curvature estimate C_t is given as

C_t = diag(|h_{t,1}|, . . . , |h_{t,n}|), (11)

with n the number of network parameters. The projection step then is defined as

Σ^{−1}_t = C_t + Σ^{−1}_{t−1}, (12)

and the final objective function for a new task t + 1 as

L(θ) = L_{t+1}(θ) + (λ/2) (θ − θ*_t)^T Σ^{−1}_t (θ − θ*_t). (13)

Similar to Kirkpatrick et al. (2017) and Ritter et al.
(2018), we add a hyperparameter λ that controls the influence of the regularization term on the overall loss, i.e., how to weigh the importance of the previous tasks against the most recent task.

One of the main advantages of our approach is its low storage requirement. Following the analysis in Ritter et al. (2018), the Kronecker-factored approximation approach requires that the Hessians for all previous tasks are kept in memory, and the same holds for EWC, as the diagonal approximations of the FIM for all previous tasks are required to learn each new task. Instead, our approach only needs to store two vectors of the same size as the network parameters, independently of the length of the task sequence.

4 EXPERIMENTS

In our experiments, we compare both of our Hessian-free curvature estimations (eigenvector and momentum) to closely related methods, i.e., EWC (Kirkpatrick et al., 2017) and the Kronecker-factored approximation (Ritter et al., 2018). For both EWC and the Kronecker-factored approximation we adapt the implementation from https://github.com/hannakb/KFA. We release the source code of our methods upon publication.

4.1 PERMUTED MNIST

For our first evaluation, we utilize the widely used permutedMNIST dataset as presented in Goodfellow et al. (2013) and used in Kirkpatrick et al. (2017) and Ritter et al. (2018). The dataset contains 28 × 28 grey-scale images that are permuted randomly in order to generate new tasks. Each permutation is a truly new task, since a permuted image is not recognizable from its original. For the evaluation, we perform a hyperparameter search with the following range of parameters: i) network structure: either 1 layer with 200 hidden units or 2 layers with 100 hidden units each; ii) λ ∈ [1, 2, 3, 10, 20, 30, 100, 300]. We use the ADAM optimizer with a learning rate of 0.001, a momentum of 0.5, and a batch size of 64 over 10 epochs.

Figure 1 shows the mean average accuracy over all 50 tasks with the best hyperparameters discovered for each method. While the Kronecker-factored approximation achieves 83.82%, Hessian-free curvature estimation achieves 62.58% and Hessian-free curvature estimation with the largest eigenvector achieves 61.63%, leading to better results compared to EWC (51.62%) for the last 15 tasks. Even though the Kronecker-factored approximation achieves better performance compared to our approach, according to Farquhar & Gal (2019) other tasks can be more representative for evaluating continual learning approaches. In fact, Farquhar & Gal (2019) suggest to use a specific version of disjointMNIST, which we evaluate below.

4.2 DISJOINTMNIST

For an evaluation according to disjointMNIST (Ritter et al., 2018) we split MNIST into two tasks: (1) digits '0' to '4' and (2) digits '5' to '9'. For this experiment we use a network with a ten-way classifier, which makes the problem considerably more challenging than in the previous experiment where we used a five-way classifier. Hence, here the classifier learns a strong (bad) prior for the (respective) unseen classes in the datasets. It is more difficult because training on the second split can easily overwrite the parameters of the ten-way classifier for the classes of the first split. We use a simple dense feed-forward network architecture with 2 layers and 100 hidden units in each layer, as well as a batch size of 250, as reported in Ritter et al. (2018). We use 10 epochs and the same Adam parameters as in the permutedMNIST experiment. This allows a comparison of our results against the Kronecker-factored approximation and EWC.
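For concreteness, the following is a minimal PyTorch sketch of the curvature estimate and penalty evaluated in these experiments, as we read eqs. (9)–(13); it is an illustration only (the helper names are placeholders, and it uses double backpropagation rather than the explicit finite-difference formula of eq. (9)), not the released implementation:

import torch

def hessian_vector_product(loss, params, vec):
    """Hv via double backprop: differentiate (grad(loss) . v) w.r.t. the parameters."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

def curvature_estimate(loss, params, momentum):
    """Diagonal curvature C_t = diag(|Hv|) with v given by the momentum, eq. (11)."""
    hv = hessian_vector_product(loss, params, momentum)
    return [h.abs().detach() for h in hv]

def hfce_penalty(params, old_params, precision, lam):
    """Quadratic penalty (lam/2) (theta - theta*_t)^T Sigma^{-1}_t (theta - theta*_t), eq. (13)."""
    reg = 0.0
    for p, p_old, c in zip(params, old_params, precision):
        reg = reg + (c * (p - p_old) ** 2).sum()
    return 0.5 * lam * reg

# After finishing a task (sketch of the projection step, eq. (12)):
# precision = [c_prev + c_new for c_prev, c_new in
#              zip(precision, curvature_estimate(task_loss, params, momentum))]

The momentum vector passed to curvature_estimate is the exponentially weighted moving average of past gradients from eq. (10), maintained during training of the current task.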
Following the same evaluation procedure as Ritter et al. (2018), Figure 2a illustrates the result of a hyperparameter search over λ ∈ [10^0, 10^1, . . . , 10^7] for EWC, the Kronecker-factored approximation, and ours (i.e., Hessian-free curvature estimation using either the largest eigenvector or the momentum as the vector v). The results show the trade-off between retaining information on old tasks and the learning accuracy on new tasks. Note that the different scales of λ between our results and those from Ritter et al. (2018) only stem from different implementation details (the results are still comparable). Similar to the permutedMNIST experiment, we see that our approach (using the momentum) outperforms EWC with 91.01% (at λ = 10^6) vs. 86.11% (which is what we expected, as EWC disregards parameter dependencies that are not reflected by the diagonal of the FIM). Surprisingly, our approach is even comparable to the Kronecker-factored approximation (which reaches 94.93%), although our method uses considerably less memory to store information on the importance of parameters. The use of the largest eigenvector, on the other hand, performs poorly compared to the other methods with 72.69% at λ = 10^6.

4.3 SINGLE-HEADED SPLIT MNIST

For the Single-Headed-Split-MNIST task (Farquhar & Gal, 2019) the available digits are split into five groups (i.e., tasks) of two classes each. The classifier (as for permutedMNIST) uses ten outputs, i.e., one for each digit, and the network is trained on each task one after another. In contrast to some other work (Zenke et al., 2017), all the tasks share the classifier head instead of having multiple task-specific outputs. Hence, the predictions are made for all possible outputs, not only for the outputs of classes that belong to the most recent task. We use the same network as in the previous experiments (i.e., 2 layers of 100 hidden units each) and a batch size of 64.

Figure 2b shows the results after a hyperparameter search over λ. As in the previous experiments, we can observe that both of our Hessian-free curvature estimations consistently outperform EWC (Hessian-free with momentum achieves 57.54% and the eigenvector approach 55.36%, while EWC reaches 46.73%) and that the momentum-based variant again comes close to the Kronecker-factored approximation (which is at 57.2% at the end).

5 RELATED WORK

Related work in the field of catastrophic forgetting is mainly driven by regularization methods, rehearsal methods, and dynamic architecture methods.

Regularization Methods. Elastic Weight Consolidation (Kirkpatrick et al., 2017) measures the distance between the network weights for the current task and the weight state of previous tasks, and applies a quadratic penalty weighted by a diagonal approximation of the Fisher information matrix to ensure that the new weights are not too far from the old weights. EWC only penalizes important parameters, while the parameters that have no influence on the performance of previous tasks are allowed to change freely. Similar approaches have been proposed by Aljundi et al. (2018) and Lee et al. (2017). The main difference is how the importance of parameters for previous tasks is approximated. However, all these approaches have limited performance, as they do not consider interactions between the parameters. Instead of using the diagonal of the Fisher information matrix, Ritter et al. (2018) apply a Kronecker-factored approximation of the Hessian. This leads to strong improvements over EWC.
This approach is most similar to ours, as it attempts to capture second-order parameter interactions to regularize parameter change. The main difference to our method is the usage of the Kronecker factorization to store the Hessian in a compact way, while we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector product to sample from this subset.

Rehearsal Methods. Rehearsal methods attempt to reduce catastrophic forgetting by replaying examples of previous tasks when learning a new task. A first approach here is to not only learn the actual task at hand but also the distribution of the training data. When a new task is learned, artificial samples from this learned distribution are added to the current set of training data. Typically this is done by adding a Variational Autoencoder (Kamra et al., 2017). Recent approaches (Shin et al., 2017) also employ generative adversarial networks with promising results. A second, more direct approach preserves a subset of the training data for each task in an episodic memory and reuses it to constrain the learning process of future tasks (Lopez-Paz et al., 2017). However, while being effective in reducing catastrophic forgetting in general, both approaches have shortcomings, as the inherent problem of catastrophic forgetting is simply shifted to a scalability problem. In generative approaches, samples for all previous tasks must be replayed each time to preserve old parameter states, and as the number of tasks increases this becomes problematic. Similarly for the direct approach: even if only a small subset of examples for each task is preserved, we can still end up with a large dataset as the number of tasks increases.

Dynamic Architecture Methods. Another way to address catastrophic forgetting is to incrementally increase the capacity of the architecture. Approaches vary mainly in whether new capacity is added for each new task by default, or whether this is determined by a metric. Progressive Neural Networks (Rusu et al., 2016) add a new network for each new task, and each new network is connected via lateral connections to the old ones to allow for transfer from previous tasks to the current one. This avoids catastrophic forgetting by design, but as each new task requires a new network this approach does not scale well with the number of tasks. In contrast to Progressive Nets, other approaches only add capacity when it is necessary. Part & Lemon (2016) present an approach based on Self-Organizing Maps, which employs a similarity metric to determine whether a new node should be added to the network. Similar to this, Xiao et al. (2014) start out with a classifier with one super-class and add new parameters based on an error signal. Depending on the error made by the current model, only the final layer is extended by another output dimension, or a whole new sub-network is added as a subclass. Yoon et al. (2018) use the combination of sparsity and breadth-first search to determine which parameters should be retrained for the current task. If the features learned so far are not able to represent the new task, more capacity is added dynamically (as in Xiao et al. (2014)).
While these methods suffer significantly less from scalability issues, their main disadvantage lies in the fact that they impose very stringent architectural constraints, which cannot be easily transferred to an arbitrary existing model.

6 CONCLUSION

This paper addressed catastrophic forgetting within a continual learning framework, where the ultimate goal lies in the identification of the network weights that are important to previously learned tasks. While previous work in this direction is either limited in the achievable accuracy (as it only considers the diagonal of the Fisher Information Matrix) or limited in the number of tasks (as it needs to store information that grows linearly with the number of tasks), we set out to provide a first approach that uses second-order parameter dependencies with constant space complexity. We exploit the fact that most regions in the loss surface are flat, which allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the parameters when we train the network over a long task sequence. We evaluated our algorithm on three widely used benchmarks and compared it with the state of the art. Our results show that we consistently outperform EWC across all benchmarks and that we are better than or at least on par with the Kronecker-factored approximation, while our method at the same time requires significantly less memory.
1. What is the main problem addressed by the paper regarding neural network learning? 2. How do existing works, such as Elastic Weight Consolidation (EWC), attempt to alleviate the issue of "catastrophic forgetting"? 3. What is the proposed approach in the paper, and how does it differ from previous methods like EWC and Ritter et al. (2018)? 4. What is the advantage of the proposed approach, and how does it compare to other methods empirically? 5. What are some open questions or areas for further research related to the proposed approach, particularly regarding the choice of direction/vector?
Review
Review
The paper focuses on alleviating the problem of "catastrophic forgetting", exhibited by neural networks learned with gradient-based algorithms over long sequences of tasks. In such learning scenarios, tuning of parameters on the new tasks leads to degradation of performance on the old tasks, as the parameters important for the latter are overwritten. The gradient-based algorithms are unable to distinguish between the important and the not-so-important parameters of the old tasks. Hence, one line of work, including the proposed one, aims at identifying the most important parameters for all the old tasks and discouraging modifications of those parameters during the training of the new tasks. Existing works like Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) have proposed a Bayesian framework to lessen such forgetfulness by condensing the information of the previous tasks and supplying it as a prior for the new task. In such a framework, Ritter et al. (2018) propose a quadratic approximation of the prior whose solution requires computing an (approximate block-diagonal Kronecker-factored) Hessian. The paper employs a recent result (Ghorbani et al., 2019) to argue that most regions of the loss surface are flat. Hence, computing the Hessian in only a few regions (which exhibit high curvature) should suffice. However, computing the exact Hessian for large networks is infeasible in practice. The paper, therefore, uses Hessian-vector products (Schraudolph, 2002; Pearlmutter, 1994), which is similar to sampling the curvature in the direction of a given vector. The key advantage of the proposed approach is the low storage requirement. Regarding how to choose a suitable direction/vector, the paper suggests two choices: the momentum vector or the eigenvector corresponding to the largest eigenvalue (of the Hessian). The motivation behind the above choices, especially the former option, is unsatisfactory. Empirically, we observe that the momentum vector is a better option than the eigenvector. However, a (theoretical/empirical) deep-dive into why the momentum vector is a good candidate should be done. Empirically, the proposed approach with the momentum vector performs better than EWC but worse than Ritter et al. (2018). More discussion of the results (esp. Hv-momentum vs Hv-eigenvector) would have shed more light on the proposed approach.
ICLR
Title Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates Abstract Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work. N/A Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work. 1 INTRODUCTION The main goal of machine learning is the ability to generalize from the given training data to unseen examples. However, in practice the achievable degree of generalization is limited. While in the ideal case an end-to-end system learns complex functions from minimum input, it is often necessary to introduce a certain amount of prior knowledge. Such prior knowledge operates as an inductive bias and therefore has a constraining effect on the hypothesis space, i.e., the set of all possible functions that can be learned by the learning algorithm (Mitchell, 1980). While this sounds counter-intuitive such a reduction of the hypothesis space may lead to better generalization properties in practice (Mitchell, 1980). Hence, instead of eliminating the bias to increase generalization (as suggested by Hessel et al. (2019)), a promising direction of research tries to identify and introduce the right form of it. 
We can achieve this by limiting the functions that can be expressed by the learning algorithm or by introducing bias to the learning algorithm itself. Simple examples include the choice for linear activations to only allow approximations of linear functions or to add a regularization term to the objective function. Similar to this, we can also improve generalization by training on different tasks (Baxter, 2000) from a task family at the same time or by introducing auxiliary tasks (Jaderberg et al., 2017). This is commonly known as multitask learning and has shown to not only improve generalization properties but also to be more sample-efficient (Baxter, 2000). Due to the limited availability of data for training we need a well-tuned inductive bias. Hence, such choices are crucial for the final real-world performance of any machine learning algorithm. While multitask learning is a great tool to improve generalization and to reduce the amount of samples that are necessary to learn a family of tasks it is still limited in its scalability. Both the amount of tasks that can be learned and the amount of data required to learn them are strongly limiting factors. Consider, for instance, a reinforcement learning setup where an agent learns different tasks from interacting with in an environment. In practice we are limited in storing the data for all relevant tasks required to train a model on all tasks jointly. However, learning those tasks sequentially is also not an option as gradient descent and its variants (which are the dominant learning approaches for neural networks) do not consider the importance of individual parameters for early tasks. This destructive learning is commonly termed as catastrophic forgetting (McCloskey & Cohen, 1989). While in the context of fine-tuning and pre-training (Erhan et al., 2009) this does not bear a problem (as the goal is not to reuse the previous parameter state, but rather to optimize the learning process for some target task) it becomes important in multitask problems where we wish to maximize generalization and sample-efficiency. It is also critical in the continual learning framework, where the parameters of a neural network are optimized over multiple datasets (representing different tasks) provided sequentially, which are not available at later time. The goal is hence to retain all (or most) of the important parameters for previous tasks and to be able to build-up on this knowledge for an arbitrary number of future tasks. Thus, the scalability of learning would only be limited by the capacity of the neural network but not by the properties of the training method. The Bayesian framework (Kirkpatrick et al., 2017; Ritter et al., 2018) is a promising approach to address catastrophic forgetting. The information about former tasks is condensed in a prior, which not only preserves the knowledge about tasks but also introduces an inductive bias based on the learned tasks. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is a simple yet efficient way to reduce catastrophic forgetting. EWC approximates the prior with a Gaussian centered around the optimized network parameters for previous tasks, where the diagonal precision is given by the diagonal approximation of the Fisher Information Matrix (FIM). 
This approach has two significant downsides: i) each new task adds a new regularization term that penalizes changes of parameters that are relevant to previous tasks; and ii) the diagonal approximation of the FIM assumes independent network parameters, which leads to information loss with a growing number of tasks. Ritter et al. (2018) extend EWC but still approximate the prior from previous tasks using a Gaussian. They devise a block-diagonal approximation for the prior from the older tasks by defining a quadratic approximation whose solution requires calculating the Hessian. The Hessian is in turn approximated by the block-diagonal Kronecker-factored approximation.

In this work we propose an alternative way of estimating the required curvature information of the network parameters, based on well-established Hessian-free methods (Schraudolph, 2002; Pearlmutter, 1994). In contrast to Ritter et al. (2018), we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the network parameters when we train the network over a long sequence of tasks. We evaluate our algorithm on permuted MNIST (Kirkpatrick et al., 2017), disjoint MNIST (Ritter et al., 2018) and single-headed disjoint MNIST (Farquhar & Gal, 2019), and compare with state-of-the-art approaches. Our results show that we consistently outperform EWC across all tasks and that we are on par with Ritter et al. (2018) on the disjoint tasks, while our method has significantly lower space complexity compared to both EWC and the Kronecker-factored approximation.

The remainder of this paper is structured as follows. Section 2 provides background on continual learning, EWC, and the Kronecker-factored Laplace approximation. Section 3 describes our method in detail. Section 4 shows the efficiency of our approach and compares it against the state of the art on a variety of well-known task sequences. Section 5 discusses related work. Section 6 concludes.

2 BACKGROUND

2.1 CONTINUAL LEARNING AND CATASTROPHIC FORGETTING

In the continual learning framework the parameters θ ∈ R^n of a neural network are optimized over multiple datasets D_1, . . . , D_t, . . . , D_T. These individual datasets become available to the training algorithm one after another and usually cannot be revisited at a later time. The goal is to achieve high accuracy/performance on the current task (represented by the current dataset D_t) while still preserving most of the performance on all previously visited tasks. However, this is usually challenging for neural network models, as commonly used gradient-based optimization methods cannot distinguish between important and unimportant parameters for previous tasks. As a consequence, parameters that are relevant for previous tasks are modified (heavily), which leads to performance degradation when the network is used on any of those previous tasks (Rusu et al., 2016). Hence, to address catastrophic forgetting in neural networks we need to retain the parameters that are important for previous tasks while still allowing the network to learn new tasks. However, at the same time we also want the space complexity of the network to be independent of the number of tasks that were observed so far (and that are about to come). This means that learning a new task while retaining high performance on all prior tasks should be possible without adding new parameters or regularization terms for each new task, at least as long as sufficient capacity is available. In addition, we want to foster some degree of parameter sharing to enable positive transfer effects, e.g., improved sample-efficiency due to the fact that past experience can be reused.
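As an illustration of this protocol, the following is a minimal sketch of sequential training over tasks D_1, . . . , D_T. The callables train_one_task, evaluate, and make_regularizer are hypothetical placeholders (not part of any released implementation); the point is only the control flow: tasks arrive one at a time, earlier training data cannot be revisited, and knowledge about finished tasks is carried in the regularizer state rather than in stored data.

```python
def continual_training(model, task_datasets, train_one_task, evaluate, make_regularizer):
    """Sketch of the continual learning loop: train on one task at a time and
    summarize finished tasks in a regularizer state of constant size."""
    reg_state = None          # e.g. (theta_star, precision) summarizing finished tasks
    history = []
    for t, dataset in enumerate(task_datasets):
        # Train on the current task only; the regularizer penalizes changes to
        # parameters that were important for tasks 0..t-1.
        train_one_task(model, dataset, reg_state)
        # Consolidate: update the summary of old tasks from the current solution.
        reg_state = make_regularizer(model, dataset, reg_state)
        # Report accuracy on the held-out test splits of all tasks seen so far.
        history.append([evaluate(model, d) for d in task_datasets[: t + 1]])
    return history
```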
2.2 ELASTIC WEIGHT CONSOLIDATION (EWC)

EWC (Kirkpatrick et al., 2017) is a simple yet efficient approach that meets most of the above-mentioned requirements. The key idea is to add a penalty when parameters that are important for previous tasks are about to be changed, while parameters that are less relevant for previous tasks do not receive a penalty. EWC uses a quadratic penalty term that is derived from a Bayesian formulation of the problem (where all the information about all previous tasks is condensed in the prior):

p(θ | D_{1:t+1}) = p(D_{t+1} | θ) p(θ | D_{1:t}) / p(D_{t+1}),    (1)

where p(θ | D_{1:t+1}) and p(θ | D_{1:t}) are the posterior and prior distributions over the parameters θ of the network and D_1, . . . , D_t, D_{t+1} are the datasets corresponding to the respective tasks. If we want to learn a new task we update the posterior by conditioning it on the newly available data D_{t+1}. However, we have to address two problems that stem from Equation 1. First, maintaining the full posterior over all previous datasets is usually intractable (Ritter et al., 2018; Opper & Winther, 1998) and we instead need to approximate it. Second, without storing the information from all previous tasks there is no easy solution to update the posterior. The first problem can be addressed by approximating the posterior with a Gaussian (MacKay, 1992):

p(θ | D_{1:t}) ∼ N(µ_t, Σ_t).    (2)

With two tasks A and B and their datasets D_A and D_B, for the posterior p(θ | D_A) the mean µ_A is given by the solution for the previous task, θ*_A, and the precision Σ^{-1}_A, i.e., the inverse of the covariance, by the diagonal of the Fisher information matrix (FIM) F. Learning tasks A and B consecutively then results in the following objective function:

L(θ) = L_B(θ) + (λ/2) (θ − θ*_A)^T F (θ − θ*_A),    (3)

where L_B(θ) is the loss on the current data D_B, and λ is a hyperparameter that controls the influence of the regularization term. At this point we only need to store the previous weights and the diagonal approximation of the FIM for the previous task. For another task C we store a separate FIM for that new task together with the solution θ*_B for task B, and add another regularization term:

L(θ) = L_C(θ) + (λ/2) (θ − θ*_A)^T F_A (θ − θ*_A) + (λ/2) (θ − θ*_B)^T F_B (θ − θ*_B).    (4)
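The following sketch illustrates Equations (3)–(4) with a common practical estimate of the diagonal FIM (the per-parameter average of squared gradients at the solution of the finished task). It is written with PyTorch for concreteness; the function and variable names are illustrative, not taken from any cited implementation.

```python
import torch

def diagonal_fisher(model, loader, loss_fn):
    """Diagonal FIM estimate: mean of squared gradients of the loss, evaluated
    at the solution for a finished task (a common EWC-style approximation)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}

def ewc_penalty(model, anchors, lam):
    """Sum of quadratic penalties, one per finished task, as in Eqs. (3)-(4).
    `anchors` is a list of (theta_star, fisher) pairs, one entry per old task."""
    penalty = 0.0
    for theta_star, fisher in anchors:
        for n, p in model.named_parameters():
            penalty = penalty + (fisher[n] * (p - theta_star[n]) ** 2).sum()
    return 0.5 * lam * penalty
```

Note that the list of (θ*, F) anchors grows by one entry per task, which is exactly the storage behaviour criticized above.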
2.3 KRONECKER-FACTORED LAPLACE APPROXIMATION

The diagonal approximation of the FIM assumes the parameters to be independent, which is rarely the case in practice. Ritter et al. (2018) address this shortcoming by adopting the Bayesian online learning approach (Opper & Winther, 1998). As the prior p(θ | D_{1:t}) preserves all the information about the previous tasks, recursively using the previous posterior as the next prior makes it possible to find a MAP estimate θ* = arg max_θ p(θ | D_1, . . . , D_{t+1}) sequentially. Because the posterior conditioned on all previous tasks is intractable, a parameterization of the posterior p(θ | D_{t+1}, w(t)) with parameters w(t) is introduced. Updating this parametric approximate posterior requires two steps:

1. Update step: the old approximate posterior p(θ | w(t)) is used to perform an update using Bayes' rule (see Ritter et al. (2018) for a detailed analysis):

p(θ | D_{t+1}, w(t)) = p(D_{t+1} | θ) p(θ | w(t)) / ∫ p(D_{t+1} | θ′) p(θ′ | w(t)) dθ′.    (5)

2. Projection step: the new posterior p(θ | D_{t+1}, w(t)) is projected onto the same parametric family as p(θ | w(t)) (as they are usually not from the same parametric family):

q(θ | w(t+1)) ≈ p(θ | D_{t+1}, w(t)).    (6)

Similar to EWC, the update step can be approximated with a Gaussian approximate posterior:

L(θ) = L_{t+1}(θ) + (1/2) (θ − µ_t)^T Σ^{-1}_t (θ − µ_t).    (7)

As before, the mean µ_t is given by the solution for the previous task, θ*_t. Accordingly, the parameters w(t) are given by w(t) = {µ_t, Σ^{-1}_t}. The core improvement that this framework offers is encapsulated in the projection step: instead of adding a new regularization term for each new task, Σ^{-1}_t is projected to Σ^{-1}_{t+1}, which then maintains information about all tasks up to task t+1. Ritter et al. (2018) realize this by computing the Hessian around the most recent solution θ*_{t+1} and adding it to the Hessians from all previous solutions:

Σ^{-1}_{t+1} = H_{t+1}(θ*_{t+1}) + Σ^{-1}_t,  where  H_{t+1}(θ*_{t+1}) = − ∂²p(D_{t+1} | θ) / ∂θ∂θ |_{θ = θ*_{t+1}}.    (8)

This way, information about previous tasks can be preserved while still limiting the storage requirements to a constant number of parameters. However, in practice this approach needs to store a set of parameters per task.
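The recursion in Equations (7)–(8) can be summarized in a small sketch, independent of how the per-task curvature is approximated (Kronecker-factored, diagonal, or otherwise). The class and the curvature_fn placeholder below are illustrative assumptions, not the interface of Ritter et al. (2018).

```python
import torch

class OnlineLaplaceRegularizer:
    """Sketch of the projection step: a single Gaussian prior whose precision
    accumulates curvature from every finished task, so only one quadratic
    penalty is kept regardless of the number of tasks (Eqs. 7-8)."""

    def __init__(self, model):
        self.mean = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.precision = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

    def penalty(self, model, lam):
        total = 0.0
        for n, p in model.named_parameters():
            total = total + (self.precision[n] * (p - self.mean[n]) ** 2).sum()
        return 0.5 * lam * total

    def consolidate(self, model, curvature_fn):
        # Projection step: Sigma^{-1}_{t+1} = H_{t+1}(theta*_{t+1}) + Sigma^{-1}_t
        new_curvature = curvature_fn(model)          # per-parameter curvature estimate
        for n, p in model.named_parameters():
            self.precision[n] = self.precision[n] + new_curvature[n]
            self.mean[n] = p.detach().clone()
```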
3 HESSIAN-FREE CURVATURE ESTIMATION

Previous approaches identify the most important parameters for each previous task and then prevent the modification of those parameters during the training of a new task. EWC uses the diagonal of the FIM, while Ritter et al. (2018) use a Hessian approximated by the block-diagonal Kronecker-factored approximation. We address the same problem but approach it differently.

We build upon the intuition of meta-learning in general and of the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) in particular. MAML identifies model parameters that (upon modification) lead to faster learning for all tasks in a given task distribution. By defining a meta-learning objective and using available data for all tasks in the task distribution, it learns network weights that lead to faster learning and generalization on new tasks when used as a starting point for optimization. In our case, apart from the fact that we assume no access to samples from previous tasks, we invert the intuition behind MAML: we identify model parameters that are sensitive to changes in each task, but instead of tuning these parameters to be a good starting point for the fine-tuning of all tasks, we penalize large changes to them, as such changes deteriorate the performance on previous tasks.

In order to identify the important network parameters, i.e., parameters that upon being changed lead to a big change in the loss, we also use the Hessian matrix, but in contrast to the Kronecker-factored Laplace approximation we exploit the fact that most regions of the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as this subset already holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. In essence, we need to estimate directions of high curvature, as along those directions we find the important weights of the network. However, any computation involving the exact Hessian for larger networks is infeasible in practice. Hence, it is key to find a good approximation of the Hessian while still preserving enough curvature information to determine which parameters are crucial for the previous tasks. Fortunately, as most regions in the loss surface are flat, it is sufficient to only extract information about the few regions that exhibit high curvature. Thus, instead of computing the full Hessian we compute a Hessian-vector-product, which is similar to sampling the curvature in the direction of a given vector. There are two important questions to answer here: (i) how to efficiently calculate the Hessian-vector-product, and (ii) how to choose a suitable vector/direction.

An efficient Hessian-vector-product calculation was initially presented by Pearlmutter (1994) and has subsequently been used in several Hessian-free (also called truncated-Newton) optimization methods (Schraudolph, 2002; Martens, 2010). The key idea is that the Hessian is not calculated explicitly. Instead, for a given vector v the Hessian-vector-product Hv is computed directly, e.g., using finite differences (Martens, 2010), at the cost of one forward and one backward pass through the network (using algorithms such as back-propagation). The Hessian-vector-product is then calculated by (see Pearlmutter (1994) for the implementation details):

Hv = lim_{ε→0} [∇f(θ + εv) − ∇f(θ)] / ε = ∂/∂ε ∇f(θ + εv) |_{ε=0}.    (9)

Given that the Hessian-vector-product can be computed as described above, the second question is how to choose the vector v that defines the direction in which we sample the curvature. Inspired by Stochastic Meta-Descent (Bray et al., 2004a;b), which uses the combination of the momentum and a Hessian-vector-product to estimate gradient directions with low curvature, our first choice for the vector v is the momentum. In our case the momentum is calculated as the exponentially weighted moving average of the past gradients:

v_{t+1} = α v_t + (1 − α) ∇f(θ),    (10)

where α controls the discount of older observations. The momentum is a sensible choice for the vector, as it holds information about the parameters that have been changed the most during training. The assumption is that exactly these parameters will be among the most important ones for the most recent task. As such, if the parameters for the previous task θ*_{t−1} are at an optimum, any change to these important parameters results in a performance drop.

An alternative to the momentum is the eigenvector corresponding to the largest eigenvalue of the Hessian. This eigenvector represents the direction of highest curvature and therefore by definition includes the most important parameters for the most recent task. A simple way to compute this eigenvector is the power method (Wilkinson, 1965), which only requires computing Hessian-vector-products.

Both versions result in a vector that maintains critical information about second-order interactions. From this vector we construct a positive semidefinite matrix by placing its absolute values as the entries of a diagonal matrix. Let h_t be the resulting vector of the Hessian-vector-product Hv for task t; then our curvature estimate C_t is given as

C_t = diag(|h_{t,1}|, . . . , |h_{t,n}|),    (11)

with n the number of network parameters. The projection step is then defined as

Σ^{-1}_t = C_t + Σ^{-1}_{t−1},    (12)

and the final objective function for a new task t+1 as

L(θ) = L_{t+1}(θ) + (λ/2) (θ − θ*_t)^T Σ^{-1}_t (θ − θ*_t).    (13)

Similar to Kirkpatrick et al. (2017) and Ritter et al. (2018), we add the hyperparameter λ to control the influence of the regularization term on the overall loss, i.e., it controls how to weigh the importance of the previous tasks against the most recent task.
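To make the pieces above concrete, the following is a minimal sketch of the method under the assumption that the Hessian-vector-product is obtained with reverse-mode automatic differentiation (the double-backward form of Pearlmutter's trick) rather than finite differences, using PyTorch autograd. All helper names are illustrative; parameters are handled as one flattened vector for brevity.

```python
import torch

def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, concatenated into one flat vector."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hessian_vector_product(loss, params, v):
    """Hv without forming H: differentiate the scalar <grad(loss), v> (Eq. 9)."""
    g = flat_grad(loss, params, create_graph=True)
    return flat_grad((g * v).sum(), params)

def momentum_direction(grad_history, alpha=0.9):
    """Eq. (10): exponentially weighted moving average of past (flattened) gradients."""
    v = torch.zeros_like(grad_history[0])
    for g in grad_history:
        v = alpha * v + (1 - alpha) * g
    return v

def top_eigenvector(loss_fn, params, n_iter=20):
    """Alternative direction: power iteration for the leading Hessian eigenvector,
    each step costing one Hessian-vector-product."""
    v = torch.randn(sum(p.numel() for p in params))
    for _ in range(n_iter):
        hv = hessian_vector_product(loss_fn(), params, v)
        v = hv / (hv.norm() + 1e-12)
    return v

def curvature_estimate(loss, params, v):
    """Eq. (11): diagonal curvature estimate C_t = diag(|Hv|), stored as a vector."""
    return hessian_vector_product(loss, params, v).abs()

def regularized_loss(task_loss, params, theta_star, precision, lam):
    """Eq. (13), with the accumulated precision of Eq. (12) kept as a flat vector;
    `theta_star` and `precision` must use the same parameter ordering as `params`."""
    theta = torch.cat([p.reshape(-1) for p in params])
    return task_loss + 0.5 * lam * (precision * (theta - theta_star) ** 2).sum()
```

After finishing task t, one would compute h_t = curvature_estimate(...) at θ*_t, add it to the running precision vector (Eq. 12), and store the current flattened parameters as θ*_t; only these two vectors need to be kept.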
One of the main advantages of our approach is its low storage requirement. Following the analysis in Ritter et al. (2018), the Kronecker-factored approximation approach requires that the Hessian approximations for all previous tasks are kept in memory, and the same holds for EWC, as the diagonal approximations of the FIM for all previous tasks are required to learn each new task. Instead, our approach only needs to store two vectors of the same size as the network parameters, independently of the length of the task sequence.

4 EXPERIMENTS

In our experiments, we compare both of our Hessian-free curvature estimates (eigenvector and momentum) to closely related methods, i.e., EWC (Kirkpatrick et al., 2017) and the Kronecker-factored approximation (Ritter et al., 2018). For both EWC and the Kronecker-factored approximation we adapt the implementation from https://github.com/hannakb/KFA. We release the source code of our methods upon publication.

4.1 PERMUTED MNIST

For our first evaluation, we use the widely-used permuted MNIST dataset as presented in Goodfellow et al. (2013) and used in Kirkpatrick et al. (2017) and Ritter et al. (2018). The dataset contains 28 × 28 grey-scale images whose pixels are permuted randomly in order to generate new tasks. Each permutation is a truly new task, since it is unrecognizable from its original. For the evaluation, we perform a hyperparameter search over the following ranges: i) network structure: either 1 layer with 200 hidden units or 2 layers with 100 hidden units each; ii) λ ∈ {1, 2, 3, 10, 20, 30, 100, 300}. We use the ADAM optimizer with a learning rate of 0.001, a momentum of 0.5, and a batch size of 64 over 10 epochs. Figure 1 shows the mean average accuracy over all 50 tasks with the best hyperparameters discovered for each method. While the Kronecker-factored approximation achieves 83.82%, Hessian-free curvature estimation (with the momentum) achieves 62.58% and Hessian-free curvature estimation with the largest eigenvector achieves 61.63%, leading to better results compared to EWC (51.62%) over the last 15 tasks. Even though the Kronecker-factored approximation achieves better performance than our approach, according to Farquhar & Gal (2019) other tasks can be more representative for evaluating continual learning approaches. In fact, Farquhar & Gal (2019) suggest using a specific version of disjoint MNIST, which we evaluate below.
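For reference, permuted-MNIST tasks can be generated by fixing one random pixel permutation per task. The sketch below uses torchvision for illustration only; it is not the evaluation code of this paper, and the function name is hypothetical.

```python
import torch
from torchvision import datasets, transforms

def make_permuted_mnist_tasks(n_tasks, seed=0, root="./data"):
    """Each task applies one fixed random permutation to the 784 pixels of every
    MNIST image; the first permutation is the identity, so task 0 is plain MNIST."""
    g = torch.Generator().manual_seed(seed)
    perms = [torch.arange(28 * 28)] + [torch.randperm(28 * 28, generator=g)
                                       for _ in range(n_tasks - 1)]
    tasks = []
    for perm in perms:
        tf = transforms.Compose([
            transforms.ToTensor(),                               # [1, 28, 28] in [0, 1]
            transforms.Lambda(lambda x, p=perm: x.view(-1)[p]),  # flatten, then permute pixels
        ])
        tasks.append(datasets.MNIST(root, train=True, download=True, transform=tf))
    return tasks
```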
4.2 DISJOINT MNIST

For an evaluation on DisjointMNIST (Ritter et al., 2018) we split MNIST into two tasks: (1) digits '0' to '4' and (2) digits '5' to '9'. For this experiment we use a network with a ten-way classifier, which makes the problem considerably more challenging than in the previous experiment where we used a five-way classifier. Hence, here the classifier learns a strong (bad) prior for the (respective) unseen classes in the datasets. It is more difficult because training on the second split can easily overwrite the parameters of the ten-way classifier for the classes of the first split. We use a simple dense feed-forward network architecture with 2 layers and 100 hidden units in each layer, as well as a batch size of 250, as reported in Ritter et al. (2018). We use 10 epochs and the same ADAM parameters as in the permuted MNIST experiment. This allows a comparison of our results against the Kronecker-factored approximation and EWC.

Following the same evaluation procedure as Ritter et al. (2018), Figure 2a illustrates the result of a hyperparameter search over λ ∈ {10^0, 10^1, . . . , 10^7} for EWC, the Kronecker-factored approximation, and ours (i.e., Hessian-free curvature estimation using either the largest eigenvector or the momentum to estimate v). The results show the trade-off between retaining information on old tasks and the learning accuracy on new tasks. Note that the different scales in λ between our results and those of Ritter et al. (2018) only stem from different implementation details (the results are still comparable). Similar to the permuted MNIST experiment, we see that our approach (using the momentum) outperforms EWC with 91.01% (at λ = 10^6) vs. 86.11%, which is what we expected, as EWC disregards parameter dependencies that are not reflected by the diagonal of the FIM. Surprisingly, our approach is even comparable to the Kronecker-factored approximation (which reaches 94.93%), although our method uses considerably less memory to store information on the importance of parameters. The use of the largest eigenvector, on the other hand, performs poorly compared to the other methods with 72.69% at λ = 10^6.

4.3 SINGLE-HEADED SPLIT MNIST

For the Single-Headed-Split-MNIST task (Farquhar & Gal, 2019) the available digits are split into five groups (i.e., tasks) of two classes each. The classifier (as for permuted MNIST) uses ten outputs, i.e., one for each digit, and the network is trained on each task one after another. In contrast to some other work (Zenke et al., 2017), all tasks share the classifier head instead of having multiple task-specific outputs. Hence, predictions are made over all possible outputs, not only over the outputs of the classes that belong to the most recent task. We use the same network as in the previous experiments (i.e., 2 layers of 100 hidden units each) and a batch size of 64. Figure 2b shows the results after a hyperparameter search over λ. As in the previous experiments, we observe that both of our Hessian-free curvature estimates consistently outperform EWC (Hessian-free with momentum achieves 57.54% and the eigenvector approach 55.36%, while EWC reaches 46.73%) and that the momentum-based variant again comes close to the Kronecker-factored approximation (which is at 57.2% at the end).
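The disjoint and single-headed split settings only differ in how the label set is partitioned; both keep a single shared ten-way head. A minimal sketch of such a split is given below, again with torchvision purely for illustration (the .targets attribute is assumed to be available in the installed torchvision version).

```python
from torch.utils.data import Subset
from torchvision import datasets, transforms

def make_split_mnist_tasks(label_groups, root="./data", train=True):
    """Split MNIST into disjoint tasks, e.g. [[0,1,2,3,4],[5,6,7,8,9]] for
    DisjointMNIST or [[0,1],[2,3],[4,5],[6,7],[8,9]] for single-headed split
    MNIST. All tasks keep the original 10-way labels, so one shared classifier
    head is used throughout training."""
    base = datasets.MNIST(root, train=train, download=True,
                          transform=transforms.ToTensor())
    tasks = []
    for group in label_groups:
        idx = [i for i, y in enumerate(base.targets.tolist()) if y in group]
        tasks.append(Subset(base, idx))
    return tasks
```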
5 RELATED WORK

Related work in the field of catastrophic forgetting is mainly driven by regularization methods, rehearsal methods, and dynamic architecture methods.

Regularization Methods. Elastic Weight Consolidation (Kirkpatrick et al., 2017) measures the distance between the network weights for the current task and the weight state of previous tasks, and applies a quadratic penalty weighted by a diagonal approximation of the Fisher information matrix to ensure that the new weights are not too far from the old weights. EWC only penalizes important parameters, while the parameters that have no influence on the performance of previous tasks are allowed to change freely. Similar approaches have been proposed by Aljundi et al. (2018) and Lee et al. (2017); the main difference is how the importance of parameters for previous tasks is approximated. However, all these approaches have limited performance, as they do not consider interactions between the parameters. Instead of using the diagonal of the Fisher information matrix, Ritter et al. (2018) apply a Kronecker-factored approximation of the Hessian. This leads to strong improvements over EWC. This approach is most similar to ours, as it attempts to capture second-order parameter interactions to regularize parameter change. The main difference to our method is the usage of the Kronecker factorization to store the Hessian in a compact way, while we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset.

Rehearsal Methods. Rehearsal methods attempt to reduce catastrophic forgetting by replaying examples of previous tasks when learning a new task. A first approach is to learn not only the actual task at hand but also the distribution of the training data. When a new task is learned, artificial samples from this learned distribution are added to the current set of training data. Typically this is done by adding a Variational Autoencoder (Kamra et al., 2017). Recent approaches (Shin et al., 2017) also employ generative adversarial networks with promising results. A second, more direct approach preserves a subset of the training data for each task in an episodic memory and reuses it to constrain the learning process of future tasks (Lopez-Paz et al., 2017). However, while being effective in reducing catastrophic forgetting in general, both approaches have shortcomings, as the inherent problem of catastrophic forgetting is simply shifted to a scalability problem. In generative approaches, samples for all previous tasks must be replayed each time to preserve old parameter states, which becomes problematic as the number of tasks increases. Similarly, for the direct approach, even if only a small subset of examples for each task is preserved, we can still end up with a large dataset as the number of tasks increases.

Dynamic Architecture Methods. Another way to address catastrophic forgetting is to incrementally increase the capacity of the architecture. Approaches vary mainly in whether new capacity is added for each new task by default, or whether this is determined by a metric. Progressive Neural Networks (Rusu et al., 2016) add a new network for each new task, and each new network is connected via lateral connections to the old ones to allow transfer from previous tasks to the current one. This avoids catastrophic forgetting by design, but as each new task requires a new network this approach does not scale well with the number of tasks. In contrast to Progressive Nets, other approaches only add capacity when it is necessary. Part & Lemon (2016) present an approach based on Self-Organizing Maps, which employs a similarity metric to determine whether a new node should be added to the network. Similar to this, Xiao et al. (2014) start out with a classifier with one super class and add new parameters based on an error signal. Depending on the error made by the current model, either only the final layer is extended by another output dimension, or a whole new sub-network is added as a subclass. Yoon et al. (2018) use the combination of sparsity and breadth-first search to determine which parameters should be retrained for the current task. If the features learned so far are not able to represent the new task, more capacity is added dynamically (as in Xiao et al. (2014)).
While these methods suffer significantly less from scalability issues, their main disadvantage lies in the fact that they impose very stringent architectural constraints, which cannot easily be transferred to an arbitrary existing model.

6 CONCLUSION

This paper addressed catastrophic forgetting within a continual learning framework, where the ultimate goal lies in the identification of the network weights that are important to previously learned tasks. While previous work in this direction is either limited in the achievable accuracy (as it only considers the diagonal of the Fisher Information Matrix) or limited in the number of tasks (as it needs to store information that grows linearly with the number of tasks), we set out to provide a first approach that uses second-order parameter dependencies with constant space complexity. We exploit the fact that most regions in the loss surface are flat, which allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the parameters when we train the network over a long task sequence. We evaluated our algorithm on three widely used benchmarks and compared it with the state of the art. Our results show that we consistently outperform EWC across all benchmarks and that we are better than or at least on par with the Kronecker-factored approximation, while our method at the same time requires significantly less memory.
1. What is the main contribution of the paper regarding neural network training in continual learning?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods such as EWC and Kronecker-factored online Laplace?
3. How does the paper address the issue of calculating the Hessian in continual learning, and how effective is this solution?
4. What are some concerns regarding the presentation and explanation of the paper's content, particularly in the introduction and prior work section?
5. How does the paper connect its ideas with MAML, and what are the differences between continual learning and meta-learning settings?
6. Can you provide any suggestions for improving the performance of the proposed method and making it more competitive with other recent approaches?
Review
1. Summary: The paper considers neural network training in the continual learning setting -- data arrive sequentially and we cannot revisit past data. The paper proposes an approximate Laplace's method, in which the Hessian of the log likelihood of the data is approximated by some form of Hessian-vector product (? - I will get to this question mark below). The paper considers some benchmark continual learning datasets and compares the proposed approach to EWC and Kronecker-factored online Laplace. The performance of the proposed approach is similar to that of EWC and worse than Kronecker-factored Laplace in most cases. Another sales pitch that the paper brings up a lot is the low space complexity, however this benefit has not been fully demonstrated, given the small-scale network/experiments.

2. Opinion and rationales: I'm leaning towards "strong reject" as I think the presentation needs another round of polishing and that the technical contributions need to be clarified / unpacked. I explain my thinking below.

a. The presentation/explanation/flow are not clear. The abstract does not read well. For example: "This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian." This sentence makes it sound like current approaches are tractable, so what is this paper trying to address? The technical summary is also not precise; the Hessian-free methods used in the paper are for computing Hessian-vector products, not the actual Hessian. The introduction motivates the continual learning problem using generalisation of neural networks leading to the need for multi-task learning; however multi-task learning is not scalable given the large number of tasks and thus we need to learn sequentially. However, I find this motivation not clear: if multi-task learning and its scalability issue are the reasons why we need continual learning, with the scale of the experiments considered in the paper, wouldn't it always be more beneficial to use multi-task learning instead of continual learning? The prior work section is also not clear, in my opinion. The paper starts out by describing EWC as Bayesian updates and cites MacKay (1992), then talks about the Kronecker-factored Laplace approximation as "address this shortcoming by adopting the Bayesian online learning approach", as if these methods are very different, while in fact these methods are variants of the Laplace approximation with different ways to approximate the Hessian. The issues described in section 2.2, "two problems that stem from eq 1", are not very clear, for example, "without storing the information from all previous tasks there is no easy solution to update the posterior" (?). I would follow the presentation/explanation in Ritter et al (2018), Huszar (2018) [a note on the quadratic penalty of EWC] and section 5 of the variational continual learning paper (Nguyen et al 2018) to provide a more succinct connection between these methods. The connections between this work and MAML in section 3 are not clear to me. The continual learning and meta learning settings are also quite different.

b. The technical contribution is not clear and, if correct, is of limited novelty. What is not clear from reading section 3 is what quantity is being approximated, and at what point a Hessian-vector product appears such that we can use Hessian-free methods to approximate it.
The paper talks about a flat loss surface and sampling a small subset of the Hessian -- I'm not sure I understand these connections. In eq 11, the paper replaces the Hessian values with the results of the Hessian-vector-product approximations -- this seems very odd to me, especially in terms of semantics and units; the Hessian and Hessian-vector-products are two very different things. Again, it is perhaps just me not understanding what is being approximated in the first place. The technical contribution of this paper is thus limited: using Hessian-free methods to approximate Hessian-vector products in the continual learning context.

c. The performance of the proposed method is not super exciting. Pragmatically speaking, it is not clear why practitioners should be using this in the near future, given that Kronecker-factored Laplace works and scales well in practice and there are a plethora of other recent methods (e.g. VCL) that are also developed from the Bayesian principle and work much better than EWC.

3. Minor details:
a. In eq 1, the denominator should be p(D_{t+1} | D_{1:t}).
b. Figs 1 and 2: I would use the same colour scheme throughout to be consistent.
ICLR
Title Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates Abstract Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work. N/A Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work. 1 INTRODUCTION The main goal of machine learning is the ability to generalize from the given training data to unseen examples. However, in practice the achievable degree of generalization is limited. While in the ideal case an end-to-end system learns complex functions from minimum input, it is often necessary to introduce a certain amount of prior knowledge. Such prior knowledge operates as an inductive bias and therefore has a constraining effect on the hypothesis space, i.e., the set of all possible functions that can be learned by the learning algorithm (Mitchell, 1980). While this sounds counter-intuitive such a reduction of the hypothesis space may lead to better generalization properties in practice (Mitchell, 1980). Hence, instead of eliminating the bias to increase generalization (as suggested by Hessel et al. (2019)), a promising direction of research tries to identify and introduce the right form of it. 
We can achieve this by limiting the functions that can be expressed by the learning algorithm or by introducing bias to the learning algorithm itself. Simple examples include the choice for linear activations to only allow approximations of linear functions or to add a regularization term to the objective function. Similar to this, we can also improve generalization by training on different tasks (Baxter, 2000) from a task family at the same time or by introducing auxiliary tasks (Jaderberg et al., 2017). This is commonly known as multitask learning and has shown to not only improve generalization properties but also to be more sample-efficient (Baxter, 2000). Due to the limited availability of data for training we need a well-tuned inductive bias. Hence, such choices are crucial for the final real-world performance of any machine learning algorithm. While multitask learning is a great tool to improve generalization and to reduce the amount of samples that are necessary to learn a family of tasks it is still limited in its scalability. Both the amount of tasks that can be learned and the amount of data required to learn them are strongly limiting factors. Consider, for instance, a reinforcement learning setup where an agent learns different tasks from interacting with in an environment. In practice we are limited in storing the data for all relevant tasks required to train a model on all tasks jointly. However, learning those tasks sequentially is also not an option as gradient descent and its variants (which are the dominant learning approaches for neural networks) do not consider the importance of individual parameters for early tasks. This destructive learning is commonly termed as catastrophic forgetting (McCloskey & Cohen, 1989). While in the context of fine-tuning and pre-training (Erhan et al., 2009) this does not bear a problem (as the goal is not to reuse the previous parameter state, but rather to optimize the learning process for some target task) it becomes important in multitask problems where we wish to maximize generalization and sample-efficiency. It is also critical in the continual learning framework, where the parameters of a neural network are optimized over multiple datasets (representing different tasks) provided sequentially, which are not available at later time. The goal is hence to retain all (or most) of the important parameters for previous tasks and to be able to build-up on this knowledge for an arbitrary number of future tasks. Thus, the scalability of learning would only be limited by the capacity of the neural network but not by the properties of the training method. The Bayesian framework (Kirkpatrick et al., 2017; Ritter et al., 2018) is a promising approach to address catastrophic forgetting. The information about former tasks is condensed in a prior, which not only preserves the knowledge about tasks but also introduces an inductive bias based on the learned tasks. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is a simple yet efficient way to reduce catastrophic forgetting. EWC approximates the prior with a Gaussian centered around the optimized network parameters for previous tasks, where the diagonal precision is given by the diagonal approximation of the Fisher Information Matrix (FIM). 
This approach has two significant downsides: i) each new task adds a new regularization term that penalizes changes of parameters that are relevant to previous tasks; and ii) the diagonal approximation of the FIM assumes independent network parameters, which leads to information loss with a growing number of tasks. Ritter et al. (2018) extend EWC but still approximate the prior from previous tasks using a Gaussian. They devise a block-diagonal approximation for the prior from the older tasks by defining a quadratic approximation whose solution requires to calculate the Hessian. The Hessian is in turn approximated by the block-diagonal Kronecker-factored approximation. In this work we propose an alternative way of calculating the Hessian, based on well established Hessian-free (Schraudolph, 2002; Pearlmutter, 1994) methods to estimate curvature information of the network parameters. In contrast to Ritter et al. (2018), we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the network parameters when we train the network over a long sequence of tasks. We evaluate our algorithm on permuted MNIST (Kirkpatrick et al., 2017), disjoint MNIST (Ritter et al., 2018) and single-headed disjoint MNIST (Farquhar & Gal, 2019), and compare with state of the art approaches. Our results show that we consistently outperform EWC across all tasks and that we are en par with Ritter et al. (2018) on the disjoint tasks, while our method has significantly lower space complexity compared to both EWC and Kronecker-factored approximation. The remainder of this paper is structured as follows. Section 2 provides background on continual learning, EWC, and Kronecker-factored Laplace approximation. Section 3 describes our method in detail. Section 4 shows the efficiency of our approach and compares it against state of the art on a variety of well-known task sequences. Section 5 discusses related work. Section 6 concludes. 2 BACKGROUND 2.1 CONTINUAL LEARNING AND CATASTROPHIC FORGETTING In the continual learning framework the parameters θ ∈ Rn of a neural network are optimized over multiple datasets D1, . . . ,Dt, . . . ,DT . These individual datasets become available to the training algorithm one after another and usually cannot be revisited at a later time. The goal is to achieve a high accuracy/performance on the current task (represented by the current dataset Dt) while still preserving (high or most of) the performance for all the previously visited tasks. However, this is usually challenging for neural network models as commonly used gradient-based optimization methods cannot distinguish between important and unimportant parameters for previous tasks. As a consequence parameters that are relevant for previous tasks are modified (heavily), which leads to performance degradation when the network is used on any of those previous tasks (Rusu et al., 2016). Hence, to address catastrophic forgetting in neural networks we need to retain the parameters that are important for previous tasks while still allowing the network to learn new tasks. However, at the same time we also want the space complexity of the network to be independent of the amount of tasks that were observed so far (and that are about to come). 
This means that learning a new task while retaining high performance on all prior tasks should be possible without adding new parameters or regularization terms for each new task, at least as long sufficient capacity is available. As a plus we want to foster some degree of parameter sharing to enable positive transfer effects, e.g., improved sample-efficiency due to the fact that past experience can be reused. 2.2 ELASTIC WEIGHT CONSOLIDATION (EWC) EWC (Kirkpatrick et al., 2017) is a simple yet efficient approach that meets most of the above mentioned requirements. The key idea is to add a penalty when parameters that are important for previous tasks are about to be changed while parameters that are less relevant for previous tasks do not receive a penalty. EWC uses a quadratic penalty term that is derived from a Bayesian formulation of the problem (where all the information of all previous tasks is condensed in the prior) as follows: p(θ|D1:t+1) = p(Dt+1|θ)p(θ|D1:t) p(Dt+1) , (1) where p(θ|D1:t+1) and p(θ|D1:t) are the posterior and prior distributions over the parameters θ of the network and D1, . . . ,Dt,Dt+1 are the datasets corresponding to the respective tasks. If we want to learn a new task we update the posterior by conditioning it on the newly available data Dt+1. However, we have to address two problems that stem from Equation 1. First, maintaining the full posterior over all previous datasets is usually intractable (Ritter et al., 2018; Opper & Winther, 1998) and we instead need to approximate it. Second, without storing the information from all previous tasks there is no easy solution to update the posterior. The first problem can be addressed by approximating the posterior with a Gaussian (MacKay, 1992): p(θ|D1:t) ∼ N (µt,Σt). (2) With two tasks A and B and their datasets DA and DB , for the posterior p(θ|DA) the mean µA is given by the solution for the previous task θ∗A, and the precision Σ −1 A , i.e., the inverse of the covariance, by the diagonal of the Fisher information matrix (FIM) F . Learning tasks A and B consecutively then results in the following objective function: L(θ) = LB(θ) + λ 2 (θ − θ∗A)TF (θ − θ∗A), (3) where LB(θ) is the loss depending on the current data DB , and λ is a hyperparameter that controls the influence of the regularization term. At this point we only need to store the previous weights and the diagonal approximation of the FIM for the previous task. For another task C we store a separate FIM for that new task together with the solution for task B θ∗B , and add another regularization term: L(θ) = LC(θ) + λ 2 (θ − θ∗A)TFA(θ − θ∗A) + λ 2 (θ − θ∗B)TFB(θ − θ∗B). (4) 2.3 KRONECKER-FACTORED LAPLACE APPROXIMATION The diagonal approximation of the FIM assumes the parameters to be independent, which is rarely the case in practice. Ritter et al. (2018) address this shortcoming by adopting the Bayesian online learning approach (Opper & Winther, 1998). As the prior p(θ|D1:t) preserves all the information about the previous tasks recursively using the previous posterior as the next prior makes it possible to find a MAP-estimate θ∗ = arg maxθ p(θ|D1, . . . ,Dt+1) sequentially. Due to the fact that the posterior conditioned on all previous tasks is intractable, a parameterization of the posterior p ( θ|Dt+1, w(t) ) with parameters w(t) is introduced. To update this parametric approximate posterior requires two steps: 1. 
Update Step: in an update step the old approximative posterior p(θ|w(t)) is used to perform an update using the Bayesian rule (see Ritter et al. (2018) for a detailed analysis): p(θ|Dt+1, w(t)) = p(Dt+1|θ)p(θ|w(t))∫ dθ′p(Dt+1|θ′)p(θ′ |w(t)) (5) 2. Projection Step: In a projection step the new posterior p(θ|Dt+1, w(t)) is projected onto the same parametric family as p ( θ|w(t) ) (as they are usually not from the same parametric family): q(θ|w(t+ 1)) ≈ p(θ|Dt+1, w(t)). (6) Similar to EWC the update step can be approximated by a Gaussian approximate posterior: L(θ) = Lt+1(θ) + 1 2 (θ − µt)TΣ−1t (θ − µt). (7) As before, the mean µt is given by the solution for the previous task θ∗t . Accordingly, the parameters w(t) are given by w(t) = {µt,Σ−1t }. The core improvement that this framework offers is encapsulated in the projection step: instead of adding a new regularization term for each new task, Σ−1t is instead projected to Σ−1t+1 which then maintains information about all tasks up to task t + 1. Ritter et al. (2018) realize this by computing the Hessian around the most recent solution θ∗t+1, and adding it to the Hessians from all previous solutions: Σ−1t+1 = Ht+1(θ ∗ t+1) + Σ −1 t , where Ht+1(θ ∗ t+1) = − ∂2p(Dt+1|θ) ∂θ∂θ ∣∣∣∣ θ=θ∗t+1 (8) This way information about previous tasks can be preserved while still limiting the storage requirements to a constant number of parameters. However, in practice this approach needs to store a set of parameters per task. 3 HESSIAN-FREE CURVATURE ESTIMATION Previous approaches identify the most important parameters for each previous task and then prevent the modification of those parameters during the training of a new task. EWC uses the diagonal of the FIM while Ritter et al. (2018) use a Hessian approximated using the block-diagonal Kroneckerfactored approximation. We address the same problem but approach it differently. We build upon the intuition of metalearning in general and from the model-agnostic meta learning (MAML) algorithm (Finn et al., 2017) in particular. MAML identifies model parameters that (upon modification) lead to faster learning for all tasks in a given task distribution. By defining a meta-learning objective and using available data for all tasks in the task distribution it learns network weights that will lead to faster learning and generalization in new tasks, if being used as a starting point for the optimization. In our case, apart from the fact that we assume no access to samples from previous tasks, we invert the intuition behind MAML: we identify model parameters that are sensitive to changes in each task but instead of tuning these parameters to be a good starting point for the fine-tuning of all tasks, we penalize large changes to them, as this will deteriorate the performance of previous tasks. In order to identify the important network parameters, i.e., parameters that upon being changed lead to a big change in the loss, we also use the Hessian matrix, but in contrast to the Kronecker-factored Laplace approximation we exploit the fact that most regions of the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian as this subset already holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. In essence, we need to estimate directions with high curvature as at those points we find the important weights of the network. However, any computation involving the exact Hessian for larger networks is infeasible in practice. 
Hence, it is key to find a good approximation of the Hessian while still preserving enough curvature information to determine which parameters are crucial for the previous tasks. Fortunately, as most regions in the loss surface are flat it is sufficient to only extract information about the few regions that exhibit a high curvature. Thus, instead of computing the full Hessian we compute a Hessian-vector-product, which is similar to sampling the curvature in the direction of a given vector. There are two important questions to answer here: (i) how to efficiently calculate the Hessian-vector product, and (ii) how to chose a suitable vector/direction. An efficient Hessian-vector-product calculation was initially presented in Pearlmutter (1994) and has subsequently been used for several Hessian-free (also called truncated-Newton) optimization methods (Schraudolph, 2002; Martens, 2010). The key idea is that the Hessian is not calculated explicitly. Instead, for a given vector v the Hessian-vector-product Hv is directly computed using finite differences (Martens, 2010) at the cost of a forward- and a backward-pass through the network (e.g., using algorithms such as back-propagation). The Hessian-vector-product is then calculated by (see Pearlmutter (1994) for the implementation details): Hv = lim →0 ∇f(θ + v)−∇f(θ) = ∂ ∂ ∇f(θ + v) ∣∣∣∣ =0 (9) Given that the Hessian-vector-product can be computed as described above, the second question is how to choose the vector v that defines the direction in which we sample the curvature. Inspired by Stochastic Meta-Descent (Bray et al., 2004a;b), which uses the combination of the momentum and a Hessian-vector-product to estimate gradient directions with low curvature, our first choice to select the vector v is to use the momentum. In our case the momentum is calculated using the exponentially weighted moving average of the past gradients: vt+1 = αvt + (1− α)∇f(θ), (10) where α controls the discount of older observations. The momentum is a sensible choice for the vector as it holds information about the parameters that have been changed the most during the training. The assumption is then that exactly these parameters will be among the most important ones for the most recent task. As such, if the parameters for the previous task θ∗t−1 are at an optimum, any change to important parameters results in a performance drop. An alternative to the momentum is the eigenvector corresponding to the largest eigenvalue. This eigenvector represents the direction of highest curvature, and therefore by definition includes the most important parameters for the most recent task. A simple way to compute this eigenvector is to use the power method (Wilkinson, 1965), which entails computing a Hessian-vector-product. Both versions result in a vector which maintains critical information about second-order interactions. From this vector we construct a positive semidefinite matrix by placing its absolute values as the entries of a diagonal matrix. Let ht be the resulting vector of the Hessian-vector-product Hv for task t, then our curvature estimate Ct is given as: Ct = |ht,1| . . . |ht,n| , (11) with n the number of network parameters. The projection step then is defined as: Σ−1t = Ct + Σ −1 t−1, (12) and the final objective function for a new task t+ 1 as: L(θ) = Lt+1(θ) + λ 2 (θ − θ∗t )TΣ−1t (θ − θ∗t ) (13) Similar to Kirkpatrick et al. (2018) and Ritter et al. 
(2018) we add a hyperparameter λ to control the influence of the regularization term on the overall loss, i.e., that controls how to weigh the importance of the previous tasks over the most recent task. One of the main advantages of our approach is the low storage requirements. Following the analysis in Ritter et al. (2018), Kronecker-factor approximation approach requires that all Hessians for previous tasks are kept in memory and the same holds for EWC, as the diagonal approximation of the FIM for all previous tasks are required to learn each new task. Instead, our approach only needs to store two vectors with the same size as the network parameters independently of the size of the task sequence. 4 EXPERIMENTS In our experiments, we compare both of our Hessian-free curvature estimations (eigenvector and momentum) to closely related methods, i.e.. EWC (Kirkpatrick et al., 2017) and Kronecker-factored approximation (Ritter et al., 2018). For both EWC and Kronecker-factored approximation we adapt the implementation from https://github.com/hannakb/KFA. We release the source code of our methods upon publication. 4.1 PERMUTED MNIST For our first evaluation, we utilize the widely-used permutedMNIST dataset as presented in Goodfellow et al. (2013) and used in Kirkpatrick et al. (2017) and Ritter et al. (2018). The dataset contains 28× 28 grey-scale images, that are permuted randomly in order to generate new tasks. Each permutation is a truly new task, since it is unrecognizable from its original. For the evaluation, we perform a hyperparameter search with the following range of parameters: i) network structure: either 1 layer with 200 hidden units or 2 layers with 100 hidden units each; ii) λ ∈ [1, 2, 3, 10, 20, 30, 100, 300]. We use the ADAM optimizer with a learning rate of 0.001, a momentum of 0.5, and a batch size of 64 over 10 epochs. Figure 1 shows the mean average accuracy over all 50 tasks with the best hyperparameters discovered for each method. While Kronecker-factor approximation achieves 83.82%, Hessian-free curvature estimation achieves 62.58% and Hessian-free curvature estimation with the largest eigenvector achieves 61.63%, leading to better results compared to EWC (51.62%) for the last 15 tasks. Even though Kronecker-factored approximation achieves better performance compared to our approach, according to Farquhar & Gal (2019) in order to evaluate continual learning approaches other tasks can be more representative. In fact, Farquhar & Gal (2019) suggest to use a specific version of disjointMNIST which we evaluate below. 4.2 DISJOINTMNIST For an evaluation according to the DisjointMNIST (Ritter et al., 2018) we split MNIST into two tasks: (1) letters ’0’ to ’4’ and (2) letters ’5’ to ’9’. For this experiment we use a network with a ten-way classifier which makes the problem considerably more challenging than in the previous experiment where we used a five-way classifier. Hence, here the classifier learns a strong (bad) prior for the (respective) unseen classes in the datasets. It is more difficult as training on the second split can easily overwrite the parameters of the ten-way classifiers for the classes of the first split. We use a simple dense feed-forward network architecture with 2 layers and 100 hidden units in each layer as well as a batch size of 250 as reported in Ritter et al. (2018). We use 10 epochs and the same Adam parameters as in the PermutedMNIST experiment. This allows a comparison of our results against Kronecker-factored approximation and EWC. 
Following the same evaluation procedure from Ritter et al. (2018) Figure 2a illustrates the result of a hyperparameter search over λ ∈ [100, 101, . . . , 107] for EWC, Kronecker-factored approximation, and ours (i.e., Hessian-free curvature estimation using either the largest eigenvector or the momentum to estimate v). The results show the balancing of retaining information on old tasks over the learning accuracy on new tasks. Note that the different scales in λ between our results and that from Ritter et al. (2018) only stem from different implementation details (but the results are still comparable). Similar to the PermutedMNIST experiment, we see that our approach (using the momentum) outperforms EWC with 91.01% (at λ = 106) vs 86.11% (which is what we expected as EWC disregards parameter dependencies that are not reflected by the diagonal of the FIM). Surprisingly, our approach is even comparable to the Kronecker-factored approximation (which reaches 94.93%) although our method uses considerably less storage memory to store information on the importance of parameters. The use of the largest eigenvector on the other hand performs poorly compared to the other methods with 72.69% for λ = 106. 4.3 SINGLE-HEADED SPLIT MNIST For the Single-Headed-Split-MNIST task (Farquhar & Gal, 2019) the available digits are split into five groups (i.e., tasks) of two classes each. The classifier (as for the PermutedMNIST) uses ten outputs, i.e., one for each digit, and the network is trained on each task one after another. In contrast to some other work (Zenke et al., 2017) all the tasks share the classifier head instead of having multiple task-specific outputs. Hence, the predictions are made for all possible outputs, not only for the outputs of classes that belong the most recent task. We use the same network as in the previous experiments (i.e., 2 layers of 100 hidden units each) and a batch of 64. Figure 2b shows the results after a hyperparameter search over λ. As in the previous experiments we can observe that both of our Hessian-free curvature estimations consistently outperform EWC (Hessian-free with momentum achieves 57.54% and the eigenvector approach 55.36% while EWC reaches 46.73%) and that the momentum-based variant even comes again close to the Kronecker-factored approximation (which is at 57.2% at the end). 5 RELATED WORK Related work around the field of catastrophic forgetting is mainly driven by regularization methods, rehearsal methods, and dynamic architecture methods. Regularization Methods. Elastic Weight Consolidation (Kirkpatrick et al., 2017) measures the distance between the network weights for the current task and the weight state of previous tasks, and applies a quadratic penalty weighted by a diagonal approximation of the Fisher information matrix to ensure that the new weights are not too far from the old weights. EWC only penalizes important parameters while the parameters that have no influence on the performance of previous tasks are allowed to change freely. Similar approaches have been proposed by Aljundi et al. (2018) and Lee et al. (2017). The main difference is how the importance of parameters for previous tasks are approximated. However, all these approaches have limited performance as they do not consider interactions between the parameters. Instead of using the diagonal of the Fisher information matrix (Ritter et al., 2018) apply a Kronecker-factored approximation of the Hessian. This leads to strong improvements over EWC. 
This approach is most similar to ours, as it attempts to capture second-order parameter interactions to regularize parameter change. The main difference to our method is the usage of the Kronecker factorization to store the Hessian in a compact way while we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. Rehearsal Methods. Rehearsal methods attempt to reduce catastrophic forgetting by replaying examples of previous tasks when learning a new task. A first approach here is to not only learn the actual task at hand but also the distribution of the training data. When a new task is learned, artificial samples from this learned distribution are added to the current set of training data. Typically this is done by adding a Variational Autoencoder (Kamra et al., 2017). Recent approaches (Shin et al., 2017) also employ generative adversarial networks with promising results. A second, more direct approach preserves a subset of the training data for each task in an episodic memory and reuses it to constrain the learning process of future tasks (Lopez-Paz et al., 2017). However, while being effective in reducing catastrophic forgetting in general, both approaches have shortcomings as the inherent problem of catastrophic forgetting is simply shifted to a scalability problem. In generative approaches samples for all previous tasks must be replayed each time to preserve old parameter states and as the number of tasks increases this becomes problematic. Similarly for the direct approach, even if only a small subset of examples for each task is preserved, still we can end up with a large dataset as the number of tasks increases. Dynamic Architecture Methods. Another way to address catastrophic forgetting is to incrementally increase the capacity of the architecture. Approaches vary mainly in whether new capacity is added for each new task by default, or whether this is determined by a metric. Progressive Neural Networks (Rusu et al., 2016) add a new network for each new task and each new network is connected via lateral connections to the old ones to allow for transfer from previous tasks to the current one. This avoids catastrophic forgetting by design but as each new task requires a new network this approach does not scale well with the number of tasks. In contrast to Progressive Nets other approaches only add capacity when it is necessary. Part & Lemon (2016) present an approach based on Self-Organizing Map, which employs a similarity metric to determine whether a new node should be added to the network. Similar to this, Xiao et al. (2014) start out with a classifier with one super class and add new parameters, based on an error signal. Depending on the error made by the current model, only the final layer is extended by another output dimension, or a whole new sub-network is added as a subclass. Yoon et al. (2018) use the combination of sparsity and breadth-first-search to determine which parameters should be retrained for the current task. If the features learned so far are not able to represent the new task, more capacity is added dynamically (as in Xiao et al. (2014)). 
While these methods suffer significantly less from scalability issues, their main disadvantage lies in the fact that they impose very stringent architectural constraints, which cannot easily be transferred to an arbitrary existing model. 6 CONCLUSION This paper addressed catastrophic forgetting within a continual learning framework where the ultimate goal lies in the identification of the network weights that are important to previously learned tasks. While previous work in this direction is either limited in the achievable accuracy (as it only considers the diagonal of the Fisher Information Matrix) or limited in the number of tasks (as it needs to store information that grows linearly with the number of tasks), we set out to provide a first approach that uses second-order parameter dependencies with constant space complexity. We exploit the fact that most regions in the loss surface are flat, which allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the parameters when we train the network over a long task sequence. We evaluated our algorithm on three widely used benchmarks and compared it with the state of the art. Our results show that we consistently outperform EWC across all benchmarks and that we are better than or at least on par with Kronecker-factored approximation, while our method at the same time requires significantly less memory.
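As a concrete illustration of the Hessian-vector product used above to probe second-order information, a minimal sketch with PyTorch autograd follows; the choice of the vector v (e.g., the momentum vector or an eigenvector estimate) and all names are illustrative assumptions, not the paper's exact implementation:

    import torch

    def hessian_vector_product(loss, params, v):
        # First backward pass: gradients of the loss w.r.t. the parameters
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Inner product <grad, v>, differentiated again to obtain H v
        dot = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(dot, params)
        # Absolute values of H v can then serve as per-parameter importance
        return [h.abs() for h in hv]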
1. What is the main contribution of the paper in tackling catastrophic forgetting? 2. What are the strengths and weaknesses of the proposed method compared to previous approaches like EWC? 3. Do you have any concerns regarding the low-rank approximation to the Hessian and its impact on performance? 4. How does the reviewer assess the effectiveness of the proposed method based on the results presented in Figures 1 and 2? 5. Are there any limitations or areas for improvement in the proposed approach that the reviewer would like to highlight?
Review
Review This paper proposes a method for tackling catastrophic forgetting. Similar to previous methods such as EWC (Kirkpatrick et al., 2017), they penalize parameter updates that align with the Fisher information matrix of the previous tasks. This prevents the model from changing the previously useful parameters. They try to match the results of previous Fisher-based methods but at a lower computational cost. They propose using a low-rank approximation to the Hessian using Hessian-vector products with two types of vectors: the momentum velocity vector and the largest eigenvector of the Hessian. Then they build a diagonal approximation to the Hessian. Cons: - Eq 11, there is no justification for forming a curvature matrix by putting the absolute value of the Hessian-vector product with the proposed vectors on the diagonal. Particularly considering the largest eigenvalue, Hv will be a vector of zeros with exactly one 1. This does not seem to be a good estimate of the Hessian. - Fig 1, the proposed method seems to perform poorly compared to the KFAC-based method on permuted MNIST. - Figure 2 mainly compares to EWC as a baseline. In Farquhar & Gal (2019), other methods such as VGR perform significantly better. The proposed method is not competitive with the state of the art.
ICLR
Title Learning to Represent Edits Abstract We introduce the problem of learning distributed representations of edits. By combining a “neural editor” with an “edit encoder”, our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem. 1 INTRODUCTION One great advantage of electronic storage of documents is the ease with which we can edit them, and edits are performed in a wide variety of contexts. For example, right before a conference deadline, papers worldwide are finalized and polished, often involving common fixes for grammar, clarity and style. Would it be possible to automatically extract rules from these common edits? Similarly, program source code is constantly changed to implement new features, follow best practices and fix bugs. With the widespread deployment of (implicit) version control systems, these edits are quickly archived, creating a major data stream that we can learn from. In this work, we study the problem of learning distributed representations of edits. We only look at small edits with simple semantics that are more likely to appear often and do not consider larger edits; i.e., we consider “add definite articles” rather than “rewrite act 2, scene 3.” Concretely, we focus on two questions: i) Can we group semantically equivalent edits together, so that we can automatically recognize common edit patterns? ii) Can we automatically transfer edits from one context to another? A solution to the first question would yield a practical tool for copy editors and programmers alike, automatically identifying the most common changes. By leveraging tools from program synthesis, such groups of edits could be turned into interpretable rules and scripts (Rolim et al., 2017). When there is no simple hard rule explaining how to apply an edit, an answer to the second question would be of great use, e.g., to automatically rewrite natural language following some stylistic rule. We propose to handle edit data in an autoencoder-style framework, in which an “edit encoder” f∆ is trained to compute a representation of an edit x− → x+, and a “neural editor” α is trained to construct x+ from the edit representation and x−. This framework ensures that the edit representation is semantically meaningful, and a sufficiently strong neural editor allows this representation to not be specific to the changed element. We experiment with various neural architectures that can learn to represent and apply edits and hope to direct the attention of the research community to this new and interesting data source, leading to better datasets and stronger models. ∗Work done as an intern in Microsoft Research, Cambridge, UK. Briefly, the contributions of our paper are: (a) in Sect. 2, we present a new and important machine learning task on learning representations of edits; (b) we present a family of models that capture the structure of edits and compute efficient representations in Sect. 3; (c) we create a new source code edit dataset, and release the data extraction code at https://github.com/Microsoft/msrc-dpu-learning-to-represent-edits and the data at http://www.cs.cmu.edu/~pengchey/githubedits.zip.
(d) we perform a set of experiments on the learned edit representations in Sect. 4 for natural language text and source code and present promising empirical evidence that our models succeed in capturing the semantics of edits. 2 TASK In this work, we are interested in learning to represent and apply edits on discrete sequential or structured data, such as text or source code parse trees (existing editing systems, e.g. the grammar checker in text editors and the code refactoring modules in IDEs, are powered by domain-specific, manually crafted rules, while we aim for a data-driven, domain-agnostic approach). Figure 1 gives a graphical overview of the task, described precisely below. Edit Representation Given a dataset of edits {x−^(i) → x+^(i)}_{i=1}^N, where x−^(i) is the original version of some object and x+^(i) its edited form (see upper half of Figure 1 for an example), our goal is to learn a representation function f∆ that maps an edit operation x− → x+ to a real-valued edit representation f∆(x−, x+) ∈ R^n. A desired quality of f∆ is for the computed edit representations to have the property that semantically similar edits have nearby representations in R^n. Having distributed representations also allows other interesting downstream tasks, e.g., unsupervised clustering and visualization of similar edits from large-scale data (e.g. the GitHub commit stream), which would be useful for developing human-assistance toolkits for discovering and extracting emerging edit patterns (e.g. new bug fixes or emerging “best practices” of coding). Neural Editor Given an edit representation function f∆, we want to learn to apply edits in a new context. This can be achieved by learning a neural editor α that accepts an edit representation f∆(x−, x+) and a new input x′− and generates x′+ (we leave the problem of identifying which edit representation f∆(x−, x+) to apply to x′− as interesting future work). This is illustrated in the lower half of Figure 1. 3 MODEL We cast the edit representation problem as an autoencoding task, where we aim to minimize the reconstruction error of α for the edited version x+ given the edit representation f∆(x−, x+) and the original version x−. By limiting the capacity of f∆'s output and allowing the model to freely use information about x−, we are introducing a “bottleneck” that forces the overall framework to not simply treat f∆(x−, x+) as an encoder of x+. The main difference from traditional autoencoders is that in our setup, an optimal solution requires re-using as much information as possible from x− to make the most of the capacity of f∆. Formally, given a probabilistic editor function Pα such as a neural network and a dataset {x−^(i) → x+^(i)}_{i=1}^N, we seek to minimize the negative log-likelihood loss L = −(1/N) Σ_i log Pα(x+^(i) | x−^(i), f∆(x−^(i), x+^(i))). Note that this loss function can be interpreted in two ways: (1) as a conditional autoencoder that encodes the salient information of an edit, given x−, and (2) as an encoder-decoder model that encodes x− and decodes x+ conditioned on the edit representation f∆(x−, x+). In the rest of this section, we discuss our methods to model Pα and f∆ as neural networks.
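For illustration, a minimal sketch of this training objective in Python style is given below; edit_encoder and editor stand for f∆ and Pα and are illustrative interfaces assumed here, not the exact implementation:

    def training_step(editor, edit_encoder, x_minus, x_plus):
        # f_delta(x-, x+): low-dimensional edit representation (the bottleneck)
        f_delta = edit_encoder(x_minus, x_plus)
        # P_alpha(x+ | x-, f_delta): log-likelihood of the edited version
        log_probs = editor(x_minus, f_delta, target=x_plus)
        # Negative log-likelihood, averaged over the batch
        return -log_probs.mean()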
3.1 NEURAL EDITOR As discussed above, α should use as much information as possible from x−, and hence, an encoder-decoder architecture with the ability to copy from the input is most appropriate. As we are primarily interested in edits on text and source code in this work, we explored two architectures: a sequence-to-sequence model for text, and a graph-to-tree model for source code, whose known semantics we can leverage both on the encoder as well as on the decoder side. Other classes of edits, for example, image manipulation, would most likely be better served by convolutional neural models. Sequence-to-Sequence Neural Editor First, we consider a standard sequence-to-sequence model with attention (over the tokens of x−). The architecture of our sequence-to-sequence model is similar to that of Luong et al. (2015), with the difference that we use a bidirectional LSTM in the encoder and a token-level copying mechanism (Vinyals et al., 2015) that directly copies tokens into the decoded sequence. Whereas in standard sequence-to-sequence models the decoder is initialized with the representation computed by the encoder, we initialize it with the concatenation of encoder output and the edit representation. We also feed the edit representation as input to the decoder LSTM at each decoding time step. This allows the LSTM decoder to take the edit representation into consideration while generating the output sequence. Graph-to-Tree Neural Editor Our second model aims to take advantage of the additional structure of x− and x+. To achieve this, we combine a graph-based encoder with a tree-based decoder. We use T(x) to denote a tree representation of an element, e.g., the abstract syntax tree (AST) of a fragment of source code. We extend T(x) into a graph form G(x) by encoding additional relationships (e.g., the “next token” relationship between terminal nodes, etc.) (see Figure 2(a)). To encode the elements of G(x−) into vector representations, we use a gated graph neural network (GGNN) (Li et al., 2015). Similarly to recurrent neural networks for sequences (such as biRNNs), GGNNs compute a representation for each node in the graph, which can be used in the attention mechanisms of a decoder. Additionally, we use them to obtain a representation of the full input x−, by computing their weighted average following the strategy of Gilmer et al. (2017) (i.e., computing a score for each node, normalizing scores with a softmax, and using the resulting values as weights). Our tree decoder follows the semantic parsing model of Yin & Neubig (2018), which sequentially generates a tree T(x+) as a series of expansion actions a_1 . . . a_N. The probability of taking an action is modeled as p(a_t | a_<t, s), where s is the input (a sequence of words in the original semantic parsing setting) and a_<t is the partial tree that has been generated so far. The model of Yin & Neubig (2018) mainly uses two types of actions: EXPANDR expands the current non-terminal using a grammar rule, and GENTERM generates a terminal token from a vocabulary or copies a token from s (EXPANDR corresponds to the APPLYCONSTR action in the original model of Yin & Neubig (2018); there is also a REDUCE action which marks the end of expanding a non-terminal with a non-deterministic number of child nodes; see Yin & Neubig (2018) for details). The dependence on the partial tree a_<t is modeled by an LSTM cell which is used to maintain state throughout the generation procedure. Additionally, the LSTM receives the decoder state used to pick the action at the parent node as an additional input (“parent-feeding”). This process is illustrated in Figure 2(b). We extend this model to our setting by replacing the input sequence s by x−; concretely, we condition the decoder on the graph-level representation computed for G(x−). Additionally, we use the change representation f∆(·) as an additional input to the LSTM initial state and at every decoding step.
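A minimal sketch of how a decoder can be conditioned on the edit representation in this way (initial state built from the encoder summary and f∆, and f∆ appended to the input at every step) is shown below, assuming PyTorch; the module and dimension choices are illustrative defaults, not the exact configuration:

    import torch
    import torch.nn as nn

    class ConditionedDecoderCell(nn.Module):
        def __init__(self, token_dim=128, edit_dim=512, hidden_dim=256):
            super().__init__()
            # Map [encoder summary; edit representation] to the initial hidden state
            self.init_h = nn.Linear(hidden_dim + edit_dim, hidden_dim)
            # The edit representation is concatenated to the input at every step
            self.cell = nn.LSTMCell(token_dim + edit_dim, hidden_dim)

        def forward(self, token_embs, enc_summary, f_delta):
            # token_embs: (batch, time, token_dim); enc_summary: (batch, hidden_dim)
            h = torch.tanh(self.init_h(torch.cat([enc_summary, f_delta], dim=-1)))
            c = torch.zeros_like(h)
            outputs = []
            for t in range(token_embs.size(1)):
                step_in = torch.cat([token_embs[:, t], f_delta], dim=-1)
                h, c = self.cell(step_in, (h, c))
                outputs.append(h)
            return torch.stack(outputs, dim=1)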
Based on the observation that edits to source code often manipulate the syntax tree by moving expressions around (e.g. by nesting statements in a conditional, or renaming a function while keeping its arguments), we extend the decoding model of Yin & Neubig (2018) by adding a facility to copy entire subtrees from the input. For this, we add a decoder action TREECP. This action is similar to the standard copying mechanism known from pointer networks (Vinyals et al., 2015), but instead of copying only a single token, it copies the whole subtree pointed to. However, adding the TREECP action means that there are many correct generation sequences for a target tree. This problem appears in token-copying as well, but can be easily circumvented by marginalizing over all correct choices at each generation step (by normalizing the probability distribution over allowed actions to sum up those that have the same effect). In the subtree-copying setting, the lengths of action sequences representing different choices may differ. In our implementation we handle this problem during training by simply picking the generation sequence that greedily selects TREECP actions. 3.2 EDIT REPRESENTATION To compute a useful edit representation, a model needs to focus on the differences between x− and x+. A risk in our framework is that f∆ degenerates into an encoder for x+, turning α into a decoder. To avoid this, we need to follow the standard autoencoder trick, i.e. it is important to limit the capacity of the result of f∆ by generating the edit representation in a low-dimensional space R^n. This acts as a bottleneck and encodes only the information that is needed to reconstruct x+ from x−. We again experimented with both sequence-based and graph-based representations of edits. Sequence Encoding of Edits Given x− (resp. x+) as a sequence of tokens t−^(0), . . . , t−^(T−) (resp. t+^(0), . . . , t+^(T+)), we can use a standard (deterministic) diffing algorithm to compute an alignment of tokens in the two sequences. We then use extra symbols ∅ for padding, + for additions, − for deletions, ↔ for replacements, and = for unchanged tokens to generate a single sequence representing both x− and x+. This is illustrated in Figure 3(a). By embedding the three entries in each element of the sequence separately and concatenating their representation, they can be fed into a standard sequence encoder whose final state is our desired edit representation. In this work, we use a biLSTM.
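A minimal sketch of such a tagged alignment follows, using Python's difflib as one possible deterministic diffing algorithm (the paper does not specify the exact tool; names and the padding symbol are illustrative):

    import difflib

    PAD = "<empty>"

    def tagged_edit_sequence(tokens_minus, tokens_plus):
        # Produce triples (tag, old_token, new_token) with tags from {=, -, +, <->}
        sm = difflib.SequenceMatcher(a=tokens_minus, b=tokens_plus)
        out = []
        for op, i1, i2, j1, j2 in sm.get_opcodes():
            if op == "equal":
                out += [("=", t, t) for t in tokens_minus[i1:i2]]
            elif op == "delete":
                out += [("-", t, PAD) for t in tokens_minus[i1:i2]]
            elif op == "insert":
                out += [("+", PAD, t) for t in tokens_plus[j1:j2]]
            else:  # "replace"
                old, new = tokens_minus[i1:i2], tokens_plus[j1:j2]
                for k in range(max(len(old), len(new))):
                    out.append(("<->",
                                old[k] if k < len(old) else PAD,
                                new[k] if k < len(new) else PAD))
        return out

The three entries of each triple can then be embedded separately and concatenated before being fed to the biLSTM encoder described above.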
Graph Encoding of Edits As in the graph-to-tree neural editor, we represent x− and x+ as trees T(x−) and T(x+). We combine these trees into a graph representation G(x− → x+) by merging both trees into one graph, using “Removed”, “Added” and “Replaced” edges. To connect the two trees, we compute the same alignment as in the sequence case, connecting leaves that are the same and each replaced leaf to its replacement. We also propagate this information up in the trees, i.e., two inner nodes are connected by “=” edges if all their descendants are connected by “=” edges. This is illustrated in Figure 3(b). Finally, we also use the same “+” / “-” / “↔” / “=” tags for the initial node representation, computing it as the concatenation of the string label (i.e. token or nonterminal name) and the embedding of the tag. To obtain an edit representation, we use a GGNN unrolled for a fixed number of timesteps and again use the weighted averaging strategy of Gilmer et al. (2017). 4 EVALUATION Evaluating an unsupervised representation learning method is challenging, especially for a newly defined task. Here, we aim to evaluate the quality of the learned edit representations with a series of qualitative and quantitative metrics on natural language and source code. 4.1 DATASETS AND CONFIGURATION Natural Language Edits We use the WikiAtomicEdits (Faruqui et al., 2018) dataset of pairs of short edits on Wikipedia articles. We sampled 1040K edits from the English insertion portion of the dataset and split the samples into 1000K/20K/20K train-valid-test sets. Source Code Edits To obtain a dataset for source code, we cloned a set of 54 C# projects on GitHub and collected a GitHubEdits dataset (see Appendix A for more information). We selected all changes in the projects that are no more than 3 lines long and whose surrounding 3 lines of code before and after the edited lines have not been changed, ensuring that the edits are separate and short. We then parsed the two versions of the source code and take as x− and x+ the code that belongs to the top-most AST node that contains the edited lines. Finally, we remove trivial changes such as variable renaming, changes within comments, or formatting changes. Overall, this yields 111,724 edit samples. For each edit we run a simple C# analysis to detect all variables and normalize variable names such that each unique variable within x− and x+ has a unique normalized name V0, V1, etc. This step is necessary to avoid the sparsity of data induced by the variety of different identifier naming schemes. We split the dataset into 91,372 / 10,176 / 10,176 samples as train/valid/test sets. Additionally, we introduce a labeled dataset of source code edits by using C# “fixers”. Fixers are small tools built on top of the C# compiler, used to perform common refactoring and modernization tasks (e.g., using new syntactic sugar). We selected 16 of these fixers and ran them on 6 C# projects to generate a small C#Fixers dataset of 2,878 edit pairs with known semantics. We present descriptions and examples of each fixer in Appendix A. Configuration Throughout the evaluation we use a fixed size of 512 for edit representations. The size of word embeddings and hidden states of encoding LSTMs is 128. The dimensionality of the decoding LSTM is set to 256. Details of model configuration can be found in Appendix A. When generating the target x+, our neural editor model can optionally take as input the context of the original input x− (e.g., the preceding and succeeding code segments surrounding x−), whose information could be useful for predicting x+. For example, in source code edits the updated code snippet x+ may reuse variables defined in the preceding snippet. In our code experiments, we use a standard bidirectional LSTM network to encode the tokenized 3 lines of code before and after x− as context. The encoded context is used to initialize the decoder, and as an additional source for the pointer network to copy tokens from. 4.2 QUALITY OF EDIT REPRESENTATIONS First, we study the ability of our models to encode edits in a semantically meaningful way. Visualizing Edits on Fixers Data In a first experiment, we train our sequential neural editor model on our GitHubEdits data and then compute representations for the edits generated by the C# fixers.
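A minimal sketch of how such a visualization can be produced from the encoded edits (assuming scikit-learn and matplotlib; edit_vectors, an (N, 512) array of edit representations, and fixer_ids, the corresponding fixer labels, are placeholders):

    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    def plot_edit_space(edit_vectors, fixer_ids):
        # Project the 512-dimensional edit representations to 2D
        coords = TSNE(n_components=2).fit_transform(edit_vectors)
        for fixer in sorted(set(fixer_ids)):
            idx = [i for i, f in enumerate(fixer_ids) if f == fixer]
            plt.scatter(coords[idx, 0], coords[idx, 1], s=8, label=fixer)
        plt.legend(fontsize=6)
        plt.show()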
A t-SNE visualization (Maaten & Hinton, 2008) of the encodings is shown in Figure 4. For this visualization, we randomly selected 100 examples from the edits of each fixer (if that fixer has more than 100 samples) and discarded fixer categories with less than 40 examples. Readers are referred to Appendix A for detailed descriptions of each fixer category. We find that our model produces dense clusters for simple or distinctive code edits, e.g. fixer RCS1089 (using the ++ or -- unary operators instead of a binary operator (e.g., i = i + 1 → i++), and fixer CA2007 (adding .ConfigureAwait(false) for await statements). We also analyzed cases where (1) the edit examples from the same fixer are scattered, or (2) the clusters of different fixers overlap with each other. For example, the fixer RCS1077 covers 12 different aspects of optimizing LINQ method calls (e.g., type casting, counting, etc.), and hence its edits are scattered. On the other hand, fixers RCS1146 and RCS1206 yield overlapping clusters, as both fixers change code to use the ?. operator. Fixers RCS1207 (change a lambda to a method group, e.g. foo(x=>bar(x)) → foo(bar)) and RCS1021 (simplify lambda expressions, e.g. foo(x=>{return 4;}) → foo(x=>4)) are similar, as both inline lambda expressions in two different ways. Analysis yields that the representation is highly dependent on surface tokens. For instance, IDE004 (removing redundant type casts, e.g. (int)2 → 2) and RCS1207 (removing explicit argument lists) yield overlapping clusters, as both involve deleting identifiers wrapped by parentheses. Human Evaluation on Encoding Natural Language WikiAtomicEdits In a second experiment, we test how well neighborhoods in edit representation space correspond to semantic similarity. We computed the five nearest neighbors of 200 randomly sampled seed edits from our training set, using both our trained sequence-to-sequence editing model with sequential edit encoder, as well as a simple bag-of-words baseline based on TF-IDF scores. We then rated the quality of the retrieved neighbors on a scale of 0 (“unrelated edit”), 1 (“similar edit”) and 2 (“semantically or syntactically same edit”). Details of the annotation schema is included in Sect. E. We show the (normalized) discounted cumulative gain (DCG, Manning et al. (2008)) for the two models at the top of Tab. 1 (higher is better). The relevance scores indicate that our neural model clearly outperforms the simplistic baseline. Tab. 1 also presents two example edits with their nearest neighbors. Example 1 shows that the neural edit models succeeded in representing syntactically and semantically similar edits, while the bag-of-words baseline relies purely on surface token overlap. Interestingly, we also observed that the edit representations learned by the neural editing model on WikiAtomicEdits are somewhat sensitive to position, i.e. the position of the inserted tokens in both the seed edit and the nearest neighbors is similar. This is illustrated in Example 2, where the second (“senegalese striker”) and the third (“republican incumbent”) nearest neighbors returned by the neural model have similar editing positions as the seed edit, while they are semantically diverse. 4.3 EDIT ENCODER PERFORMANCE To evaluate the performance of our two edit encoders discussed in Sect. 3.2 and disentangle it from the choice of neural editor, we train various combinations of our neural editor model and manually evaluate the quality of the edit representation. 
More specifically, we trained our neural editor models on GitHubEdits, randomly sampled 200 seed edits, and computed their 3 nearest neighbors using each end-to-end model. We then rated the resulting groups using the same 0-2 scale as above. The resulting relevance scores are shown in Tab. 2.
Table 1: Natural language human evaluation results and 3 nearest neighbors ([inserted text] marked). Example 1: the neural editing model returns syntactically and semantically similar edits. Example 2: neural edit representations are sensitive to position.
| Bag of Words Model | Seq2Seq – Seq Edit Encoder
DCG/NDCG@5 | 9.3 / 67.3% | 13.5 / 90.3%
DCG@5 (by edit size) | 1: 14.7  2-3: 10.8  >3: 5.4 | 1: 16.2  2-3: 12.9  >3: 12.4
Example 1 (seed edit): [daniel james nava ( born february 22 , 1983 ) is an american professional baseball outfielder] nava is only the fourth player in mlb history to hit a grand slam in his first major league at bat and the second to do it on the first pitch .
NN-1 | he batted .302 with 73 steals , and received a september call - up to the major leagues [as an outfielder] . | [arthur ray briles ( born december 3 , 1955 ) is a former american football coach and] his most recent head coaching position was at baylor university , a position he held from the 2008 season through the 2015 season .
NN-2 | he played [as an outfielder] for the hanshin tigers . | [jonathan david disalvatore ( born march 30 , 1981 ) is a professional ice hockey] he was selected by the san jose sharks in the 4th round ( 104th overall ) of the 2000 nhl entry draft .
NN-3 | in 2012 , his senior at oak mountain , dahl had a .412 batting average , 34 runs batted in ( rbis ) , and 18 stolen bases [as an outfielder .] | [professor paul talalay ( born march 31 , 1923 ) is the john jacob abel] distinguished service professor of pharmacology and director of the laboratory for molecular sciences at johns hopkins school of medicine in baltimore .
Example 2 (seed edit): she , along with her follow artist carolyn mase studied with [impressionist landscape painter] john henry twachtman at the art students league of new york .
NN-1 | his brother was draughtsman william daniell and his uncle was [landscape painter] thomas daniell . | the first painting was a portrait of a young girl , emerantia van beresteyn , the sister of [the landscape painter] nicolaes van beresteyn , the later founder of half of this hofje .
NN-2 | william james linton ( december 7 , 1812 - december 29 , 1897 ) was an english - born american wood engraver , [landscape painter ,] political reformer and author of memoirs , novels , poetry and non-fiction . | he was the club 's top scorer with 22 goals in all competitions , one more than [senegalese striker] lamine diarra , who left the club at the end of the season .
NN-3 | early on , hopper modeled his style after chase and french [impressionist] masters douard manet and edgar degas . | caforio " aggressively attacked " his opponent , [republican incumbent] steve knight , for his delayed response to the leak .
Comparing the sequential edit encoders trained with the Seq2Seq and Graph2Tree editors, we found that the edit encoder trained with the Graph2Tree objective performs better. We hypothesize that this is because the Graph2Tree editor better captures structural-level information about an edit. For instance, Example 1 in Tab. 3 removes explicit type casting.
The Seq2Seq editor has difficulty distinguishing this type of edit, confusing it with changes of lambda expressions to method groups (1st and 2nd nearest neighbors), since both types of edits involve removing paired parentheses. Surprisingly, we found that the graph-based edit encoder does not outperform the sequence-based encoder. However, we observe that the graph edit encoder in many cases tends to better capture high-level and abstract structural edit patterns. Example 2 in Tab. 3 showcases a seed edit that swaps two consecutive declarations, which corresponds to swapping the intermediate Expression nodes representing each statement on the underlying AST. In this case, the graph edit encoder is capable of grouping semantically similar edits, while it seems to be more difficult for the sequential edit encoder to capture the edit pattern. On the other hand, we found that the graph edit encoder often fails to capture simpler, lexical-level edits (e.g., Example 1). This might suggest that terminal node information is not effectively propagated, an interesting issue worth future investigation. 4.4 PRECISION OF NEURAL EDITORS Finally, we evaluate the performance of our end-to-end system by predicting the edited input x+ given x− and the edit representation. We are interested in answering two research questions: First, how well can our neural editors generate x+ given the gold-standard edit representation f∆(x−, x+)? Second, and perhaps more interestingly, can we use the representation of a similar edit f∆(x′−, x′+) to generate x+ by applying that edit to x− (i.e. x̂+ = α(x−, f∆(x′−, x′+)))? To answer the first question, we trained our neural editor models on the WikiAtomicEdits and GitHubEdits datasets, and evaluate the performance of encoding and applying edits on the test sets. For completeness, we also evaluated the performance of our neural editor models with a simple “Bag-of-Edits” edit encoding scheme, where f∆(x−, x+) is modeled as the concatenation of two vectors, each representing the sum of the embeddings of added and deleted tokens in the edit, respectively. This edit encoding method is reminiscent of the model used in Guu et al. (2017) for solving a different task of language modeling by marginalizing over latent edits, which we will elaborate on in Sect. 5. Tab. 4 lists the evaluation results. With our proposed sequence- and graph-based edit encoders, our neural editor models achieve reasonable end-to-end performance, surpassing systems using bag-of-edits representations. This is because many edits are context-sensitive and position-sensitive, requiring edit representation models that go beyond the bag-of-edits scheme to capture those effects (more analysis is included in Appendix B). Interestingly, on the GitHubEdits dataset, we find that the Seq2Seq editor with sequential edit encoder registers the best performance. However, it should be noted that in this set of experiments, we encode the gold-standard edit f∆(x−, x+) to predict x+. As we will show later, better performance with the gold-standard edit does not necessarily imply better (more generalizable) edit representations. Nevertheless, we hypothesize that the higher accuracy of the Seq2Seq editor is due to the fact that a significant proportion of edits in this dataset is small and primarily syntactically simple. Indeed, we find that 69% of test examples have a token-level edit distance of less than 5.
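For illustration, a minimal sketch of the two prediction modes posed above — decoding with the gold-standard edit representation and decoding with the representation of a similar seed edit (one-shot transfer) — follows; editor.decode and edit_encoder are illustrative interfaces, not the exact API:

    def predict_with_gold_edit(editor, edit_encoder, x_minus, x_plus):
        # Encode the gold edit x- -> x+ and use it to regenerate x+
        f_delta = edit_encoder(x_minus, x_plus)
        return editor.decode(x_minus, f_delta, beam_size=5)

    def predict_by_transfer(editor, edit_encoder, x_minus, seed_minus, seed_plus):
        # Apply the representation of a *similar* seed edit to a new input x-
        f_delta = edit_encoder(seed_minus, seed_plus)
        return editor.decode(x_minus, f_delta, beam_size=5)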
To answer the second question, we use the trained neural editors from the previous experiment, and test their performance in a “one-shot” transfer learning scenario. Specifically, we use our high-quality C#Fixers dataset, and for each fixer category F of semantically similar edits, we randomly select a seed edit {x′− → x′+} ∈ F, and use its edit representation f∆(x′−, x′+) to predict the updated code for all examples in F, i.e., we have x̂+ = α(x−, f∆(x′−, x′+)) for all {x− → x+} ∈ F. This task is highly non-trivial, since a fixer category could contain hundreds of edit examples collected from different C# projects. Therefore, it requires the edit representations to generalize and transfer well, while being invariant to local lexical information like specific method names. To make the experimental evaluation more robust to noise, for each fixer category F, we randomly sample 10 seed edit pairs {x′− → x′+}, compute their edit representations, and use them to predict the edited version of the examples in F, evaluating the accuracy of predicting the exact final version. We then report the best score among the 10 seed representations as the performance metric on F. Tab. 5 summarizes the results and also reports the upper bound performance when using the gold-standard edit representation f∆(x−, x+) to predict x+, and an approximation of the “lower bound” accuracies using pre-trained Seq2Seq and Graph2Tree models without edit encoders. We found that our neural Graph2Tree editor with the sequential edit encoder significantly outperforms the Seq2Seq editor, even though Seq2Seq performs better when using gold-standard edit representations. This suggests that the edit representations learned with the Graph2Tree model generalize better, especially for edits discussed in Sect. 4.2 that involve syntactic variations like RCS1021 (lambda expression simplification, 7.8% vs. 30.7% for Seq2Seq and Graph2Tree), and RCS1207 (change lambdas to method groups, 7.1% vs. 26.2%). Interestingly, we also observe that Seq2Seq outperforms the Graph2Tree model for edits with trivial surface edit sequences, where the Graph2Tree model does not have a clear advantage. For example, on RCS1015 (use nameof operator, e.g. Exception("x") → Exception(nameof(x))), the accuracies for Seq2Seq and Graph2Tree are 40.0% (14/35) and 28.6% (10/35), respectively. We include more analysis of the results in Appendix C. 5 RELATED WORK Edits have recently been considered in NLP, as they represent interesting linguistic phenomena in language modeling and discourse (Faruqui et al., 2018; Yang et al., 2017a). Specifically, Guu et al. (2017) present a generative model of natural language sentences via editing prototypes. Our work is similar to Guu et al. (2017) in that (1) the posterior edit encoding model in Guu et al. (2017) is similar to our baseline “bag-of-edits” encoder in Sec. 4.4, and (2) the sequence-to-sequence sentence generation model given the prototype and edit representation is reminiscent of our Seq2Seq editor. In contrast, our work directly focuses on discriminative learning of representing edits and applying the learned edits for both sequential (NL) and structured (code) data. Another similar line of research is “retrieve-and-edit” models for text generation (Hashimoto et al., 2018), where given an input x, the target prediction y is generated by editing a similar target y′ that is retrieved based on the similarity between its source x′ and the input x.
While these models typically require an “editor” component to generate the output by exploiting the difference between similar inputs, they usually use the simpler bag-of-edits representations (Wu et al., 2019), or capture it implicitly via end-to-end neural networks (Contractor et al., 2018). To the best of our knowledge, there is no related work that classifies or otherwise explicitly represents the differences between similar inputs, with the exception of differential recurrent neural networks used for action recognition in videos (Veeriah et al., 2015; Zhuang et al., 2018). This is a substantially different task, as the data includes a temporal component as well. Source code edits are a widely studied artifact. Specialized software, such as git, is widely used to store source code revision histories. Nguyen et al. (2013) studied the repetitiveness of source code changes by identifying identical types of changes using a deterministic differencing tool. In contrast, we employ a neural network to cluster similar changes together. Rolim et al. (2017) use such clusters to synthesize small programs that perform the edit. The approach of Rolim et al. (2018) extracts manually designed syntactic features from code and clusters over multiple changes to find repeatable edit rules. Similarly, Paletov et al. (2018) extract syntactic features specifically targeting edits in cryptography API protocols. In this work, we try to avoid hand-designed features and allow a neural network to learn the relevant aspects of a change by directly giving as input the original and final versions of a changed code snippet. Modeling tree generation with machine learning is an old problem that has been widely studied in NLP. Starting with Maddison & Tarlow (2014), code generation has also been considered as a tree generation problem. Close to our work is the decoder of Yin & Neubig (2017), which we use as the basis of our decoder. The work of Chen et al. (2018) is also related, since it provides a tree-to-tree model, but focuses on learning a single translation task and cannot be used directly to represent multiple types of edits. Both Yin & Neubig (2017) and Chen et al. (2018) have a copying mechanism for single tokens, but our subtree copying mechanism is novel. Autoencoders (see Goodfellow et al. (2016) for an overview) have a long history in machine learning. Variational autoencoders (Kingma & Welling, 2013) are similar to autoencoders but instead of focusing on the learned representation, they aim to create accurate generative probabilistic models. Most (variational) autoencoders focus on encoding images but there have been works that autoencode sequences, such as text (Dai & Le, 2015; Bowman et al., 2015; Yang et al., 2017b) and graphs (Simonovsky & Komodakis, 2018; Liu et al., 2018). Conditional variational autoencoders (Sohn et al., 2015) have a related form to our model (with the exception of the KL term), but are studied as generative models, whereas we are primarily interested in the edit representation. 6 DISCUSSION & CONCLUSIONS In this work, we presented the problem of learning distributed representations of edits. We believe that the dataset of edits is highly relevant and should be studied in more detail. While we have presented a set of initial models and metrics on the problem and obtained some first promising results, further development in both of these areas is needed. We hope that our work inspires others to work on this interesting problem in the future.
ACKNOWLEDGMENTS We would like to thank Rachel Free for her insightful comments and suggestions. A DATASETS AND CONFIGURATION WikiAtomicEdits We randomly sampled 1040K insertion examples from the English portion of the WikiAtomicEdits (Faruqui et al., 2018) dataset, with train, development and test splits of 1000K, 20K and 20K. GitHubEdits We cloned the top 54 C# GitHub repositories based on their popularity (Tab. 8). For each commit in the master branch, we collect the previous and updated versions of the source code, and extract all consecutive edits that are smaller than three lines, and with at least three preceding and successive lines that have not been changed. We then filter trivial changes such as variable and identifier renaming, and changes within comments. We also limit the number of tokens for each edit to be smaller than 100, and down-sample edits whose frequency is larger than 30. Finally, we split the dataset by commit ids, ensuring that there are no edits in the training and testing (development) sets coming from the same commit. Tab. 6 lists some statistics of the dataset. C#Fixers We selected 16 C# fixers from Roslyn (http://roslyn.io) and Roslynator (https://github.com/JosefPihrt/Roslynator), and ran them on 6 C# projects to generate a small, high-quality C# fixers dataset of 2,878 edit pairs with known semantics. Table 7 lists detailed descriptions for each fixer category; more information can be found at https://github.com/JosefPihrt/Roslynator/blob/master/src/Analyzers/README.md. Network Configuration Throughout the experiments, we use a fixed edit representation size of 512. The dimensionality of the word embeddings, the hidden states of the encoder LSTMs, as well as the gated graph neural network is 128, while the decoder LSTM uses a larger hidden size of 256. For the graph-based edit encoder, we used a two-layer graph neural network, with 5 information propagation steps at each layer. During training, we performed early stopping, and chose the best model based on perplexity scores on the development set. During testing, we decode a target element x+ using a beam size of 5. B CLUSTERING EXPERIMENTS To qualitatively evaluate the quality of the learned edit representations, we use the models trained on the WikiAtomicEdits and GitHubEdits datasets to cluster natural language and code edits. We run the K-Means clustering algorithm on 0.5 million sampled edits from WikiAtomicEdits and all 90K code edits from GitHubEdits, producing 50,000 and 20,000 clusters, respectively. Tab. 9 and Tab. 10 list some example clusters on the WikiAtomicEdits and GitHubEdits datasets, respectively. Due to the size of the clusters, we omit outliers and present distinctive examples from each cluster. On the WikiAtomicEdits dataset, we found clusters whose examples are semantically and syntactically similar. More interestingly, on the source code data, we find representative clusters that relate to idiomatic patterns and best practices of programming. The clustering results produced by our model would be useful for program synthesis toolkits to generate interpretable code refactoring rules, which we leave as interesting future work. Finally, we remark that the clustering results indicate that the encoding of edits is context-sensitive and position-sensitive for both natural language and source code data. For instance, the WikiAtomicEdits examples we present in Tab. 9 clearly indicate that semantically similar insertions also share similar editing positions.
This is even more visible in code edits (Tab. 10). For instance, in the first example in Tab. 10, Equal() can be changed to Empty() only in the Assert namespace (i.e., the context). These examples demonstrate that it is important for an edit encoder to capture the contextual and positional information in edits, a property that cannot be captured by simple “bag-of-edits” edit representation methods. C BREAK-DOWN ANALYSIS OF TRANSFER LEARNING RESULTS D IMPACT OF TRAINING SET SIZE To evaluate the data efficiency of our proposed approach, we tested the end-to-end performance of our neural editor model (Sect. 4.4, Tab. 4) with varying amount of training data. Tab. 12 lists the results. We found both Graph2Tree and Seq2Seq editors are relatively data efficient. They registered around 90% of the accuracies achieved using the full training set with only 60% of the training data. E DETAILS OF HUMAN EVALUATION As discussed in Sect. 4.2, we performed human evaluation to rate the qualities of neighboring edits given a seed edit. The annotation instructions on GithubEdits and WikiAtomicEdits datasets are listed below. The annotation was carried out by three authors of this paper, and we anonymized the source of systems that generated the output. The three-way Fleiss’ kappa inter-rater agreement is κ = 0.55, which shows moderate agreement (Artstein & Poesio, 2008), an agreement level that is also used in other annotation tasks in NLP (Faruqui & Das, 2018). Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Examples • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.GetMembers().Where(V2 => V2.Kind == SymbolKind.Property).Single(); x+ var V0 = V1.GetMembers().Single(V2 => V2.Kind == SymbolKind.Property); • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− string V0 = V1 == null ? string.Empty : VAR1.ToString(); x+ string V0 = V1?.ToString() ?? string.Empty; • Seed Edit x− Assert.True(Directory.Exists(V0) == V1); x+ Assert.Equal(Directory.Exists(V0), V1); • Neighbor x− Assert.True(V0.GetString(V0.GetBytes(LITERAL)) == V1.ContainingAssembly.Identity.CultureName); x+ Assert.Equal(V0.GetString(VAR0.GetBytes(LITERAL)), V1.ContainingAssembly.Identity.CultureName); Rating 1 Syntactically or Semantically Related The seed and neighboring edits share functionally or syntactically similar patterns. Examples The following edit is a related edit of the first example above, as it applies the same simplification (.Where(COND).Func() to .Func(COND)), but for FirstOrDefault instead of Single: • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.Where(V2 => V3.ReportsTo == V2.EmployeeID).FirstOrDefault(); x+ var V0 = V1.FirstOrDefault(V2 => V3.ReportsTo == V2.EmployeeID); The following edit is a related edit of the second example above, as it also replaces a ternary expression for null checking with the ?. and ?? operators: • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− var V0 = V1 != null ? V1.ToList() : new List<TextSpan>(); x+ var V0 = V1?.ToList() ?? 
new List<TextSpan>(); We also considered pairs such as the following related, since they share similar syntactic structure • Seed Edit x− V0.State = V1; x+ V0.SetState(VAR1); • Neighbor x− V0.Quantity = V1; x+ V0.SetQuantity(V1); Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria. Table 14: Annotation Instruction for WikiAtomEdits Data Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Seed Edit Neighbor chaz guest ( born I1961J) was born in niagra falls , . . . , a decorated hero in wwii in europe , including the purple heart . randal l. schwartz ( born november 22 , I1961J) , also known as merlyn , is an american author , system administrator and programming consultant. he was elected to donegal county council for sinn fin in 1979 , and held his seat until his death Iat age 56J . davis graduated from high school in january 1947 , immediately enrolling at wittenberg college in rural ohio Iat age 17J . IdrorJ feiler served as a paratrooper in the israel defense forces . InagaurJ fort - sandy fort ; centrally located ; 2nd century old ; witnessed many battles ; lofty walls & spacious campus ; having many palaces & temples inside . the original old bay house , home of the chief factor , still exists Iand is now part of the fort vermilion national historic siteJ . the population was 6,400 at the 2010 census Iand is part of the st. louis metropolitan areaJ . Rating 1 Syntactically Related The changed constituents in the seed and the neighboring edit are applied to the similar positions of the original sentence, and they play similar syntactic roles. This includes examples like adding a disfunction, adding a complement, prepositional clause or other syntactic constructs with similar phrases or language structures. For example, Seed Edit Neighbor the douro fully enters portuguese territory just after the confluence with the gueda river ; once the douro enters portugal , major population centres are less frequent Ialong the riverJ . she made a brief return to the screen in ” parrish ” ( 1961 ) , playing the supporting role of mother which received little attention Iby the pressJ . when they found it , they discovered a group of pagumon living there instead who immediately proceeded to treat the digidestined as honored guests I, saying that pagumon are the fresh form of koromonJ . in 2012 slote and his baseball book ” jake ” were the subject of an espn ( 30 for 30 ) short documentary in which slote describes his writing process and reads from the book I, saying it is his best writingJ . the aircraft was intended to be Icertified andJ supplied as a complete ready - to - fly - aircraft for the flight training and aerial work markets . in june reinforcements finally did arrive when Iprovincial andJ militia units from new york , new jersey , and new hampshire were sent up from fort edward by general daniel webb . Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria.
1. What are the strengths and weaknesses of the proposed approach regarding its novelty, importance, and ability to capture the structure of edits? 2. How effective is the created source code edit dataset in supporting the study of the new task, and what are the limitations of the current implementation? 3. To what extent do the models succeed in capturing the semantics of edits, and how might the end-to-end system be improved to address areas of weak performance? 4. Are there any concerns regarding the input scheme of the neural editor, and how might it be modified to better support automated editing tasks? 5. What additional steps could be taken to evaluate the effectiveness of the edit encoders and ensure that they are capturing meaningful patterns in the data?
Review
Review The authors state nicely and clearly the main contributions they see in their work (Intro, last paragraph). Specifically the state the paper: 1) present a new and important machine learning task, 2) present a family of models that capture the structure of edits and compute efficient representations, 3) create a new source code edit dataset, 4) perform a set of experiments on the learned edit representations and present promising empirical evidence that the models succeed in capturing the semantics of edits. We decided to organize this review by commenting on the above-stated contributions one at a time: “A new and important machine learning task” Regarding “new task”: PRO: We are unfamiliar with past work which presents this precise task; the task is new. Section 5 makes a good case for the novelty of this work. CON: None. Regarding “important task”: PRO: The authors motivate the task with tantalizing prospective applications-- automatically editing text and code, e.g. for grammar, clarity, and style. Conceptualizing edits as NLP objects of interest that can be concretely represented, clustered, and used for prediction is an advance. CON: Many text editors, office suites, and coding IDEs already include features which automatically suggest or apply edits for grammar, clarity, and style. The authors do not describe shortcomings in existing tools that might be better addressed using distributed representations of edits. Consequently, the significance of the proposed contribution is unclear. “A family of models that capture the structure of edits and compute efficient representations” Regarding “a family of models”: PRO: The family of models presented by the authors clearly generalizes: such models may be utilized for computational experiments on datasets and edit types beyond those specifically utilized in this evaluation. The authors apply well-utilized neural network architectures that may be trained and applied to large datasets. The architecture of the neural editor permits evaluation of the degree to which the editor successfully predicts the correct edit given a pre-edit input and a known representation of a similar edit. CON: The authors do not propose any scheme under which edit representations might be utilized for automatically editing text or code when an edit very similar to the desired edit is not already known and its representation available as input. Hence, we find the authors do not sufficiently motivate the input scheme of their neural editor. The input scheme of the neural editor makes trivial the case in which no edit is needed, as the editor would learn during training that the output x+ should be the same as the input x- when the representation of the “zero edit” is given as input. While the authors discuss the importance of “bottlenecking” the edit encoder so that it does not simply learn to encode the desired output x+, they do not concretely demonstrate that the edit encoder has done otherwise in the final experiments. Related to that: If the authors aimed to actually solve automated edits in text/code then it seems crucial their data contained "negative examples" i.e. segments which require no edits. In such an evaluation one would test also when the algorithm introduces unnecessary/erroneous edits. Regarding “capture structure of edits”: PRO: The authors present evidence that edit encoders tightly cluster relatively simple edits which involve adding or removing common tokens. 
The authors present evidence that relatively simple edits completed automatically by a “fixer” often cluster together, i.e. a known signal is retained in clustering. The authors present evidence that the nearest neighbors of edits in an edit-representation space often are semantically or structurally similar, as judged by human annotators. Section 4.3 includes interesting observations comparing edit patterns better captured by the graph or seq edit encoders. CON: The details of the human annotation tasks which generated the numerical results in Tables 1 and 2 are unclear: were unbiased third parties utilized? Were the edits stripped of their source-encoder label when evaluated? Objectively, what separates an “unrelated” from a “similar” edit, and what separates a “similar” from a “same” edit? Did multiple human annotators undertake this task in parallel, and what was their overall concordance (e.g. “intercoder reliability”)? Without concrete answers to these questions, the validity and significance of the DCG/NDCG results reported in Tables 1 and 2 are unclear. It is not clear from the two examples given in Table 1 that the three nearest neighbors embedded by the Seq encoder are “better”, i.e. overall more semantically and/or syntactically similar to the example edit, than those embedded by the Bag of Words model. It is unclear which specific aspects of “edit structure” are better captured by the Seq encoder than the Bag of Words model. The overall structure of Tables 1 and 2 is awkward, with concrete numerical results dominated by a spatially large section containing a small number of examples. “create a new source code edit dataset” PRO: The authors create a new source code edit dataset, an important contribution to the study of this new task. CON: Minor: is the provided dataset large enough to do more than simple experiments? See note below on sample size. “present promising empirical evidence that the models succeed in capturing the semantics of edits” PRO: The experiment results show how frequently the end-to-end system successfully predicted the correct edit given a pre-edit input and a known representation of a similar edit. Gold standard accuracies of more than 70%, and averaged transfer learning accuracies of more than 30%, suggest that this system shows promise for capturing the semantics of edits. CON: Due to concerns expressed above about the model design and evaluation of the edit representations, it remains unclear to what degree the models succeed in capturing the semantics of edits. Table 11 shows dramatic variation in success levels across fixer ID in the transfer learning task, yet the authors do not propose ways their end-to-end system might be adjusted to address areas of weak performance. The authors do not discuss the impact of training set size on their evaluation metrics. The authors do not discuss the degree to which their model training task would scale to larger language datasets such as those needed for the motivating applications. ############## Based on the authors' response, revisions, and disucssions we have updated the review and the score.
ICLR
Title Learning to Represent Edits Abstract We introduce the problem of learning distributed representations of edits. By combining a “neural editor” with an “edit encoder”, our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem. 1 INTRODUCTION One great advantage of electronic storage of documents is the ease with which we can edit them, and edits are performed in a wide variety of contents. For example, right before a conference deadline, papers worldwide are finalized and polished, often involving common fixes for grammar, clarity and style. Would it be possible to automatically extract rules from these common edits? Similarly, program source code is constantly changed to implement new features, follow best practices and fix bugs. With the widespread deployment of (implicit) version control systems, these edits are quickly archived, creating a major data stream that we can learn from. In this work, we study the problem of learning distributed representations of edits. We only look at small edits with simple semantics that are more likely to appear often and do not consider larger edits; i.e., we consider “add definite articles” rather than “rewrite act 2, scene 3.” Concretely, we focus on two questions: i) Can we group semantically equivalent edits together, so that we can automatically recognize common edit patterns? ii) Can we automatically transfer edits from one context to another? A solution to the first question would yield a practical tool for copy editors and programmers alike, automatically identifying the most common changes. By leveraging tools from program synthesis, such groups of edits could be turned into interpretable rules and scripts (Rolim et al., 2017). When there is no simple hard rule explaining how to apply an edit, an answer to the second question would be of great use, e.g., to automatically rewrite natural language following some stylistic rule. We propose to handle edit data in an autoencoder-style framework, in which an “edit encoder” f∆ is trained to compute a representation of an edit x− → x+, and a “neural editor” α is trained to construct x+ from the edit representation and x−. This framework ensures that the edit representation is semantically meaningful, and a sufficiently strong neural editor allows this representation to not be specific to the changed element. We experiment with various neural architectures that can learn to represent and apply edits and hope to direct the attention of the research community to this new and interesting data source, leading to better datasets and stronger models. Briefly, the contributions of our paper are: (a) in Sect. 2, we present a new and important machine learning task on learning representations of edits (b) we present a family of ∗Work done as an intern in Microsoft Research, Cambridge, UK. models that capture the structure of edits and compute efficient representations in Sect. 3 (c) we create a new source code edit dataset, and release the data extraction code at https://github.com/Microsoft/msrc-dpu-learning-to-represent-edits and the data at http://www.cs.cmu.edu/˜pengchey/githubedits.zip. 
(d) we perform a set of experiments on the learned edit representations in Sect. 4 for natural language text and source code and present promising empirical evidence that our models succeed in capturing the semantics of edits.

2 TASK
In this work, we are interested in learning to represent and apply edits on discrete sequential or structured data, such as text or source code parse trees [1]. Figure 1 gives a graphical overview of the task, described precisely below.

Edit Representation Given a dataset of edits {x−^(i) → x+^(i)}, i = 1..N, where x−^(i) is the original version of some object and x+^(i) its edited form (see upper half of Figure 1 for an example), our goal is to learn a representation function f∆ that maps an edit operation x− → x+ to a real-valued edit representation f∆(x−, x+) ∈ R^n. A desired quality of f∆ is for the computed edit representations to have the property that semantically similar edits have nearby representations in R^n. Having distributed representations also allows other interesting downstream tasks, e.g., unsupervised clustering and visualization of similar edits from large-scale data (e.g. the GitHub commit stream), which would be useful for developing human-assistance toolkits for discovering and extracting emerging edit patterns (e.g. new bug fixes or emerging “best practices” of coding).

Neural Editor Given an edit representation function f∆, we want to learn to apply edits in a new context. This can be achieved by learning a neural editor α that accepts an edit representation f∆(x−, x+) and a new input x′− and generates x′+ [2]. This is illustrated in the lower half of Figure 1.

3 MODEL
We cast the edit representation problem as an autoencoding task, where we aim to minimize the reconstruction error of α for the edited version x+ given the edit representation f∆(x−, x+) and the original version x−. By limiting the capacity of f∆’s output and allowing the model to freely use information about x−, we are introducing a “bottleneck” that forces the overall framework to not simply treat f∆(x−, x+) as an encoder of x+. The main difference from traditional autoencoders is that in our setup, an optimal solution requires re-using as much information as possible from x− to make the most of the capacity of f∆. Formally, given a probabilistic editor function Pα such as a neural network and a dataset {x−^(i) → x+^(i)}, i = 1..N, we seek to minimize the negative likelihood loss
L = −(1/N) Σ_i log Pα(x+^(i) | x−^(i), f∆(x−^(i), x+^(i))).
Note that this loss function can be interpreted in two ways: (1) as a conditional autoencoder that encodes the salient information of an edit, given x−, and (2) as an encoder-decoder model that encodes x− and decodes x+ conditioned on the edit representation f∆(x−, x+). In the rest of this section, we discuss our methods to model Pα and f∆ as neural networks.

[1] Existing editing systems, e.g. the grammar checker in text editors and the code refactoring module in IDEs, are powered by domain-specific, manually crafted rules, while we aim for a data-driven, domain-agnostic approach.
[2] We leave the problem of identifying which edit representation f∆(x−, x+) to apply to x′− as interesting future work.

3.1 NEURAL EDITOR
As discussed above, α should use as much information as possible from x−, and hence, an encoder-decoder architecture with the ability to copy from the input is most appropriate.
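As a rough illustration of the objective in Sect. 3 above (our own sketch, not the authors' released code), the loss reduces to a per-token cross-entropy over the editor's predictions for x+, once the decoder has been conditioned on x− and on f∆(x−, x+); the function and argument names below are hypothetical:

import torch.nn.functional as F

def editor_nll(decoder_logits, x_plus_ids):
    # decoder_logits: (num_target_tokens, vocab_size) scores produced by the neural
    #                 editor alpha after conditioning on x_- and on f_delta(x_-, x_+).
    # x_plus_ids:     (num_target_tokens,) gold token ids of the edited version x_+.
    # Averaging this quantity over a batch of N edit pairs corresponds to
    # L = -(1/N) sum_i log P_alpha(x_+^(i) | x_-^(i), f_delta(x_-^(i), x_+^(i))).
    return F.cross_entropy(decoder_logits, x_plus_ids, reduction="mean")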
As we are primarily interested in edits on text and source code in this work, we explored two architectures: a sequenceto-sequence model for text, and a graph-to-tree model for source code, whose known semantics we can leverage both on the encoder as well as on the decoder side. Other classes of edits, for example, image manipulation, would most likely be better served by convolutional neural models. Sequence-to-Sequence Neural Editor First, we consider a standard sequence-to-sequence model with attention (over the tokens of x−). The architecture of our sequence-to-sequence model is similar to that of Luong et al. (2015), with the difference that we use a bidirectional LSTM in the encoder and a token-level copying mechanism (Vinyals et al., 2015) that directly copies tokens into the decoded sequence. Whereas in standard sequence-to-sequence models the decoder is initialized with the representation computed by the encoder, we initialize it with the concatenation of encoder output and the edit representation. We also feed the edit representation as input to the decoder LSTM at each decoding time step. This allows the LSTM decoder to take the edit representation into consideration while generating the output sequence. Graph-to-Tree Neural Editor Our second model aims to take advantage of the additional structure of x− and x+. To achieve this, we combine a graph-based encoder with a tree-based decoder. We use T (x) to denote a tree representation of an element, e.g., the abstract syntax tree (AST) of a fragment of source code. We extend T (x) into a graph form G(x) by encoding additional relationships (e.g., the “next token” relationship between terminal nodes, etc.) (see Figure 2(a)). To encode the elements of G(x−) into vector representations, we use a gated graph neural network (GGNN) (Li et al., 2015). Similarly to recurrent neural networks for sequences (such as biRNNs), GGNNs compute a representation for each node in the graph, which can be used in the attention mechanisms of a decoder. Additionally, we use them to obtain a representation of the full input x−, by computing their weighted average following the strategy of Gilmer et al. (2017) (i.e., computing a score for each node, normalizing scores with a softmax, and using the resulting values as weights). Our tree decoder follows the semantic parsing model of Yin & Neubig (2018), which sequentially generate a tree T (x+) as a series of expansion actions a1 . . . aN . The probability of taking an action is modeled as p(at | a<t, s), where s is the input (a sequence of words in the original semantic parsing setting) and a<t is the partial tree that has been generated so far. The model of Yin & Neubig (2018) mainly uses two types of actions: EXPANDR expands the current non-terminal using a grammar rule, and GENTERM generates a terminal token from a vocabulary or copies a token from s3. The dependence on the partial tree a<t is modeled by an LSTM cell which is used to maintain state throughout the generation procedure. Additionally, the LSTM receives the decoder state used to pick the action at the parent node as an additional input (“parent-feeding”). This process illustrated in Figure 2(b). We extend this model to our setting by replacing the input sequence s by x−; concretely, we condition the decoder on the graph-level representation computed for G(x−). Additionally, we use the change representation f∆(·) as an additional input to the LSTM initial state and at every decoding step. 
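A minimal sketch of how the edit representation conditions the decoder in both editors: it is concatenated to the encoder summary to form the initial state and appended to the input embedding at every decoding step. This is our own illustrative code with hyper-parameters mirroring the configuration in Sect. 4.1; attention and the copying mechanisms are omitted.

import torch
import torch.nn as nn

class EditConditionedDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, enc_dim=128, edit_dim=512, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.init_h = nn.Linear(enc_dim + edit_dim, hid_dim)   # initialize from [encoder; edit repr]
        self.cell = nn.LSTMCell(emb_dim + edit_dim, hid_dim)   # edit repr fed at every step
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, encoder_summary, edit_repr, target_ids):
        # encoder_summary: (B, enc_dim), edit_repr: (B, edit_dim), target_ids: (B, T)
        h = torch.tanh(self.init_h(torch.cat([encoder_summary, edit_repr], dim=-1)))
        c = torch.zeros_like(h)
        logits = []
        for t in range(target_ids.size(1)):
            step_in = torch.cat([self.embed(target_ids[:, t]), edit_repr], dim=-1)
            h, c = self.cell(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                       # (B, T, vocab_size)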
Based on the observation that edits to source code often manipulate the syntax tree by moving expressions around (e.g. by nesting statements in a conditional, or renaming a function while keeping its arguments), we extend the decoding model of Yin & Neubig (2018) by adding a facility to copy entire subtrees from the input. For this, we add a decoder action TREECP. This action is similar to the standard copying mechanism known from pointer networks (Vinyals et al., 2015), but instead of copying only a single token, it copies the whole subtree pointed to. However, adding the TREECP action means that there are many correct generation sequences for a target tree. This problem appears in token-copying as well, but can be easily circumvented by marginalizing over all correct choices at each generation step (by normalizing the probability distribution over allowed actions to sum up those that have the same effect). In the subtree-copying setting, the lengths of action sequences representing different choices may differ. In our implementation we handle this problem during training by simply picking the generation sequence that greedily selects TREECP actions.

3.2 EDIT REPRESENTATION
To compute a useful edit representation, a model needs to focus on the differences between x− and x+. A risk in our framework is that f∆ degenerates into an encoder for x+, turning α into a decoder. To avoid this, we need to follow the standard autoencoder trick, i.e. it is important to limit the capacity of the result of f∆ by generating the edit representation in a low-dimensional space R^N. This acts as a bottleneck and encodes only the information that is needed to reconstruct x+ from x−. We again experimented with both sequence-based and graph-based representations of edits.

Sequence Encoding of Edits Given x− (resp. x+) as a sequence of tokens t−^(0), ..., t−^(T−) (resp. t+^(0), ..., t+^(T+)), we can use a standard (deterministic) diffing algorithm to compute an alignment of tokens in the two sequences. We then use extra symbols ∅ for padding, + for additions, − for deletions, ↔ for replacements, and = for unchanged tokens to generate a single sequence representing both x− and x+. This is illustrated in Figure 3(a). By embedding the three entries in each element of the sequence separately and concatenating their representation, they can be fed into a standard sequence encoder whose final state is our desired edit representation. In this work, we use a biLSTM.

[3] EXPANDR corresponds to the APPLYCONSTR action in the original model of Yin & Neubig (2018). There is also a REDUCE action which marks the end of expanding a non-terminal with a non-deterministic number of child nodes. See Yin & Neubig (2018) for details.

Graph Encoding of Edits As in the graph-to-tree neural editor, we represent x− and x+ as trees T(x−) and T(x+). We combine these trees into a graph representation G(x− → x+) by merging both trees into one graph, using “Removed”, “Added” and “Replaced” edges. To connect the two trees, we compute the same alignment as in the sequence case, connecting leaves that are the same and each replaced leaf to its replacement. We also propagate this information up in the trees, i.e., two inner nodes are connected by “=” edges if all their descendants are connected by “=” edges. This is illustrated in Figure 3(b). Finally, we also use the same “+” / “-” / “↔” / “=” tags for the initial node representation, computing it as the concatenation of the string label (i.e.
token or nonterminal name) and the embedding of the tag. To obtain an edit representation, we use a GGNN unrolled for a fixed number of timesteps and again use the weighted averaging strategy of Gilmer et al. (2017). 4 EVALUATION Evaluating an unsupervised representation learning method is challenging, especially for a newly defined task. Here, we aim to evaluate the quality of the learned edit representations with a series of qualitative and quantitative metrics on natural language and source code. 4.1 DATASETS AND CONFIGURATION Natural Language Edits We use the WikiAtomicEdits (Faruqui et al., 2018) dataset of pairs of short edits on Wikipedia articles. We sampled 1040K edits from the English insertion portion of the dataset and split the samples into 1000K/20K/20K train-valid-test sets. Source Code Edits To obtain a dataset for source code, we clone a set of 54 C# projects on GitHub and collected a GitHubEdits dataset (see Appendix A for more information). We selected all changes in the projects that are no more than 3 lines long and whose surrounding 3 lines of code before and after the edited lines have not been changed, ensuring that the edits are separate and short. We then parsed the two versions of the source code and take as x− and x+ the code that belongs to the top-most AST node that contains the edited lines. Finally, we remove trivial changes such variable renaming, changes within comments or formatting changes. Overall, this yields 111 724 edit samples. For each edit we run a simple C# analysis to detect all variables and normalize variable names such that each unique variable within x− and x+ has a unique normalized name V0, V1, etc. This step is necessary to avoid the sparsity of data induced by the variety of different identifier naming schemes. We split the dataset into 91,372 / 10,176 / 10,176 samples as train/valid/test sets. Additionally, we introduce a labeled dataset of source code edits by using C# “fixers”. Fixers are small tools built on top of the C# compiler, used to perform common refactoring and modernization tasks (e.g., using new syntactic sugar). We selected 16 of these fixers and ran them on 6 C# projects to generate a small C#Fixers dataset of 2,878 edit pairs with known semantics. We present descriptions and examples of each fixer in Appendix A. Configuration Throughout the evaluation we use a fixed size of 512 for edit representations. The size of word embeddings and hidden states of encoding LSTMs is 128. The dimensionality of the decoding LSTM is set to 256. Details of model configuration can be found in Sect. A. When generating the target x+, our neural editor model can optionally take as input the context of the original input x− (e.g., the preceding and succeeding code segments surrounding x−), whose information could be useful for predicting x+. For example, in source code edits the updated code snippet x+ may reuse variables defined in the preceding snippet. In our code experiments, we use a standard bidirectional LSTM network to encode the tokenized 3 lines of code before and after x− as context. The encoded context is used to initialize the decoder, and as an additional source for the pointer network to copy tokens from. 4.2 QUALITY OF EDIT REPRESENTATIONS First, we study the ability of our models to encode edits in a semantically meaningful way. Visualizing Edits on Fixers Data In a first experiment, we train our sequential neural editor model on our GitHubEdits data and then compute representations for the edits generated by the C# fixers. 
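As an aside, the diff-based alignment that the sequence encoding of edits (Sect. 3.2) starts from can be sketched with Python's standard difflib; this is our own illustration (the paper does not name a specific diffing algorithm), with ASCII stand-ins for the ∅, +, −, ↔ and = symbols:

import difflib

def aligned_edit_sequence(x_minus, x_plus):
    # Align two token lists and emit (t_minus, t_plus, tag) triples.
    triples = []
    matcher = difflib.SequenceMatcher(a=x_minus, b=x_plus, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            for a, b in zip(x_minus[i1:i2], x_plus[j1:j2]):
                triples.append((a, b, "="))
        elif op == "delete":
            for a in x_minus[i1:i2]:
                triples.append((a, "<pad>", "-"))
        elif op == "insert":
            for b in x_plus[j1:j2]:
                triples.append(("<pad>", b, "+"))
        else:  # "replace": pad the shorter side so both spans stay aligned
            left, right = x_minus[i1:i2], x_plus[j1:j2]
            for k in range(max(len(left), len(right))):
                a = left[k] if k < len(left) else "<pad>"
                b = right[k] if k < len(right) else "<pad>"
                triples.append((a, b, "<->"))
    return triples

# e.g. aligned_edit_sequence("var x = Foo ( ) ;".split(), "var x = Bar ( ) ;".split())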
A t-SNE visualization (Maaten & Hinton, 2008) of the encodings is shown in Figure 4. For this visualization, we randomly selected 100 examples from the edits of each fixer (if that fixer has more than 100 samples) and discarded fixer categories with less than 40 examples. Readers are referred to Appendix A for detailed descriptions of each fixer category. We find that our model produces dense clusters for simple or distinctive code edits, e.g. fixer RCS1089 (using the ++ or -- unary operators instead of a binary operator (e.g., i = i + 1 → i++), and fixer CA2007 (adding .ConfigureAwait(false) for await statements). We also analyzed cases where (1) the edit examples from the same fixer are scattered, or (2) the clusters of different fixers overlap with each other. For example, the fixer RCS1077 covers 12 different aspects of optimizing LINQ method calls (e.g., type casting, counting, etc.), and hence its edits are scattered. On the other hand, fixers RCS1146 and RCS1206 yield overlapping clusters, as both fixers change code to use the ?. operator. Fixers RCS1207 (change a lambda to a method group, e.g. foo(x=>bar(x)) → foo(bar)) and RCS1021 (simplify lambda expressions, e.g. foo(x=>{return 4;}) → foo(x=>4)) are similar, as both inline lambda expressions in two different ways. Analysis yields that the representation is highly dependent on surface tokens. For instance, IDE004 (removing redundant type casts, e.g. (int)2 → 2) and RCS1207 (removing explicit argument lists) yield overlapping clusters, as both involve deleting identifiers wrapped by parentheses. Human Evaluation on Encoding Natural Language WikiAtomicEdits In a second experiment, we test how well neighborhoods in edit representation space correspond to semantic similarity. We computed the five nearest neighbors of 200 randomly sampled seed edits from our training set, using both our trained sequence-to-sequence editing model with sequential edit encoder, as well as a simple bag-of-words baseline based on TF-IDF scores. We then rated the quality of the retrieved neighbors on a scale of 0 (“unrelated edit”), 1 (“similar edit”) and 2 (“semantically or syntactically same edit”). Details of the annotation schema is included in Sect. E. We show the (normalized) discounted cumulative gain (DCG, Manning et al. (2008)) for the two models at the top of Tab. 1 (higher is better). The relevance scores indicate that our neural model clearly outperforms the simplistic baseline. Tab. 1 also presents two example edits with their nearest neighbors. Example 1 shows that the neural edit models succeeded in representing syntactically and semantically similar edits, while the bag-of-words baseline relies purely on surface token overlap. Interestingly, we also observed that the edit representations learned by the neural editing model on WikiAtomicEdits are somewhat sensitive to position, i.e. the position of the inserted tokens in both the seed edit and the nearest neighbors is similar. This is illustrated in Example 2, where the second (“senegalese striker”) and the third (“republican incumbent”) nearest neighbors returned by the neural model have similar editing positions as the seed edit, while they are semantically diverse. 4.3 EDIT ENCODER PERFORMANCE To evaluate the performance of our two edit encoders discussed in Sect. 3.2 and disentangle it from the choice of neural editor, we train various combinations of our neural editor model and manually evaluate the quality of the edit representation. 
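The DCG/NDCG@5 numbers in Tab. 1 above (and Tab. 2 below) are derived from these 0-2 human ratings. A generic sketch of the metric follows; note that the paper does not spell out its exact aggregation, so the absolute values in the tables may be normalized or summed differently:

import math

def dcg_at_k(relevances, k=5):
    # Standard DCG: rel_1 + sum_{p>=2} rel_p / log2(p), with ratings in {0, 1, 2}.
    rels = relevances[:k]
    return sum(rel if i == 0 else rel / math.log2(i + 1) for i, rel in enumerate(rels))

def ndcg_at_k(relevances, k=5):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Averaged over the 200 sampled seed edits:
# mean_dcg = sum(dcg_at_k(r) for r in all_ratings) / len(all_ratings)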
More specifically, we trained our neural editor models on GitHubEdits and randomly sampled 200 seed edits and computed their 3 nearest neighbors using each end-to-end model. We then rated the resulting groups using the same 0-2 scale as above. The resulting relevance scores are shown in Tab. 2.

Table 1: Natural language human evaluation results and 3 nearest neighbors ([inserted text] marked). Example 1: the neural editing model returns syntactically and semantically similar edits. Example 2: neural edit representations are sensitive to position.
Bag of Words Model: DCG/NDCG@5 = 9.3 / 67.3%; DCG@5 by edit size: 1: 14.7, 2-3: 10.8, >3: 5.4
Seq2Seq – Seq Edit Encoder: DCG/NDCG@5 = 13.5 / 90.3%; DCG@5 by edit size: 1: 16.2, 2-3: 12.9, >3: 12.4
Example 1 (seed edit): [daniel james nava ( born february 22 , 1983 ) is an american professional baseball outfielder] nava is only the fourth player in mlb history to hit a grand slam in his first major league at bat and the second to do it on the first pitch .
NN-1 (Bag of Words): he batted .302 with 73 steals , and received a september call - up to the major leagues [as an outfielder] .
NN-1 (Seq Edit Encoder): [arthur ray briles ( born december 3 , 1955 ) is a former american football coach and] his most recent head coaching position was at baylor university , a position he held from the 2008 season through the 2015 season .
NN-2 (Bag of Words): he played [as an outfielder] for the hanshin tigers .
NN-2 (Seq Edit Encoder): [jonathan david disalvatore ( born march 30 , 1981 ) is a professional ice hockey] he was selected by the san jose sharks in the 4th round ( 104th overall ) of the 2000 nhl entry draft .
NN-3 (Bag of Words): in 2012 , his senior at oak mountain , dahl had a .412 batting average , 34 runs batted in ( rbis ) , and 18 stolen bases [as an outfielder .]
NN-3 (Seq Edit Encoder): [professor paul talalay ( born march 31 , 1923 ) is the john jacob abel] distinguished service professor of pharmacology and director of the laboratory for molecular sciences at johns hopkins school of medicine in baltimore .
Example 2 (seed edit): she , along with her follow artist carolyn mase studied with [impressionist landscape painter] john henry twachtman at the art students league of new york .
NN-1 (Bag of Words): his brother was draughtsman william daniell and his uncle was [landscape painter] thomas daniell .
NN-1 (Seq Edit Encoder): the first painting was a portrait of a young girl , emerantia van beresteyn , the sister of [the landscape painter] nicolaes van beresteyn , the later founder of half of this hofje .
NN-2 (Bag of Words): william james linton ( december 7 , 1812 - december 29 , 1897 ) was an english - born american wood engraver , [landscape painter ,] political reformer and author of memoirs , novels , poetry and non-fiction .
NN-2 (Seq Edit Encoder): he was the club ’s top scorer with 22 goals in all competitions , one more than [senegalese striker] lamine diarra , who left the club at the end of the season .
NN-3 (Bag of Words): early on , hopper modeled his style after chase and french [impressionist] masters douard manet and edgar degas .
NN-3 (Seq Edit Encoder): caforio ” aggressively attacked ” his opponent , [republican incumbent] steve knight , for his delayed response to the leak .

Comparing the sequential edit encoders trained with Seq2Seq and Graph2Tree editors, we found that the edit encoder trained with the Graph2Tree objective performs better. We hypothesize that this is because the Graph2Tree editor better captures structural-level information about an edit. For instance, Example 1 in Tab. 3 removes explicit type casting.
The Seq2Seq editor has difficulty distinguishing this type of edit, confusing it with changes of lambda expressions to method groups (1st and 2nd nearest neighbors), since both types of edits involve removing paired parentheses. Surprisingly, we found that the graph-based edit encoder does not outperform the sequence-based encoder. However, we observe that the graph edit encoder in many cases tends to better capture high-level and abstract structural edit patterns. Example 2 in Tab. 3 showcases a seed edit that swaps two consecutive declarations, which corresponds to swapping the intermediate Expression nodes representing each statement on the underlying AST. In this case, the graph edit encoder is capable of grouping semantically similar edits, while it seems to be more difficult for the sequential edit encoder to capture the edit pattern. On the other hand, we found that the graph edit encoder often fails to capture simpler, lexical-level edits (e.g., Example 1). This might suggest that terminal node information is not effectively propagated, an interesting issue worth future investigation.

4.4 PRECISION OF NEURAL EDITORS
Finally, we evaluate the performance of our end-to-end system by predicting the edited input x+ given x− and the edit representation. We are interested in answering two research questions: First, how well can our neural editors generate x+ given the gold-standard edit representation f∆(x−, x+)? Second, and perhaps more interestingly, can we use the representation of a similar edit f∆(x′−, x′+) to generate x+ by applying that edit to x− (i.e. x̂+ = α(x−, f∆(x′−, x′+)))? To answer the first question, we trained our neural editor models on the WikiAtomicEdits and the GitHubEdits datasets, and evaluate the performance of encoding and applying edits on the test sets. For completeness, we also evaluated the performance of our neural editor models with a simple “Bag-of-Edits” edit encoding scheme, where f∆(x−, x+) is modeled as the concatenation of two vectors, each representing the sum of the embeddings of added and deleted tokens in the edit, respectively. This edit encoding method is reminiscent of the model used in Guu et al. (2017) for solving a different task of language modeling by marginalizing over latent edits, which we will elaborate on in Sect. 5. Tab. 4 lists the evaluation results. With our proposed sequence- and graph-based edit encoders, our neural editor models achieve reasonable end-to-end performance, surpassing systems using bag-of-edits representations. This is because many edits are context-sensitive and position-sensitive, requiring edit representation models that go beyond the bag-of-edits scheme to capture those effects (more analysis is included in Appendix B). Interestingly, on the GitHubEdits dataset, we find that the Seq2Seq editor with the sequential edit encoder registers the best performance. However, it should be noted that in this set of experiments, we encode the gold-standard edit f∆(x−, x+) to predict x+. As we will show later, better performance with the gold-standard edit does not necessarily imply a better (more generalizable) edit representation. Nevertheless, we hypothesize that the higher accuracy of the Seq2Seq editor is due to the fact that a significant proportion of edits in this dataset are small and primarily syntactically simple. Indeed, we find that 69% of test examples have a token-level edit distance of less than 5.
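For concreteness, the “Bag-of-Edits” baseline described above can be sketched as follows (our own illustration; the embedding size is arbitrary):

import torch
import torch.nn as nn

class BagOfEditsEncoder(nn.Module):
    # f_delta(x_-, x_+) = concatenation of the summed embeddings of added tokens
    # and the summed embeddings of deleted tokens.
    def __init__(self, vocab_size, emb_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)

    def forward(self, added_ids, deleted_ids):
        # added_ids / deleted_ids: 1-D LongTensors of token ids that a diff marks
        # as inserted into x_+ or removed from x_-.
        zero = self.embed.weight.new_zeros(self.embed.embedding_dim)
        added = self.embed(added_ids).sum(dim=0) if added_ids.numel() else zero
        deleted = self.embed(deleted_ids).sum(dim=0) if deleted_ids.numel() else zero
        return torch.cat([added, deleted], dim=-1)

Because the representation only sums token embeddings, it is by construction insensitive to where in x− an edit occurs, which is exactly the weakness that the context- and position-sensitivity analysis above points to.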
To answer the second question, we use the trained neural editors from the previous experiment, and test their performance in a “one-shot” transfer learning scenario. Specifically, we use our high-quality C#Fixers dataset, and for each fixer category F of semantically similar edits, we randomly select a seed edit {x′− → x′+} ∈ F , and use its edit representation f∆(x′−,x′+) to predict the updated code for all examples in F , i.e., we have x̂+ = α(x−, f∆(x′−,x′+)),∀ {x− → x+} ∈ F . This task is highly non-trivial, since a fixer category could contain more than hundreds of edit examples collected from different C# projects. Therefore, it requires the edit representations to generalize and transfer well, while being invariant of local lexical information like specific method names. To make the experimental evaluation more robust to noise, for each fixer category F , we randomly sample 10 seed edit pairs {x′− → x′+}, compute their edit representations and use them to predict the edited version of the examples in F and evaluate accuracy of predicting the exact final version. We then report the best score among the 10 seed representations as the performance metric on F . Tab. 5 summarizes the results and also reports the upper bound performance when using the goldstandard edit representation f∆(x−,x+) to predict x+, and an approximation of the “lower bound” accuracies using pre-trained Seq2Seq and Graph2Tree models without edit encoders. We found that our neural Graph2Tree editor with the sequential edit encoder significantly outperforms the Seq2Seq editor, even though Seq2Seq performs better when using gold-standard edit representations. This suggest that the edit representations learned with the Graph2Tree model generalize better, especially for edits discussed in Sect. 4.2 that involve syntactic variations like RCS1021 (lambda expression simplification, 7.8% vs. 30.7% for Seq2Seq and Graph2Tree), and RCS1207 (change lambdas to method groups, 7.1% vs. 26.2%). Interestingly, we also observe that Seq2Seq outperforms the Graph2Tree model for edits with trivial surface edit sequences, where the Graph2Tree model does not have a clear advantage. For example, on RCS1015 (use nameof operator, e.g. Exception("x")→ Exception(nameof(x))), the accuracies for Seq2Seq and Graph2Tree are 40.0% (14/35) and 28.6% (10/35), resp. We include more analysis of the results in Appendix C. 5 RELATED WORK Edits have recently been considered in NLP, as they represent interesting linguistic phenomena in language modeling and discourse (Faruqui et al., 2018; Yang et al., 2017a). Specifically, Guu et al. (2017) present a generative model of natural language sentences via editing prototypes. Our work shares with Guu et al. (2017) in that (1) the posterior edit encoding model in Guu et al. (2017) is similar to our baseline “bag-of-edits” encoder in Sec. 4.4, and (2) the sequence-to-sequence sentence generation model given the prototype and edit representation is reminiscent of our Seq2Seq editor. In contrast, our work directly focuses on discriminative learning of representing edits and applying the learned edits for both sequential (NL) and structured (code) data. Another similar line of research is “retrieve-and-edit” models for text generation (Hashimoto et al., 2018), where given an input data x, the target prediction y is generated by editing a similar target y′ that is retrieved based on the similarity between its source x′ and the input x. 
While these models typically require an “editor” component to generate the output by exploiting the difference between similar inputs, they usually use the simpler bag-of-edits representations (Wu et al., 2019), or implicitly capture it via end-to-end neural networks (Contractor et al., 2018). To our best knowledge, there is not any related work that classifies or otherwise explicitly represents the differences over similar input, with the exception of differential recurrent neural networks used for action recognition in videos (Veeriah et al., 2015; Zhuang et al., 2018). This is a substantially different task, as the data includes a temporal component as well. Source code edits are a widely studied artifact. Specialized software, such as git, is widely used to store source code revision histories. Nguyen et al. (2013) studied the repetitiveness of source code changes by identifying identical types of changes using a deterministic differencing tool. In contrast, we employ on a neural network to cluster similar changes together. Rolim et al. (2017) use such clusters to synthesize small programs that perform the edit. The approach is based on Rolim et al. (2018) extract manually designed syntactic features from code and cluster over multiple changes to find repeatable edit rules. Similarly, Paletov et al. (2018) extract syntactic features specifically targeting edits in cryptography API protocols. In this work, we try to avoid hand-designed features and allow a neural network to learn the relevant aspects of a change by directly giving as input the original and final version of a changed code snippet. Modeling tree generation with machine learning is an old problem that has been widely studied in NLP. Starting with Maddison & Tarlow (2014), code generation has also been considered as a tree generation problem. Close to our work is the decoder of Yin & Neubig (2017) which we use as the basis of our decoder. The work of Chen et al. (2018) is also related, since it provides a tree-to-tree model, but focuses on learning a single translation tasks and cannot be used directly to represent multiple types of edits. Both Yin & Neubig (2017) and Chen et al. (2018) have copying mechanism for single tokens, but our subtree copying mechanism is novel. Autoencoders (see Goodfellow et al. (2016) for an overview) have a long history in machine learning. Variational autoencoders (Kingma & Welling, 2013) are similar to autoencoders but instead of focusing on the learned representation, they aim to create accurate generative probabilistic models. Most (variational) autoencoders focus on encoding images but there have been works that autoencode sequences, such as text (Dai & Le, 2015; Bowman et al., 2015; Yang et al., 2017b) and graphs (Simonovsky & Komodakis, 2018; Liu et al., 2018). Conditional variational autoencoders (Sohn et al., 2015) have a related form to our model (with the exception of the KL term), but are studied as generative models, whereas we are primarily interested in the edit representation. 6 DISCUSSION & CONCLUSIONS In this work, we presented the problem of learning distributed representation of edits. We believe that the dataset of edits is highly relevant and should be studied in more detail. While we have presented a set of initial models and metrics on the problem and obtained some first promising results, further development in both of these areas is needed. We hope that our work inspires others to work on this interesting problem in the future. 
ACKNOWLEDGMENTS
We would like to thank Rachel Free for her insightful comments and suggestions.

A DATASETS AND CONFIGURATION
WikiAtomicEdits We randomly sampled 1040K insertion examples from the English portion of the WikiAtomicEdits (Faruqui et al., 2018) dataset, with train, development and test splits of 1000K, 20K and 20K.
GitHubEdits We cloned the top 54 C# GitHub repositories based on their popularity (Tab. 8). For each commit in the master branch, we collect the previous and updated versions of the source code, and extract all consecutive lines of edits that are smaller than three lines, and with at least three preceding and successive lines that have not been changed. We then filter trivial changes such as variable and identifier renaming, and changes that happened within comments. We also limit the number of tokens for each edit to be smaller than 100, and down-sample edits whose frequency is larger than 30. Finally, we split the dataset by commit ids, ensuring that there are no edits in the training and testing (development) sets coming from the same commit. Tab. 6 lists some statistics of the dataset.
C#Fixers We selected 16 C# fixers from Roslyn [4] and Roslynator [5], and ran them on 6 C# projects to generate a small, high-quality C# fixers dataset of 2,878 edit pairs with known semantics. Table 7 lists the detailed descriptions for each fixer category. More information can be found at https://github.com/JosefPihrt/Roslynator/blob/master/src/Analyzers/README.md.
Network Configuration Throughout the experiments, we use a fixed edit representation size of 512. The dimensionality of the word embeddings, of the hidden states of the encoder LSTMs, as well as of the gated graph neural network is 128, while the decoder LSTM uses a larger hidden size of 256. For the graph-based edit encoder, we used a two-layer graph neural network, with 5 information propagation steps at each layer. During training, we performed early stopping, and chose the best model based on perplexity scores on the development set. During testing, we decode a target element x+ using a beam size of 5.
[4] http://roslyn.io
[5] https://github.com/JosefPihrt/Roslynator

B CLUSTERING EXPERIMENTS
To qualitatively evaluate the quality of the learned edit representations, we use the models trained on the WikiAtomicEdits and GitHubEdits datasets to cluster natural language and code edits. We run the K-Means clustering algorithm on 0.5 million sampled edits from WikiAtomicEdits, and all 90K code edits from GitHubEdits, producing 50,000 and 20,000 clusters for the two datasets, respectively. Tab. 9 and Tab. 10 list some example clusters on the WikiAtomicEdits and GitHub datasets, respectively. Due to the size of the clusters, we omit outliers and present distinctive examples from each cluster. On the WikiAtomicEdits dataset, we found clusters whose examples are semantically and syntactically similar. More interestingly, on the source code data, we find representative clusters that relate to idiomatic patterns and best practices of programming. The clustering results produced by our model would be useful for program synthesis toolkits to generate interpretable code refactoring rules, which we leave as interesting future work. Finally, we remark that the clustering results indicate that the encoding of edits is context-sensitive and position-sensitive for both natural language and source code data. For instance, the WikiAtomicEdits examples we present in Tab. 9 clearly indicate that semantically similar insertions also share similar editing positions.
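A sketch of the clustering setup in this appendix, assuming the edit representations have already been computed (the paper does not say which K-Means implementation was used; at this scale a mini-batch variant would likely be needed, which is what we use here):

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_edits(edit_reprs, num_clusters, seed=0):
    # edit_reprs: (num_edits, 512) array of edit representations f_delta(x_-, x_+).
    km = MiniBatchKMeans(n_clusters=num_clusters, random_state=seed)
    assignments = km.fit_predict(np.asarray(edit_reprs))
    return assignments, km.cluster_centers_

# e.g. for the GitHubEdits setting: labels, centers = cluster_edits(code_edit_reprs, 20000)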
This is even more visible in code edits (Tab. 10). For instance, in the first example in Tab. 10, Equal() can be changed to Empty() only in the Assert namespace (i.e., the context). These examples demonstrate that it is important for an edit encoder to capture the contextual and positional information in edits, a property that cannot be captured by simple “bag-of-edits” edit representation methods. C BREAK-DOWN ANALYSIS OF TRANSFER LEARNING RESULTS D IMPACT OF TRAINING SET SIZE To evaluate the data efficiency of our proposed approach, we tested the end-to-end performance of our neural editor model (Sect. 4.4, Tab. 4) with varying amount of training data. Tab. 12 lists the results. We found both Graph2Tree and Seq2Seq editors are relatively data efficient. They registered around 90% of the accuracies achieved using the full training set with only 60% of the training data. E DETAILS OF HUMAN EVALUATION As discussed in Sect. 4.2, we performed human evaluation to rate the qualities of neighboring edits given a seed edit. The annotation instructions on GithubEdits and WikiAtomicEdits datasets are listed below. The annotation was carried out by three authors of this paper, and we anonymized the source of systems that generated the output. The three-way Fleiss’ kappa inter-rater agreement is κ = 0.55, which shows moderate agreement (Artstein & Poesio, 2008), an agreement level that is also used in other annotation tasks in NLP (Faruqui & Das, 2018). Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Examples • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.GetMembers().Where(V2 => V2.Kind == SymbolKind.Property).Single(); x+ var V0 = V1.GetMembers().Single(V2 => V2.Kind == SymbolKind.Property); • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− string V0 = V1 == null ? string.Empty : VAR1.ToString(); x+ string V0 = V1?.ToString() ?? string.Empty; • Seed Edit x− Assert.True(Directory.Exists(V0) == V1); x+ Assert.Equal(Directory.Exists(V0), V1); • Neighbor x− Assert.True(V0.GetString(V0.GetBytes(LITERAL)) == V1.ContainingAssembly.Identity.CultureName); x+ Assert.Equal(V0.GetString(VAR0.GetBytes(LITERAL)), V1.ContainingAssembly.Identity.CultureName); Rating 1 Syntactically or Semantically Related The seed and neighboring edits share functionally or syntactically similar patterns. Examples The following edit is a related edit of the first example above, as it applies the same simplification (.Where(COND).Func() to .Func(COND)), but for FirstOrDefault instead of Single: • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.Where(V2 => V3.ReportsTo == V2.EmployeeID).FirstOrDefault(); x+ var V0 = V1.FirstOrDefault(V2 => V3.ReportsTo == V2.EmployeeID); The following edit is a related edit of the second example above, as it also replaces a ternary expression for null checking with the ?. and ?? operators: • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− var V0 = V1 != null ? V1.ToList() : new List<TextSpan>(); x+ var V0 = V1?.ToList() ?? 
new List<TextSpan>(); We also considered pairs such as the following related, since they share similar syntactic structure • Seed Edit x− V0.State = V1; x+ V0.SetState(VAR1); • Neighbor x− V0.Quantity = V1; x+ V0.SetQuantity(V1); Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria. Table 14: Annotation Instruction for WikiAtomEdits Data Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Seed Edit Neighbor chaz guest ( born I1961J) was born in niagra falls , . . . , a decorated hero in wwii in europe , including the purple heart . randal l. schwartz ( born november 22 , I1961J) , also known as merlyn , is an american author , system administrator and programming consultant. he was elected to donegal county council for sinn fin in 1979 , and held his seat until his death Iat age 56J . davis graduated from high school in january 1947 , immediately enrolling at wittenberg college in rural ohio Iat age 17J . IdrorJ feiler served as a paratrooper in the israel defense forces . InagaurJ fort - sandy fort ; centrally located ; 2nd century old ; witnessed many battles ; lofty walls & spacious campus ; having many palaces & temples inside . the original old bay house , home of the chief factor , still exists Iand is now part of the fort vermilion national historic siteJ . the population was 6,400 at the 2010 census Iand is part of the st. louis metropolitan areaJ . Rating 1 Syntactically Related The changed constituents in the seed and the neighboring edit are applied to the similar positions of the original sentence, and they play similar syntactic roles. This includes examples like adding a disfunction, adding a complement, prepositional clause or other syntactic constructs with similar phrases or language structures. For example, Seed Edit Neighbor the douro fully enters portuguese territory just after the confluence with the gueda river ; once the douro enters portugal , major population centres are less frequent Ialong the riverJ . she made a brief return to the screen in ” parrish ” ( 1961 ) , playing the supporting role of mother which received little attention Iby the pressJ . when they found it , they discovered a group of pagumon living there instead who immediately proceeded to treat the digidestined as honored guests I, saying that pagumon are the fresh form of koromonJ . in 2012 slote and his baseball book ” jake ” were the subject of an espn ( 30 for 30 ) short documentary in which slote describes his writing process and reads from the book I, saying it is his best writingJ . the aircraft was intended to be Icertified andJ supplied as a complete ready - to - fly - aircraft for the flight training and aerial work markets . in june reinforcements finally did arrive when Iprovincial andJ militia units from new york , new jersey , and new hampshire were sent up from fort edward by general daniel webb . Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria.
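For reference, the three-way Fleiss' kappa reported in Appendix E above (κ = 0.55) can be computed from per-item rating counts as follows; this is a generic implementation, not the authors' script:

import numpy as np

def fleiss_kappa(rating_counts):
    # rating_counts: (num_items, num_categories) array; entry [i, j] is how many of
    # the raters assigned rating j (here 0, 1 or 2) to item i. Assumes the same
    # number of raters per item (three in Appendix E).
    counts = np.asarray(rating_counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / counts.sum()                          # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# e.g. three raters, two items: fleiss_kappa([[0, 1, 2], [3, 0, 0]])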
1. What is the main contribution of the paper regarding text revisions and code changes?
2. What are the strengths of the proposed approach, particularly in the utilization of the bidirectional LSTM and gated graph neural network?
3. What are the weaknesses of the paper, especially in the evaluation process?
4. How does the reviewer assess the significance and motivation of the new task of predicting atomic edits?
5. What are the potential applications of accurately predicting atomic edits?
Review
Review
This paper looks at learning to represent edits for text revisions and code changes. The main contributions are as follows:
* They define a new task of representing and predicting textual and code changes
* They make available a new dataset of code changes (the text edit dataset was already available) with labels of the type of change
* They try simple neural network models that show good performance in representing and predicting the changes

The NLP community has recently defined the problem of predicting atomic edits for text data (Faruqui et al., EMNLP 2018, cited in the paper), and that is the source of their Wikipedia revision dataset. Although it is an interesting problem, it is not immediately clear from the Introduction of this paper what would be enabled by accurate prediction of atomic edits (i.e. simple insertions and deletions), and I hope the next version will elaborate on the motivation and significance of this new task. The "Fixer" dataset that they created is interesting. Those edits supposedly make the code better, so modeling those edits could lead to "better" code. Having that as labeled data enables a clean and convincing evaluation task of predicting similar edits.

The paper focuses on the novelty of the task and the dataset, so the models are simple variations of the existing bidirectional LSTM and the gated graph neural network. Because much of the input text (or code) does not change, the decoder gets to directly copy parts of the input. For code data, the AST is used instead of the flat text of the code. These small changes seem reasonable and work well for this problem.

Evaluation is not easy for this task. For the task of representing the edits, they show visualizations of the clusters of similar edits and conduct a human evaluation to see how similar these edits actually are. This human evaluation is not described in detail, as they do not say how many people rated the similarity, who they were (how they were recruited), how they were instructed, and what the inter-rater agreement was. The edit prediction evaluation is done well, but it is not clear what it means when they say better prediction performance does not necessarily mean it generalizes better. That may be true, but then without another metric for generalization, one cannot say that better performance means worse generalization.

Despite these minor issues, the paper contributes a significantly novel task, dataset, and results. I believe it will lead to interesting future research in representing text and code changes.
ICLR
Title Learning to Represent Edits Abstract We introduce the problem of learning distributed representations of edits. By combining a “neural editor” with an “edit encoder”, our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem. 1 INTRODUCTION One great advantage of electronic storage of documents is the ease with which we can edit them, and edits are performed in a wide variety of contents. For example, right before a conference deadline, papers worldwide are finalized and polished, often involving common fixes for grammar, clarity and style. Would it be possible to automatically extract rules from these common edits? Similarly, program source code is constantly changed to implement new features, follow best practices and fix bugs. With the widespread deployment of (implicit) version control systems, these edits are quickly archived, creating a major data stream that we can learn from. In this work, we study the problem of learning distributed representations of edits. We only look at small edits with simple semantics that are more likely to appear often and do not consider larger edits; i.e., we consider “add definite articles” rather than “rewrite act 2, scene 3.” Concretely, we focus on two questions: i) Can we group semantically equivalent edits together, so that we can automatically recognize common edit patterns? ii) Can we automatically transfer edits from one context to another? A solution to the first question would yield a practical tool for copy editors and programmers alike, automatically identifying the most common changes. By leveraging tools from program synthesis, such groups of edits could be turned into interpretable rules and scripts (Rolim et al., 2017). When there is no simple hard rule explaining how to apply an edit, an answer to the second question would be of great use, e.g., to automatically rewrite natural language following some stylistic rule. We propose to handle edit data in an autoencoder-style framework, in which an “edit encoder” f∆ is trained to compute a representation of an edit x− → x+, and a “neural editor” α is trained to construct x+ from the edit representation and x−. This framework ensures that the edit representation is semantically meaningful, and a sufficiently strong neural editor allows this representation to not be specific to the changed element. We experiment with various neural architectures that can learn to represent and apply edits and hope to direct the attention of the research community to this new and interesting data source, leading to better datasets and stronger models. Briefly, the contributions of our paper are: (a) in Sect. 2, we present a new and important machine learning task on learning representations of edits (b) we present a family of ∗Work done as an intern in Microsoft Research, Cambridge, UK. models that capture the structure of edits and compute efficient representations in Sect. 3 (c) we create a new source code edit dataset, and release the data extraction code at https://github.com/Microsoft/msrc-dpu-learning-to-represent-edits and the data at http://www.cs.cmu.edu/˜pengchey/githubedits.zip. 
(d) we perform a set of experiments on the learned edit representations in Sect. 4 for natural language text and source code and present promising empirical evidence that our models succeed in capturing the semantics of edits. 2 TASK In this work, we are interested in learning to represent and apply edits on discrete sequential or structured data, such as text or source code parse trees1. Figure 1 gives a graphical overview of the task, described precisely below. Edit Representation Given a dataset of edits {x(i)− → x (i) + }Ni=1, where x (i) − is the original version of some object and x(i)+ its edited form (see upper half of Figure 1 for an example), our goal is to learn a representation function f∆ that maps an edit operation x− → x+ to a real-valued edit representation f∆(x−,x+) ∈ Rn. A desired quality of f∆ is for the computed edit representations to have the property that semantically similar edits have nearby representations in Rn. Having distributed representations also allows other interesting downstream tasks, e.g., unsupervised clustering and visualization of similar edits from large-scale data (e.g. the GitHub commit stream), which would be useful for developing human-assistance toolkits for discovering and extracting emerging edit patterns (e.g. new bug fixes or emerging “best practices” of coding). Neural Editor Given an edit representation function f∆, we want to learn to apply edits in a new context. This can be achieved by learning a neural editor α that accepts an edit representation f∆(x−,x+) and a new input x′− and generates x ′ +. 2 This is illustrated in the lower half of Figure 1. 3 MODEL We cast the edit representation problem as an autoencoding task, where we aim to minimize the reconstruction error of α for the edited version x+ given the edit representation f∆(x−,x+) and the original version x−. By limiting the capacity of f∆’s output and allowing the model to freely use information about x−, we are introducing a “bottleneck” that forces the overall framework to not simply treat f∆(x−,x+) as an encoder of x+. The main difference from traditional autoencoders is that in our setup, an optimal solution requires to re-use as much information as possible from x− 1Existing editing systems, e.g. the grammar checker in text editors and code refactoring module in IDEs, are powered by domain-specific, manually crafted rules, while we aim for a data-driven, domain-agnostic approach. 2We leave the problem of identifying which edit representation f∆(x−,x+) to apply to x′− as interesting future work. to make the most of the capacity of f∆. Formally, given a probabilistic editor function Pα such as a neural network and a dataset {x(i)− → x (i) + }Ni=1, we seek to minimize the negative likelihood loss L = − 1 N ∑ i logPα(x+ | x−, f∆(x−,x+)). Note that this loss function can be interpreted in two ways: (1) as a conditional autoencoder that encodes the salient information of an edit, given x− and (2) as an encoder-decoder model that encodes x− and decodes x+ conditioned on the edit representation f∆(x−,x+). In the rest of this section, we discuss our methods to model Pα and f∆ as neural networks. 3.1 NEURAL EDITOR As discussed above, α should use as much information as possible from x−, and hence, an encoderdecoder architecture with the ability to copy from the input is most appropriate. 
As we are primarily interested in edits on text and source code in this work, we explored two architectures: a sequenceto-sequence model for text, and a graph-to-tree model for source code, whose known semantics we can leverage both on the encoder as well as on the decoder side. Other classes of edits, for example, image manipulation, would most likely be better served by convolutional neural models. Sequence-to-Sequence Neural Editor First, we consider a standard sequence-to-sequence model with attention (over the tokens of x−). The architecture of our sequence-to-sequence model is similar to that of Luong et al. (2015), with the difference that we use a bidirectional LSTM in the encoder and a token-level copying mechanism (Vinyals et al., 2015) that directly copies tokens into the decoded sequence. Whereas in standard sequence-to-sequence models the decoder is initialized with the representation computed by the encoder, we initialize it with the concatenation of encoder output and the edit representation. We also feed the edit representation as input to the decoder LSTM at each decoding time step. This allows the LSTM decoder to take the edit representation into consideration while generating the output sequence. Graph-to-Tree Neural Editor Our second model aims to take advantage of the additional structure of x− and x+. To achieve this, we combine a graph-based encoder with a tree-based decoder. We use T (x) to denote a tree representation of an element, e.g., the abstract syntax tree (AST) of a fragment of source code. We extend T (x) into a graph form G(x) by encoding additional relationships (e.g., the “next token” relationship between terminal nodes, etc.) (see Figure 2(a)). To encode the elements of G(x−) into vector representations, we use a gated graph neural network (GGNN) (Li et al., 2015). Similarly to recurrent neural networks for sequences (such as biRNNs), GGNNs compute a representation for each node in the graph, which can be used in the attention mechanisms of a decoder. Additionally, we use them to obtain a representation of the full input x−, by computing their weighted average following the strategy of Gilmer et al. (2017) (i.e., computing a score for each node, normalizing scores with a softmax, and using the resulting values as weights). Our tree decoder follows the semantic parsing model of Yin & Neubig (2018), which sequentially generate a tree T (x+) as a series of expansion actions a1 . . . aN . The probability of taking an action is modeled as p(at | a<t, s), where s is the input (a sequence of words in the original semantic parsing setting) and a<t is the partial tree that has been generated so far. The model of Yin & Neubig (2018) mainly uses two types of actions: EXPANDR expands the current non-terminal using a grammar rule, and GENTERM generates a terminal token from a vocabulary or copies a token from s3. The dependence on the partial tree a<t is modeled by an LSTM cell which is used to maintain state throughout the generation procedure. Additionally, the LSTM receives the decoder state used to pick the action at the parent node as an additional input (“parent-feeding”). This process illustrated in Figure 2(b). We extend this model to our setting by replacing the input sequence s by x−; concretely, we condition the decoder on the graph-level representation computed for G(x−). Additionally, we use the change representation f∆(·) as an additional input to the LSTM initial state and at every decoding step. 
Based on the observation that edits to source code often manipulate the syntax tree by moving expressions around (e.g. by nesting statements in a conditional, or renaming a function while keeping its arguments), we extend the decoding model of Yin & Neubig (2018) by adding a facility to copy entire subtrees from the input. For this, we add a decoder action TREECP. This action is similar to the standard copying mechanism known from pointer networks (Vinyals et al., 2015), but instead of copying only a single token, it copies the whole subtree pointed to. However, adding the TREECP action means that there are many correct generation sequences for a target tree. This problem appears in token-copying as well, but can be easily circumvented by marginalizing over all correct choices at each generation step (by normalizing the probability distribution over allowed actions to sum up those that have the same effect). In the subtree-copying setting, the lengths of action sequences representing different choices may differ. In our implementation we handle this problem during training by simply picking the generation sequence that greedily selects TREECP actions.

3.2 EDIT REPRESENTATION

To compute a useful edit representation, a model needs to focus on the differences between x− and x+. A risk in our framework is that f∆ degenerates into an encoder for x+, turning α into a decoder. To avoid this, we need to follow the standard autoencoder trick, i.e. it is important to limit the capacity of the result of f∆ by generating the edit representation in a low-dimensional space R^n. This acts as a bottleneck and encodes only the information that is needed to reconstruct x+ from x−. We again experimented with both sequence-based and graph-based representations of edits.

Sequence Encoding of Edits Given x− (resp. x+) as a sequence of tokens t−^(0), . . . , t−^(T−) (resp. t+^(0), . . . , t+^(T+)), we can use a standard (deterministic) diffing algorithm to compute an alignment of tokens in the two sequences. We then use extra symbols ∅ for padding, + for additions, − for deletions, ↔ for replacements, and = for unchanged tokens to generate a single sequence representing both x− and x+. This is illustrated in Figure 3(a). By embedding the three entries in each element of the sequence separately and concatenating their representations, they can be fed into a standard sequence encoder whose final state is our desired edit representation. In this work, we use a biLSTM.

³ EXPANDR corresponds to the APPLYCONSTR action in the original model of Yin & Neubig (2018). There is also a REDUCE action which marks the end of expanding a non-terminal with a non-deterministic number of child nodes. See Yin & Neubig (2018) for details.
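The aligned edit sequence described above (which the graph edit encoder below also reuses to connect the two trees) can be sketched as follows; `difflib` stands in for whatever deterministic differ is actually used, and the ASCII tags mirror the ∅ / + / − / ↔ / = convention.

```python
# Sketch of computing a single aligned sequence representing both x- and x+.
# Each position is a (tag, old_token, new_token) triple; "<pad>" plays the role of the
# padding symbol and "<->" the role of the replacement symbol from the paper's notation.
import difflib

PAD = "<pad>"

def aligned_edit_sequence(x_minus, x_plus):
    """x_minus, x_plus: lists of tokens. Returns a list of (tag, old, new) triples."""
    triples = []
    matcher = difflib.SequenceMatcher(a=x_minus, b=x_plus, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            triples += [("=", t, t) for t in x_minus[i1:i2]]
        elif op == "delete":
            triples += [("-", t, PAD) for t in x_minus[i1:i2]]
        elif op == "insert":
            triples += [("+", PAD, t) for t in x_plus[j1:j2]]
        else:  # "replace"
            old, new = x_minus[i1:i2], x_plus[j1:j2]
            for k in range(max(len(old), len(new))):
                triples.append(("<->",
                                old[k] if k < len(old) else PAD,
                                new[k] if k < len(new) else PAD))
    return triples

# Example usage:
# aligned_edit_sequence("var x = y . Single ( )".split(),
#                       "var x = y . First ( )".split())
```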
Graph Encoding of Edits As in the graph-to-tree neural editor, we represent x− and x+ as trees T(x−) and T(x+). We combine these trees into a graph representation G(x− → x+) by merging both trees into one graph, using “Removed”, “Added” and “Replaced” edges. To connect the two trees, we compute the same alignment as in the sequence case, connecting leaves that are the same and each replaced leaf to its replacement. We also propagate this information up in the trees, i.e., two inner nodes are connected by “=” edges if all their descendants are connected by “=” edges. This is illustrated in Figure 3(b). Finally, we also use the same “+” / “−” / “↔” / “=” tags for the initial node representation, computing it as the concatenation of the string label (i.e. token or non-terminal name) and the embedding of the tag. To obtain an edit representation, we use a GGNN unrolled for a fixed number of timesteps and again use the weighted averaging strategy of Gilmer et al. (2017).

4 EVALUATION

Evaluating an unsupervised representation learning method is challenging, especially for a newly defined task. Here, we aim to evaluate the quality of the learned edit representations with a series of qualitative and quantitative metrics on natural language and source code.

4.1 DATASETS AND CONFIGURATION

Natural Language Edits We use the WikiAtomicEdits (Faruqui et al., 2018) dataset of pairs of short edits on Wikipedia articles. We sampled 1040K edits from the English insertion portion of the dataset and split the samples into 1000K/20K/20K train-valid-test sets.

Source Code Edits To obtain a dataset for source code, we cloned a set of 54 C# projects on GitHub and collected a GitHubEdits dataset (see Appendix A for more information). We selected all changes in the projects that are no more than 3 lines long and whose surrounding 3 lines of code before and after the edited lines have not been changed, ensuring that the edits are separate and short. We then parsed the two versions of the source code and took as x− and x+ the code that belongs to the top-most AST node that contains the edited lines. Finally, we removed trivial changes such as variable renaming, changes within comments, or formatting changes. Overall, this yields 111,724 edit samples. For each edit we run a simple C# analysis to detect all variables and normalize variable names such that each unique variable within x− and x+ has a unique normalized name V0, V1, etc. This step is necessary to avoid the sparsity of data induced by the variety of different identifier naming schemes. We split the dataset into 91,372 / 10,176 / 10,176 samples as train/valid/test sets.

Additionally, we introduce a labeled dataset of source code edits by using C# “fixers”. Fixers are small tools built on top of the C# compiler, used to perform common refactoring and modernization tasks (e.g., using new syntactic sugar). We selected 16 of these fixers and ran them on 6 C# projects to generate a small C#Fixers dataset of 2,878 edit pairs with known semantics. We present descriptions and examples of each fixer in Appendix A.

Configuration Throughout the evaluation we use a fixed size of 512 for edit representations. The size of word embeddings and hidden states of encoding LSTMs is 128. The dimensionality of the decoding LSTM is set to 256. Details of the model configuration can be found in Sect. A. When generating the target x+, our neural editor model can optionally take as input the context of the original input x− (e.g., the preceding and succeeding code segments surrounding x−), whose information could be useful for predicting x+. For example, in source code edits the updated code snippet x+ may reuse variables defined in the preceding snippet. In our code experiments, we use a standard bidirectional LSTM network to encode the tokenized 3 lines of code before and after x− as context. The encoded context is used to initialize the decoder, and as an additional source for the pointer network to copy tokens from.

4.2 QUALITY OF EDIT REPRESENTATIONS

First, we study the ability of our models to encode edits in a semantically meaningful way.

Visualizing Edits on Fixers Data In a first experiment, we train our sequential neural editor model on our GitHubEdits data and then compute representations for the edits generated by the C# fixers.
A t-SNE visualization (Maaten & Hinton, 2008) of the encodings is shown in Figure 4. For this visualization, we randomly selected 100 examples from the edits of each fixer (if that fixer has more than 100 samples) and discarded fixer categories with fewer than 40 examples. Readers are referred to Appendix A for detailed descriptions of each fixer category. We find that our model produces dense clusters for simple or distinctive code edits, e.g. fixer RCS1089 (using the ++ or -- unary operators instead of a binary operator, e.g., i = i + 1 → i++) and fixer CA2007 (adding .ConfigureAwait(false) for await statements). We also analyzed cases where (1) the edit examples from the same fixer are scattered, or (2) the clusters of different fixers overlap with each other. For example, the fixer RCS1077 covers 12 different aspects of optimizing LINQ method calls (e.g., type casting, counting, etc.), and hence its edits are scattered. On the other hand, fixers RCS1146 and RCS1206 yield overlapping clusters, as both fixers change code to use the ?. operator. Fixers RCS1207 (change a lambda to a method group, e.g. foo(x=>bar(x)) → foo(bar)) and RCS1021 (simplify lambda expressions, e.g. foo(x=>{return 4;}) → foo(x=>4)) are similar, as both inline lambda expressions in two different ways. Our analysis also shows that the representation is highly dependent on surface tokens. For instance, IDE004 (removing redundant type casts, e.g. (int)2 → 2) and RCS1207 (removing explicit argument lists) yield overlapping clusters, as both involve deleting identifiers wrapped by parentheses.

Human Evaluation on Encoding Natural Language WikiAtomicEdits In a second experiment, we test how well neighborhoods in edit representation space correspond to semantic similarity. We computed the five nearest neighbors of 200 randomly sampled seed edits from our training set, using both our trained sequence-to-sequence editing model with sequential edit encoder, as well as a simple bag-of-words baseline based on TF-IDF scores. We then rated the quality of the retrieved neighbors on a scale of 0 (“unrelated edit”), 1 (“similar edit”) and 2 (“semantically or syntactically same edit”). Details of the annotation schema are included in Sect. E. We show the (normalized) discounted cumulative gain (DCG, Manning et al. (2008)) for the two models at the top of Tab. 1 (higher is better). The relevance scores indicate that our neural model clearly outperforms the simplistic baseline. Tab. 1 also presents two example edits with their nearest neighbors. Example 1 shows that the neural edit models succeeded in representing syntactically and semantically similar edits, while the bag-of-words baseline relies purely on surface token overlap. Interestingly, we also observed that the edit representations learned by the neural editing model on WikiAtomicEdits are somewhat sensitive to position, i.e. the position of the inserted tokens in both the seed edit and the nearest neighbors is similar. This is illustrated in Example 2, where the second (“senegalese striker”) and the third (“republican incumbent”) nearest neighbors returned by the neural model have similar editing positions as the seed edit, while they are semantically diverse.

4.3 EDIT ENCODER PERFORMANCE

To evaluate the performance of our two edit encoders discussed in Sect. 3.2 and disentangle it from the choice of neural editor, we train various combinations of edit encoders and neural editors and manually evaluate the quality of the resulting edit representations.
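Both the human evaluation above and the comparison that follows rely on nearest-neighbor queries in the edit representation space, which can be sketched as below; this is a generic k-NN lookup under an assumed Euclidean metric (the paper does not specify the distance function), with all names illustrative.

```python
# Sketch of retrieving the k nearest neighbors of a seed edit in representation space.
# `edit_vectors` is an (N, n) array of f_delta outputs for a corpus of edits; the use of
# Euclidean distance is an assumption, not a detail taken from the paper.
import numpy as np

def nearest_edits(edit_vectors, seed_index, k=5):
    seed = edit_vectors[seed_index]
    dists = np.linalg.norm(edit_vectors - seed, axis=1)
    dists[seed_index] = np.inf                      # exclude the seed edit itself
    return np.argsort(dists)[:k]                    # indices of the k closest edits
```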
More specifically, we trained our neural editor models on GitHubEdits, randomly sampled 200 seed edits, and computed their 3 nearest neighbors using each end-to-end model. We then rated the resulting groups using the same 0-2 scale as above. The resulting relevance scores are shown in Tab. 2.

Table 1: Natural language human evaluation results and 3 nearest neighbors, with ⌈inserted text⌋ marked. Example 1 shows that the neural editing model returns syntactically and semantically similar edits; Example 2 shows that neural edit representations are sensitive to position.

Bag of Words Model: DCG/NDCG@5 = 9.3 / 67.3%; DCG@5 by edit size: 1: 14.7, 2-3: 10.8, >3: 5.4.
Seq2Seq with Seq Edit Encoder: DCG/NDCG@5 = 13.5 / 90.3%; DCG@5 by edit size: 1: 16.2, 2-3: 12.9, >3: 12.4.

Example 1 (seed edit): ⌈daniel james nava ( born february 22 , 1983 ) is an american professional baseball outfielder⌋ nava is only the fourth player in mlb history to hit a grand slam in his first major league at bat and the second to do it on the first pitch .
NN-1 (Bag of Words): he batted .302 with 73 steals , and received a september call - up to the major leagues ⌈as an outfielder⌋ .
NN-1 (Seq2Seq): ⌈arthur ray briles ( born december 3 , 1955 ) is a former american football coach and⌋ his most recent head coaching position was at baylor university , a position he held from the 2008 season through the 2015 season .
NN-2 (Bag of Words): he played ⌈as an outfielder⌋ for the hanshin tigers .
NN-2 (Seq2Seq): ⌈jonathan david disalvatore ( born march 30 , 1981 ) is a professional ice hockey⌋ he was selected by the san jose sharks in the 4th round ( 104th overall ) of the 2000 nhl entry draft .
NN-3 (Bag of Words): in 2012 , his senior at oak mountain , dahl had a .412 batting average , 34 runs batted in ( rbis ) , and 18 stolen bases ⌈as an outfielder .⌋
NN-3 (Seq2Seq): ⌈professor paul talalay ( born march 31 , 1923 ) is the john jacob abel⌋ distinguished service professor of pharmacology and director of the laboratory for molecular sciences at johns hopkins school of medicine in baltimore .

Example 2 (seed edit): she , along with her follow artist carolyn mase studied with ⌈impressionist landscape painter⌋ john henry twachtman at the art students league of new york .
NN-1 (Bag of Words): his brother was draughtsman william daniell and his uncle was ⌈landscape painter⌋ thomas daniell .
NN-1 (Seq2Seq): the first painting was a portrait of a young girl , emerantia van beresteyn , the sister of ⌈the landscape painter⌋ nicolaes van beresteyn , the later founder of half of this hofje .
NN-2 (Bag of Words): william james linton ( december 7 , 1812 - december 29 , 1897 ) was an english - born american wood engraver , ⌈landscape painter ,⌋ political reformer and author of memoirs , novels , poetry and non-fiction .
NN-2 (Seq2Seq): he was the club ’s top scorer with 22 goals in all competitions , one more than ⌈senegalese striker⌋ lamine diarra , who left the club at the end of the season .
NN-3 (Bag of Words): early on , hopper modeled his style after chase and french ⌈impressionist⌋ masters édouard manet and edgar degas .
NN-3 (Seq2Seq): caforio ” aggressively attacked ” his opponent , ⌈republican incumbent⌋ steve knight , for his delayed response to the leak .

Comparing the sequential edit encoders trained with Seq2Seq and Graph2Tree editors, we found that the edit encoder trained with the Graph2Tree objective performs better. We hypothesize that this is because the Graph2Tree editor better captures structural-level information about an edit. For instance, Example 1 in Tab. 3 removes explicit type casting.
The Seq2Seq editor has difficulty distinguishing this type of edit, confusing it with changes of lambda expressions to method groups (1st and 2nd nearest neighbors), since both types of edits involve removing paired parentheses. Surprisingly, we found that the graph-based edit encoder does not outperform the sequence-based encoder. However, we observe that the graph edit encoder in many cases tends to better capture high-level and abstract structural edit patterns. Example 2 in Tab. 3 showcases a seed edit that swaps two consecutive declarations, which corresponds to swapping the intermediate Expression nodes representing each statement on the underlying AST. In this case, the graph edit encoder is capable of grouping semantically similar edits, while it seems to be more difficult for the sequential encoder to capture the edit pattern. On the other hand, we found that the graph edit encoder often fails to capture simpler, lexical-level edits (e.g., Example 1). This might suggest that terminal node information is not effectively propagated, an interesting issue worth future investigation.

4.4 PRECISION OF NEURAL EDITORS

Finally, we evaluate the performance of our end-to-end system by predicting the edited input x+ given x− and the edit representation. We are interested in answering two research questions: First, how well can our neural editors generate x+ given the gold-standard edit representation f∆(x−, x+)? Second, and perhaps more interestingly, can we use the representation of a similar edit f∆(x′−, x′+) to generate x+ by applying that edit to x− (i.e. x̂+ = α(x−, f∆(x′−, x′+)))?

To answer the first question, we trained our neural editor models on the WikiAtomicEdits and the GitHubEdits datasets, and evaluated the performance of encoding and applying edits on the test sets. For completeness, we also evaluated the performance of our neural editor models with a simple “Bag-of-Edits” edit encoding scheme, where f∆(x−, x+) is modeled as the concatenation of two vectors, each representing the sum of the embeddings of added and deleted tokens in the edit, respectively. This edit encoding method is reminiscent of the model used in Guu et al. (2017) for solving a different task of language modeling by marginalizing over latent edits, which we elaborate on in Sect. 5. Tab. 4 lists the evaluation results. With our proposed sequence- and graph-based edit encoders, our neural editor models achieve reasonable end-to-end performance, surpassing systems using bag-of-edits representations. This is because many edits are context-sensitive and position-sensitive, requiring edit representation models that go beyond the bag-of-edits scheme to capture those effects (more analysis is included in Appendix B). Interestingly, on the GitHubEdits dataset, we find that the Seq2Seq editor with the sequential edit encoder registers the best performance. However, it should be noted that in this set of experiments, we encode the gold-standard edit f∆(x−, x+) to predict x+. As we will show later, better performance with the gold-standard edit does not necessarily imply a better (more generalizable) edit representation. Nevertheless, we hypothesize that the higher accuracy of the Seq2Seq editor is due to the fact that a significant proportion of edits in this dataset are small and primarily syntactically simple. Indeed we find that 69% of test examples have a token-level edit distance of less than 5.
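For reference, the “Bag-of-Edits” baseline described above could be read as the following sketch (concatenating the summed embeddings of added and deleted tokens); the multiset-difference interpretation and the `embed` interface are assumptions made for illustration, not the authors' code.

```python
# Sketch of the "Bag-of-Edits" baseline: f_delta(x-, x+) is the concatenation of the summed
# embeddings of added tokens and of deleted tokens. `embed` is an assumed callable mapping a
# token to a length-`dim` torch vector.
from collections import Counter
import torch

def bag_of_edits_representation(embed, dim, x_minus_tokens, x_plus_tokens):
    minus_counts = Counter(x_minus_tokens)
    plus_counts = Counter(x_plus_tokens)
    added = (plus_counts - minus_counts).elements()    # tokens present in x+ but not in x-
    deleted = (minus_counts - plus_counts).elements()  # tokens present in x- but not in x+
    added_sum = sum((embed(t) for t in added), torch.zeros(dim))
    deleted_sum = sum((embed(t) for t in deleted), torch.zeros(dim))
    return torch.cat([added_sum, deleted_sum])         # edit representation of size 2 * dim
```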
To answer the second question, we use the trained neural editors from the previous experiment, and test their performance in a “one-shot” transfer learning scenario. Specifically, we use our high-quality C#Fixers dataset, and for each fixer category F of semantically similar edits, we randomly select a seed edit {x′− → x′+} ∈ F, and use its edit representation f∆(x′−, x′+) to predict the updated code for all examples in F, i.e., we have x̂+ = α(x−, f∆(x′−, x′+)) for all {x− → x+} ∈ F. This task is highly non-trivial, since a fixer category can contain hundreds of edit examples collected from different C# projects. Therefore, it requires the edit representations to generalize and transfer well, while being invariant to local lexical information like specific method names. To make the experimental evaluation more robust to noise, for each fixer category F, we randomly sample 10 seed edit pairs {x′− → x′+}, compute their edit representations and use them to predict the edited version of the examples in F, and evaluate the accuracy of predicting the exact final version. We then report the best score among the 10 seed representations as the performance metric on F.

Tab. 5 summarizes the results and also reports the upper-bound performance when using the gold-standard edit representation f∆(x−, x+) to predict x+, and an approximation of the “lower bound” accuracies using pre-trained Seq2Seq and Graph2Tree models without edit encoders. We found that our neural Graph2Tree editor with the sequential edit encoder significantly outperforms the Seq2Seq editor, even though Seq2Seq performs better when using gold-standard edit representations. This suggests that the edit representations learned with the Graph2Tree model generalize better, especially for edits discussed in Sect. 4.2 that involve syntactic variations like RCS1021 (lambda expression simplification, 7.8% vs. 30.7% for Seq2Seq and Graph2Tree), and RCS1207 (change lambdas to method groups, 7.1% vs. 26.2%). Interestingly, we also observe that Seq2Seq outperforms the Graph2Tree model for edits with trivial surface edit sequences, where the Graph2Tree model does not have a clear advantage. For example, on RCS1015 (use nameof operator, e.g. Exception("x") → Exception(nameof(x))), the accuracies for Seq2Seq and Graph2Tree are 40.0% (14/35) and 28.6% (10/35), resp. We include more analysis of the results in Appendix C.
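The one-shot transfer protocol above can be summarized in code as follows; the editor and encoder calls are the same illustrative interfaces used earlier, exact-match accuracy is computed per fixer category, and the best score over 10 random seed edits is reported.

```python
# Sketch of the one-shot edit-transfer evaluation: pick a seed edit (x'-, x'+) from a fixer
# category, reuse its representation f_delta(x'-, x'+) to edit every example in the category,
# and report the best exact-match accuracy over 10 random seeds. Interfaces are illustrative.
import random

def one_shot_fixer_accuracy(editor, edit_encoder, fixer_examples, num_seeds=10, rng=random):
    """fixer_examples: list of (x_minus, x_plus) pairs belonging to one fixer category F."""
    best = 0.0
    seeds = rng.sample(fixer_examples, min(num_seeds, len(fixer_examples)))
    for seed_minus, seed_plus in seeds:
        seed_repr = edit_encoder(seed_minus, seed_plus)            # f_delta(x'-, x'+)
        correct = sum(
            editor.generate(x_minus, seed_repr) == x_plus          # exact match of the edited version
            for x_minus, x_plus in fixer_examples
        )
        best = max(best, correct / len(fixer_examples))
    return best
```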
While these models typically require an “editor” component to generate the output by exploiting the difference between similar inputs, they usually use simpler bag-of-edits representations (Wu et al., 2019), or capture edits implicitly via end-to-end neural networks (Contractor et al., 2018). To the best of our knowledge, there is no related work that classifies or otherwise explicitly represents the differences between similar inputs, with the exception of differential recurrent neural networks used for action recognition in videos (Veeriah et al., 2015; Zhuang et al., 2018). This is a substantially different task, as the data includes a temporal component as well.

Source code edits are a widely studied artifact. Specialized software, such as git, is widely used to store source code revision histories. Nguyen et al. (2013) studied the repetitiveness of source code changes by identifying identical types of changes using a deterministic differencing tool. In contrast, we rely on a neural network to cluster similar changes together. Rolim et al. (2017) use such clusters to synthesize small programs that perform the edit. Their approach is based on Rolim et al. (2018), who extract manually designed syntactic features from code and cluster over multiple changes to find repeatable edit rules. Similarly, Paletov et al. (2018) extract syntactic features specifically targeting edits in cryptography API protocols. In this work, we try to avoid hand-designed features and allow a neural network to learn the relevant aspects of a change by directly giving as input the original and final version of a changed code snippet.

Modeling tree generation with machine learning is an old problem that has been widely studied in NLP. Starting with Maddison & Tarlow (2014), code generation has also been considered as a tree generation problem. Close to our work is the decoder of Yin & Neubig (2017), which we use as the basis of our decoder. The work of Chen et al. (2018) is also related, since it provides a tree-to-tree model, but it focuses on learning a single translation task and cannot be used directly to represent multiple types of edits. Both Yin & Neubig (2017) and Chen et al. (2018) have a copying mechanism for single tokens, but our subtree copying mechanism is novel.

Autoencoders (see Goodfellow et al. (2016) for an overview) have a long history in machine learning. Variational autoencoders (Kingma & Welling, 2013) are similar to autoencoders but instead of focusing on the learned representation, they aim to create accurate generative probabilistic models. Most (variational) autoencoders focus on encoding images, but there have been works that autoencode sequences, such as text (Dai & Le, 2015; Bowman et al., 2015; Yang et al., 2017b) and graphs (Simonovsky & Komodakis, 2018; Liu et al., 2018). Conditional variational autoencoders (Sohn et al., 2015) have a related form to our model (with the exception of the KL term), but are studied as generative models, whereas we are primarily interested in the edit representation.

6 DISCUSSION & CONCLUSIONS

In this work, we presented the problem of learning distributed representations of edits. We believe that this kind of edit data is highly relevant and should be studied in more detail. While we have presented a set of initial models and metrics on the problem and obtained some first promising results, further development in both of these areas is needed. We hope that our work inspires others to work on this interesting problem in the future.
ACKNOWLEDGMENTS

We would like to thank Rachel Free for her insightful comments and suggestions.

A DATASETS AND CONFIGURATION

WikiAtomicEdits We randomly sampled 1040K insertion examples from the English portion of the WikiAtomicEdits (Faruqui et al., 2018) dataset, with train, development and test splits of 1000K, 20K and 20K.

GitHubEdits We cloned the top 54 C# GitHub repositories based on their popularity (Tab. 8). For each commit in the master branch, we collect the previous and updated versions of the source code, and extract all consecutive lines of edits that are smaller than three lines, and with at least three preceding and successive lines that have not been changed. We then filter trivial changes such as variable and identifier renaming, and changes within comments. We also limit the number of tokens for each edit to be smaller than 100, and down-sample edits whose frequency is larger than 30. Finally, we split the dataset by commit ids, ensuring that there are no edits in the training and testing (development) sets coming from the same commit. Tab. 6 lists some statistics of the dataset.

C#Fixers We selected 16 C# fixers from Roslyn⁴ and Roslynator⁵, and ran them on 6 C# projects to generate a small, high-quality C# fixers dataset of 2,878 edit pairs with known semantics. Table 7 lists detailed descriptions of each fixer category; more information can be found at https://github.com/JosefPihrt/Roslynator/blob/master/src/Analyzers/README.md.

⁴ http://roslyn.io
⁵ https://github.com/JosefPihrt/Roslynator

Network Configuration Throughout the experiments, we use a fixed edit representation size of 512. The dimensionality of the word embeddings, the hidden states of the encoder LSTMs, and the gated graph neural network is 128, while the decoder LSTM uses a larger hidden size of 256. For the graph-based edit encoder, we used a two-layer graph neural network, with 5 information propagation steps at each layer. During training, we performed early stopping, and chose the best model based on perplexity scores on the development set. During testing, we decode a target element x+ using a beam size of 5.

B CLUSTERING EXPERIMENTS

To qualitatively evaluate the learned edit representations, we use the models trained on the WikiAtomicEdits and GitHubEdits datasets to cluster natural language and code edits. We run the K-Means clustering algorithm on 0.5 million sampled edits from WikiAtomicEdits, and all 90K code edits from GitHubEdits, producing 50,000 and 20,000 clusters for each dataset, respectively. Tab. 9 and Tab. 10 list some example clusters on the WikiAtomicEdits and GitHub datasets, respectively. Due to the size of the clusters, we omit outliers and present distinctive examples from each cluster. On the WikiAtomicEdits dataset, we found clusters whose examples are semantically and syntactically similar. More interestingly, on the source code data, we find representative clusters that relate to idiomatic patterns and best practices of programming. The clustering results produced by our model would be useful for program synthesis toolkits to generate interpretable code refactoring rules, which we leave as interesting future work. Finally, we remark that the clustering results indicate that the encoding of edits is context-sensitive and position-sensitive for both natural language and source code data. For instance, the WikiAtomicEdits examples we present in Tab. 9 clearly indicate that semantically similar insertions also share similar editing positions.
This is even more visible in code edits (Tab. 10). For instance, in the first example in Tab. 10, Equal() can be changed to Empty() only in the Assert namespace (i.e., the context). These examples demonstrate that it is important for an edit encoder to capture the contextual and positional information in edits, a property that cannot be captured by simple “bag-of-edits” edit representation methods. C BREAK-DOWN ANALYSIS OF TRANSFER LEARNING RESULTS D IMPACT OF TRAINING SET SIZE To evaluate the data efficiency of our proposed approach, we tested the end-to-end performance of our neural editor model (Sect. 4.4, Tab. 4) with varying amount of training data. Tab. 12 lists the results. We found both Graph2Tree and Seq2Seq editors are relatively data efficient. They registered around 90% of the accuracies achieved using the full training set with only 60% of the training data. E DETAILS OF HUMAN EVALUATION As discussed in Sect. 4.2, we performed human evaluation to rate the qualities of neighboring edits given a seed edit. The annotation instructions on GithubEdits and WikiAtomicEdits datasets are listed below. The annotation was carried out by three authors of this paper, and we anonymized the source of systems that generated the output. The three-way Fleiss’ kappa inter-rater agreement is κ = 0.55, which shows moderate agreement (Artstein & Poesio, 2008), an agreement level that is also used in other annotation tasks in NLP (Faruqui & Das, 2018). Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Examples • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.GetMembers().Where(V2 => V2.Kind == SymbolKind.Property).Single(); x+ var V0 = V1.GetMembers().Single(V2 => V2.Kind == SymbolKind.Property); • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− string V0 = V1 == null ? string.Empty : VAR1.ToString(); x+ string V0 = V1?.ToString() ?? string.Empty; • Seed Edit x− Assert.True(Directory.Exists(V0) == V1); x+ Assert.Equal(Directory.Exists(V0), V1); • Neighbor x− Assert.True(V0.GetString(V0.GetBytes(LITERAL)) == V1.ContainingAssembly.Identity.CultureName); x+ Assert.Equal(V0.GetString(VAR0.GetBytes(LITERAL)), V1.ContainingAssembly.Identity.CultureName); Rating 1 Syntactically or Semantically Related The seed and neighboring edits share functionally or syntactically similar patterns. Examples The following edit is a related edit of the first example above, as it applies the same simplification (.Where(COND).Func() to .Func(COND)), but for FirstOrDefault instead of Single: • Seed Edit x− var V0 = V1.Where(V2 => V2.Name == LITERAL).Single(); x+ var V0 = V1.Single(V2=> V2.Name == LITERAL); • Neighbor x− var V0 = V1.Where(V2 => V3.ReportsTo == V2.EmployeeID).FirstOrDefault(); x+ var V0 = V1.FirstOrDefault(V2 => V3.ReportsTo == V2.EmployeeID); The following edit is a related edit of the second example above, as it also replaces a ternary expression for null checking with the ?. and ?? operators: • Seed Edit x− Type V0 = V1 == null ? typeof(object) : V1.GetType(); x+ Type V0 = V1?.GetType() ?? typeof(object); • Neighbor x− var V0 = V1 != null ? V1.ToList() : new List<TextSpan>(); x+ var V0 = V1?.ToList() ?? 
new List<TextSpan>(); We also considered pairs such as the following related, since they share similar syntactic structure • Seed Edit x− V0.State = V1; x+ V0.SetState(VAR1); • Neighbor x− V0.Quantity = V1; x+ V0.SetQuantity(V1); Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria. Table 14: Annotation Instruction for WikiAtomEdits Data Rating 2 Semantically and Syntactically Equivalent The changed constituents in the seed edit and the neighboring edit are applied to the similar positions of the original sentence, serving the same syntactic and semantic role. For example, Seed Edit Neighbor chaz guest ( born I1961J) was born in niagra falls , . . . , a decorated hero in wwii in europe , including the purple heart . randal l. schwartz ( born november 22 , I1961J) , also known as merlyn , is an american author , system administrator and programming consultant. he was elected to donegal county council for sinn fin in 1979 , and held his seat until his death Iat age 56J . davis graduated from high school in january 1947 , immediately enrolling at wittenberg college in rural ohio Iat age 17J . IdrorJ feiler served as a paratrooper in the israel defense forces . InagaurJ fort - sandy fort ; centrally located ; 2nd century old ; witnessed many battles ; lofty walls & spacious campus ; having many palaces & temples inside . the original old bay house , home of the chief factor , still exists Iand is now part of the fort vermilion national historic siteJ . the population was 6,400 at the 2010 census Iand is part of the st. louis metropolitan areaJ . Rating 1 Syntactically Related The changed constituents in the seed and the neighboring edit are applied to the similar positions of the original sentence, and they play similar syntactic roles. This includes examples like adding a disfunction, adding a complement, prepositional clause or other syntactic constructs with similar phrases or language structures. For example, Seed Edit Neighbor the douro fully enters portuguese territory just after the confluence with the gueda river ; once the douro enters portugal , major population centres are less frequent Ialong the riverJ . she made a brief return to the screen in ” parrish ” ( 1961 ) , playing the supporting role of mother which received little attention Iby the pressJ . when they found it , they discovered a group of pagumon living there instead who immediately proceeded to treat the digidestined as honored guests I, saying that pagumon are the fresh form of koromonJ . in 2012 slote and his baseball book ” jake ” were the subject of an espn ( 30 for 30 ) short documentary in which slote describes his writing process and reads from the book I, saying it is his best writingJ . the aircraft was intended to be Icertified andJ supplied as a complete ready - to - fly - aircraft for the flight training and aerial work markets . in june reinforcements finally did arrive when Iprovincial andJ militia units from new york , new jersey , and new hampshire were sent up from fort edward by general daniel webb . Rating 0 Not Related The seed and neighboring edits are not related based on the above criteria.
1. What are the main contributions and strengths of the paper regarding the edit encoder model and dataset?
2. How does the proposed approach compare to prior works such as Guu et al. 2017, specifically in terms of robustness and applicability to other tasks?
3. What are some potential limitations or areas for improvement in the proposed method, such as the access to full sequences x- and x+ or the use of a subsampled version of the WikiAtomicEdits corpus?
4. How do the authors evaluate the effectiveness of the edit encodings, both through human annotation and automatic metrics, and what are some potential issues or alternatives in these approaches?
5. Are there any further applications or implications of the research beyond NLP researchers and sequence- and graph-transduction models?
Review
The main contributions of the paper are an edit encoder model similar to (Guu et al. 2017, http://aclweb.org/anthology/Q18-1031), a new dataset of tree-structured source code edits, and thorough and well thought-out analysis of the edit encodings. The paper is clearly written, and provides clear support for each of their main claims. I think this would be of interest to NLP researchers and others working on sequence- and graph-transduction models, but I think the authors could have gone further to demonstrate the robustness of their edit encodings and their applicability to other tasks. This would also benefit greatly from a more direct comparison to Guu et al. 2017, which presents a very similar "neural editor" model.

Some more specific points:

- I really like the idea of transferring edits from one context to another. The one-shot experiment is well-designed; however, it would benefit from also having a lower bound to get a better sense of how good the encodings are.
- If I'm reading it correctly, the edit encoder has access to the full sequences x- and x+, in addition to the alignment symbols. I wonder if this hurts the quality of the representations, since it's possible (albeit not efficient) to memorize the output sequence x+ and decode it directly from the 512-dimensional vector. Have you explored more constrained versions of the edit encoder (such as the bag-of-edits from Guu et al. 2017) or alternate learning objectives to control for this?
- The WikiAtomicEdits corpus has 13.7 million English insertions - why did you subsample this to only 1M? There is also a human-annotated subset that you might use as evaluation data, similar to the C#Fixers set.
- On the human evaluation: Who were the annotators? The categories "similar edit" and "semantically or syntactically same edit" seem to leave a lot to interpretation; were more specific instructions given? It also might be interesting, if possible, to separately classify syntactically similar and semantically similar edits.
- On the automatic evaluation: accuracy seems brittle for evaluating sequence output. Did you consider reporting BLEU, ROUGE, or another "soft" sequence metric?
- It would be worth citing existing literature on classification of Wikipedia edits, for example Yang et al. 2017 (https://www.cs.cmu.edu/~diyiy/docs/emnlp17.pdf). An interesting experiment would be to correlate your edit encodings with their taxonomy.
ICLR
Title
Adversarially robust transfer learning

Abstract
Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of “fine tuning” a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness.

1 INTRODUCTION

Deep neural networks achieve human-like accuracy on a range of tasks when sufficient training data and computing power are available. However, when large datasets are unavailable for training, or practitioners require a low-cost training strategy, transfer learning methods are often used. This process starts with a source network (pre-trained on a task for which large datasets are available), which is then re-purposed to act on the target problem, usually with minimal re-training on a small dataset (Yosinski et al., 2014; Pan & Yang, 2009).

While transfer learning greatly accelerates the training pipeline and reduces data requirements in the target domain, it does not address the important issue of model robustness. It is well-known that naturally trained models often completely fail under adversarial inputs (Biggio et al., 2013; Szegedy et al., 2013). As a result, researchers and practitioners often resort to adversarial training, in which adversarial examples are crafted on-the-fly during network training and injected into the training set. This process greatly exacerbates the problems that transfer learning seeks to avoid. The high cost of creating adversarial examples increases training time (often by an order of magnitude or more). Furthermore, robustness is known to suffer when training on a small dataset (Schmidt et al., 2018). To make things worse, high-capacity models are often needed to achieve good robustness (Madry et al., 2017; Kurakin et al., 2016; Shafahi et al., 2019b), but these models may over-fit badly on small datasets.

CONTRIBUTIONS

The purpose of this paper is to study the adversarial robustness of models produced by transfer learning. We begin by observing that robust networks contain robust feature extractors, which are resistant to adversarial perturbations in different domains. Such robust features can be used as a basis for semi-supervised transfer learning, which only requires re-training the last layer of a network. To demonstrate the power of robust transfer learning, we transfer a robust ImageNet source model onto the CIFAR domain, achieving both high accuracy and robustness in the new domain without adversarial training.
We use visualization methods to explore properties of robust feature extractors. Then, we consider the case of transfer learning by “fine-tuning.” In this case, the source network is re-trained end-to-end using a small number of epochs on the target domain. Unfortunately, this end-to-end process does not always retain the robustness of the source domain; the network “forgets” the robust feature representations learned on the source task. To address this problem, we use recently proposed lifelong learning methods that prevent the network from forgetting the robustness it once learned. Using our proposed methods, we construct robust models that generalize well. In particular, we improve the generalization of a robust CIFAR-100 model by roughly 2% while preserving its robustness.

2 BACKGROUND

Adversarial examples fall within the category of evasion attacks: test-time attacks in which a perturbation is added to a natural image before inference. Adversarial attacks are most often crafted using a differentiable loss function that measures the performance of a classifier on a chosen image. In the case of norm-constrained attacks (which form the basis of most standard benchmark problems), the adversary solves

max_δ l(x + δ, y, θ)  s.t.  ‖δ‖_p ≤ ε,   (1)

where θ are the (already trained and frozen) parameters of the classifier c(x, θ) → ŷ that maps an image to a class, l is the proxy loss used for classification (often cross-entropy), δ is the image perturbation, (x, y) is the natural image and its true class, and ‖·‖_p is some ℓp-norm.¹ The optimization problem in Eq. 1 aims to find a bounded perturbation that maximizes the cross-entropy loss given the correct label. There are many variants of this process, including DeepFool (Moosavi-Dezfooli et al., 2016), L-BFGS (Szegedy et al., 2013), and CW (Carlini & Wagner, 2017).

¹ By default we will use the ℓ∞-norm in this paper.

Many researchers have studied methods for building a robust network which were later shown to be ineffective when attacked with stronger adversaries (Athalye et al., 2018). Adversarial training (Szegedy et al., 2013) is one of the defenses that was not broken by Athalye et al. (2018). While adversarial training using a weak adversary such as the FGSM attack (Goodfellow et al., 2015) can be broken even by single-step attacks which add a simple random step prior to the FGSM step (Tramèr et al., 2017), adversarial training using a strong attack has successfully improved robustness. Madry et al. (2017) showed that a PGD attack (which is a BIM attack (Kurakin et al., 2016) with an initial random step and projection) is a strong enough attack to achieve promising adversarial training results. We will refer to this training method as PGD adversarial training. PGD adversarial training achieves good robustness on norm-bounded attacks for MNIST (LeCun et al., 1998) and acceptable robustness on CIFAR-10 (Krizhevsky & Hinton, 2009) classifiers.

Tsipras et al. (2018) show that adversarial training with strong PGD adversaries has many benefits in addition to robustness. They also state that while adversarial training may improve generalization in regimes where training data is limited (especially on MNIST), it may be at odds with generalization in regimes where data is available. This trade-off was also recently studied by Zhang et al. (2019), Su et al. (2018), and Shafahi et al. (2019a).
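For concreteness, a minimal sketch of the PGD attack referred to throughout this section (Eq. 1 maximized with a random start, iterated signed-gradient steps, and projection onto the ℓ∞ ball) is given below; it is written against PyTorch, assumes images scaled to [0, 1], and all hyper-parameter names are illustrative.

```python
# Minimal sketch of an l-infinity PGD attack (random start + projected gradient ascent).
# `model` returns logits; eps and step_size assume images in [0, 1], an assumption here.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step_size=2/255, steps=20):
    delta = torch.empty_like(x).uniform_(-eps, eps)           # random start inside the eps-ball
    delta.requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()                   # ascend the classification loss
            delta.clamp_(-eps, eps)                            # project back onto the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)           # keep the perturbed image valid
    return (x + delta).detach()
```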
While, to the best of our knowledge, the transferability of robustness has not been studied in depth, Hendrycks et al. (2019) studied the case of adversarially training models that were pre-trained on different domains. Our work is fundamentally different in that we seek to transfer robustness without resorting to costly and data-hungry adversarial training. We train the target model on natural examples only, which allows us to directly study how well robustness transfers. Additionally, this allows us to have better generalization and achieve higher accuracy on validation examples. While, as Hendrycks et al. (2019) state, fine-tuning on adversarial examples built for the target domain can improve robustness on relatively large datasets such as CIFAR-10 and CIFAR-100 compared to adversarial training from scratch on the target domain, we show that in regimes of limited data (where transfer learning is more common), adversarially robust transfer learning can lead to better results measured in terms of both robustness and clean validation accuracy.

3 THE ROBUSTNESS OF DEEP FEATURES

In this section, we explore the robustness of different network layers, and demonstrate that robust networks rely on robust deep features. To do so, we start from robust classifiers c(θr) for the CIFAR-100 and CIFAR-10 datasets (Krizhevsky & Hinton, 2009), and update θ by training on natural examples. In each experiment, we re-initialize the last k layers/blocks of the network, and re-train just those layers. We start by re-initializing just the last layer, then the last two, and so on until we re-initialize all the layers.

We use the adversarially trained Wide-ResNet 32-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 from Madry et al. (2017) as our robust model for CIFAR-10. We also adversarially train our own robust classifier for CIFAR-100 using the code from Madry et al. (2017). To keep things consistent, we use the same hyper-parameters used by Madry et al. (2017) for adversarially training CIFAR-10 to adversarially train the CIFAR-100 model.² The performance of the CIFAR-10 and CIFAR-100 models on natural and adversarial examples is summarized in Table 1. To measure robustness, we evaluate the models on adversarial examples built using PGD attacks.

² We adversarially train the WRN 32-10 on CIFAR-100 using a 7-step ℓ∞ PGD attack with step-size 2 and ε = 8. We train for 80,000 iterations with a batch-size of 128.

We break the WRN 32-10 model into 17 blocks, which are depicted in Fig. 2. In each experiment, we first re-initialize the k deepest blocks (blocks 1 through k) and then train the parameters of those blocks on natural images.³ We train for 20,000 iterations using Momentum SGD and a learning rate of 0.001. We then incrementally unfreeze and train more blocks. For each experiment, we evaluate the newly trained model’s accuracy on validation adversarial examples built with a 20-step PGD ℓ∞ attack with ε = 8.

³ In this experiment, we use standard data augmentation techniques.

Fig. 1 shows that robustness does not drop if only the final layers of the networks are re-trained on natural examples. In fact, there is a slight increase in robustness compared to the baseline PGD-7 adversarially trained models when we just retrain the last batch-normalization block and fully connected block. As we unfreeze and train more blocks, the network’s robustness suddenly drops. This leads us to believe that a hardened network’s robustness is mainly due to robust deep feature representations, and robustness is preserved if we re-train on top of deep features. Now that we have identified feature extractors as a source of robustness, it is natural to investigate whether robustness is preserved when transfer learning using robust feature extractors. We will study two different approaches for transferring robustness across datasets: one in which only the last layer is re-trained, and one with end-to-end re-training.
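A sketch of the block-wise experiment in this section (re-initialize the deepest k blocks of a robust network and re-train only those on natural images) might look as follows; the block partitioning, the momentum value, and all names are illustrative rather than the exact WRN 32-10 split used in the paper.

```python
# Sketch: freeze all but the last k blocks of a pre-trained robust model, re-initialize those
# last k blocks, and return an optimizer over only the re-initialized parameters.
# `blocks` is an ordered list of nn.Module groups (earliest first); its granularity is an assumption.
import torch

def reinit_and_freeze(blocks, k):
    for block in blocks[:-k]:                       # freeze everything before the last k blocks
        for p in block.parameters():
            p.requires_grad = False
    trainable = []
    for block in blocks[-k:]:                       # re-initialize the deepest k blocks
        for m in block.modules():
            if hasattr(m, "reset_parameters"):
                m.reset_parameters()
        trainable += list(block.parameters())
    # lr = 0.001 follows the text; momentum = 0.9 is an assumption
    return torch.optim.SGD(trainable, lr=0.001, momentum=0.9)
```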
4 TRANSFER LEARNING: RECYCLING FEATURE EXTRACTORS

We study how robustness transfers when the feature extractor layers of the source network are frozen, and we retrain only the last fully connected layer (i.e. the classification layer) for the new task. Formally, the transfer learning objective is:

min_w l(z(x, θ*), y, w),   (2)

where z is the deep feature extractor function with pre-trained and now “frozen” parameters θ*, and w represents the trainable parameters of the last fully connected layer.

To investigate how well robustness transfers, we use two source models: one that is hardened by adversarial training and another that is naturally trained. We use models trained on CIFAR-100 as source models and perform transfer learning from CIFAR-100 to CIFAR-10. The results are summarized in Table 2. Compared to adversarially/naturally training the target model, transferring from a source model seems to result in a drop in natural accuracy (compare the first row of Table 1 to the first row of Table 2). This difference is wider when the source and target data distributions are dissimilar (Yosinski et al., 2014). To evaluate our method on two datasets with more similar attributes, we randomly partition CIFAR-100 into two disjoint subsets where each subset contains images corresponding to 50 classes. Table 2 shows the accuracy of transferring from one of the disjoint sets to the other (second row) and to the same set (third row). We can compare results of transfer learning with adversarial training on CIFAR-100 by averaging the results in the second and third rows of Table 2 to get the accuracy across all 100 classes of CIFAR-100.⁴ By doing so, we see that the accuracy of the transferred classifier matches that of the adversarially trained one, even though no adversarial training took place in the target domain. For completeness, we have also included experiments where we use CIFAR-10 as the source and CIFAR-100 as the target domain.

⁴ The robust CIFAR-100 classifier has 59.87% validation accuracy and 22.76% accuracy on PGD-20 adversarial examples. The average validation accuracy of the two half-CIFAR-100 classifiers on validation examples is (64.96% + 58.48%)/2 = 61.72%, while the average robustness is (25.16% + 15.86%)/2 = 20.51%.

We make the following observations from the transfer-learning results in Table 2: 1) robustness transfers: when the source model used for transfer learning is robust, the target model is also robust (although less so than the source); 2) robustness transfers between models that are more similar: if the source and target models are trained on datasets which have similar distributions (and number of classes), robustness transfers better; and 3) validation accuracy is worst if we use a robust model as the source compared to using a conventionally trained source model: if the source model is naturally trained, the natural validation accuracy is better, although the target model is then vulnerable to adversarial perturbations.
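A sketch of this last-layer transfer (Eq. 2: keep the feature extractor z(·, θ*) frozen and train only the new classification layer w on natural target-domain data) is given below; module names are illustrative.

```python
# Sketch of Eq. 2: the robust feature extractor stays frozen, and only a new linear head
# is trained on natural examples from the target task.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_transfer_head(feature_extractor, feature_dim, num_target_classes):
    for p in feature_extractor.parameters():
        p.requires_grad = False                     # theta* stays fixed
    head = nn.Linear(feature_dim, num_target_classes)
    # lr = 0.001 follows the text; momentum = 0.9 is an assumption
    return head, torch.optim.SGD(head.parameters(), lr=0.001, momentum=0.9)

def transfer_step(feature_extractor, head, optimizer, x, y):
    with torch.no_grad():
        z = feature_extractor(x)                    # frozen robust deep features z(x, theta*)
    loss = F.cross_entropy(head(z), y)              # l(z(x, theta*), y, w)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```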
4.1 TRANSFER LEARNING WITH IMAGENET MODELS

Transfer learning using models trained on ImageNet (Russakovsky et al., 2015) as the source is a common practice in industry because ImageNet feature extractors are powerful and expressive. In this section we evaluate how well robustness transfers from these models.

4.1.1 TRANSFER LEARNING USING IMAGENET

Starting from both a natural and a robust ImageNet model, we perform the same set of experiments we did in section 4. Robust ImageNet models do not withstand untargeted ℓ∞ attacks using as large an ε as those that can be used for simpler datasets like CIFAR. Following the method of Shafahi et al. (2019b), we “free train” a robust ResNet-50 on ImageNet using a replay hyper-parameter m = 4. The hardened ImageNet classifier withstands attacks bounded by ε = 5. Our robust ImageNet achieves 59.05% top-1 accuracy and roughly 27% accuracy against PGD-20 ℓ∞ ε = 5 attacks on validation examples. We experiment with using this robust ImageNet model and a conventionally trained ResNet-50 ImageNet model as the source models.

Using the ImageNet source models, we train CIFAR classifiers by retraining the last layer on natural CIFAR examples. We up-sample the 32×32-dimensional CIFAR images to 224×224 before feeding them into the ResNet-50 source models that are trained on ImageNet. For evaluation purposes, we also train robust ResNet-50 models from scratch using the method of Shafahi et al. (2019b) for the CIFAR models. To ensure that the transfer learning models and the end-to-end trained robust models have the same capacity and dimensionality, we first upsample the CIFAR images before feeding them to the ResNet-50 model. To distinguish them from the common case of training ResNet models on CIFAR images that are 32×32-dimensional, we call our models that are trained on the upsampled CIFAR datasets the upsample-first ResNets, or “u-ResNets”.

Table 3 illustrates that using a robust ImageNet model as a source results in high validation accuracy for the transferred CIFAR target models. Also, given that the ImageNet classifier by itself is 27% robust, the CIFAR-10 model maintains the majority of that 27% robustness. When we compare the end-to-end hardened classifiers (robust u-ResNets) with the transferred classifiers, we can see that while the robustness is less for the transferred case, transferred models result in considerably better performance on clean validation examples.
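The preprocessing used for the u-ResNets above (upsampling 32×32 CIFAR images to the 224×224 resolution expected by the ImageNet-trained ResNet-50) can be sketched as follows; the interpolation mode and the ImageNet normalization constants are assumptions, not details taken from the paper.

```python
# Sketch of the "upsample-first" preprocessing: CIFAR images are resized from 32x32 to 224x224
# before being fed to an ImageNet ResNet-50. Bilinear interpolation and standard ImageNet
# normalization statistics are assumptions made for illustration.
import torch
import torch.nn.functional as F

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def upsample_cifar_batch(x):
    """x: float tensor of shape (B, 3, 32, 32) with values in [0, 1]."""
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    return (x - IMAGENET_MEAN) / IMAGENET_STD
```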
4.2 LOW-DATA REGIME

As touched on before, transfer learning is more common in situations where the number of training points in the target domain is limited. Up until now, as a proof of concept, we have illustrated the majority of our experiments on the CIFAR target domains, where we have many training points per class. Hendrycks et al. (2019) show that starting from a pre-trained robust ImageNet model and fine-tuning on adversarial examples of the CIFAR domain can improve robustness beyond that of simply adversarially training CIFAR. Here, we illustrate the effect of training data size on robustness and natural performance by running various experiments on subsets of CIFAR-100 where we vary the number of training points per class (N).

We compare three different hardening methods: (1) free training/adversarial training on the target domain (Shafahi et al., 2019b); (2) fine-tuning using adversarial examples of the target task starting from the Free-4 robust ImageNet model, similar to Hendrycks et al. (2019); and (3) training a fully connected layer on top of the frozen feature extractors of the Free-4 robust ImageNet model using natural examples from the target task. For comparing the three approaches, we look at three metrics: (a) clean validation accuracy; (b) robustness against PGD-20 validation adversarial examples; and (c) the average of robustness and clean performance, ((a) + (b))/2. The results are summarized in Fig. 3. In the regimes where transfer learning is more common, adversarially robust transfer learning results in the best overall performance. Adversarially/free training the target domain results in less robustness and validation accuracy compared to fine-tuning, which highlights the importance of pre-training (Hendrycks et al., 2019). Note that in terms of computational resources required, the cost of fine-tuning on adversarial examples of the target domain is about k× that of our method, since it requires generation of adversarial examples using k-step PGD attacks (we set k = 3).

4.2.1 TRAINING DEEPER NETWORKS ON TOP OF ROBUST FEATURE EXTRACTORS

The basic transfer learning setting of section 4.1.1 only re-trains one layer for the new task. In section 4.1.1, when we transferred from the robust ImageNet to CIFAR-100, the natural training accuracy was 88.84%. Given the small number of trainable parameters left for the network (≈ 2048 × 100) and the fixed feature extractor, the network was not capable of completely fitting the training data. This means that there is potential to improve natural accuracy by learning more complex non-linear features and increasing the number of trainable parameters.

To increase representation capacity and the number of trainable parameters, instead of training a 1-layer network on top of the feature extractor, we train a multi-layer perceptron (MLP) network on top of the robust feature extractor. To keep things simple and prevent bottle-necking, every hidden layer we add has 2048 neurons. We plot the training and validation accuracies on the natural examples and the robustness (i.e. PGD-20 validation accuracy) in Fig. 4 for various numbers of hidden layers. As can be seen, adding one layer is enough to achieve 100% training accuracy. However, doing so does not result in an increase in validation accuracy. On the contrary, adding more layers can result in a slight drop in validation accuracy due to overfitting. As illustrated, we can improve generalization using simple but effective methods such as dropout (Srivastava et al., 2014) (with probability 0.25) and batch-normalization (Ioffe & Szegedy, 2015). However, the most interesting behavior we observe in this experiment is that, as we increase the number of hidden layers, the robustness to PGD-20 attacks improves. Note that this seems to happen even when we transfer from a naturally trained ImageNet model. While in the case where we have no hidden layers, robustness is 0.00% on CIFAR-100 when we use a naturally trained ImageNet model as the source, if our MLP has 1, 2, 3, or 5 hidden layers, our robustness against PGD attacks would be 0.03%, 0.09%, 0.31% and 6.61%, respectively.
However, the most interesting behavior we observe in this experiment is that, as we increase the number of hidden layers, the robustness to PGD-20 attacks improves. Note that this seems to happen even when we transfer from a naturally trained ImageNet model. While for the case where we have no hidden layers, robustness is 0.00% on CIFAR-100 when we use a naturally trained ImageNet model as the source, if our MLP has 1, 2, 3, or 5 hidden layers, our robustness against PGD attacks is 0.03%, 0.09%, 0.31%, and 6.61%, respectively. This leads us to suspect that this behavior may be an artifact of vanishing gradients for the adversary, as the softmax loss saturates when the data is fit perfectly (Athalye et al., 2018). Therefore, for this case we change our robustness measure and use the CW attack (Carlini & Wagner, 2017), which encounters fewer numerical issues because its loss function has no softmax component and does not saturate. Attacking the model from the natural source with CW-20 completely breaks the model and achieves 0.00% robustness. Most interestingly, attacking the model transferred from a robust source using the CW objective maintains robustness even when the number of hidden layers increases.
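To make the switch of attack objective concrete, the sketch below shows a CW-style margin loss that can replace the cross-entropy term inside a PGD-style loop; because it compares raw logits rather than softmax probabilities, it does not saturate when the training data is fit perfectly. This is an illustrative implementation, not the authors' exact attack code.

import torch
import torch.nn.functional as F

def cw_margin_loss(logits, y):
    """Untargeted CW-style objective: (largest incorrect logit) - (correct logit).
    Maximizing it pushes some wrong class above the correct one, with no softmax involved."""
    correct = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    mask = F.one_hot(y, num_classes=logits.size(1)).bool()
    wrong = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    return (wrong - correct).mean()

# Inside a 20-step attack loop one would maximize cw_margin_loss(model(x_adv), y)
# instead of F.cross_entropy(model(x_adv), y).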
5 ANALYSIS: ROBUST FEATURE EXTRACTORS ARE FILTERS Our experiments suggest that the robustness of neural networks arises in large part from the presence of robust feature extractors. We have used this observation to transfer both robustness and accuracy between domains using transfer learning. However, we have not yet fully delved into what it means to have a robust feature extractor. Through visualizations, Tsipras et al. (2018) studied how adversarial training causes the image gradients of neural networks to exhibit meaningful generative behavior. In other words, adversarial perturbations on hardened networks “look like” the class into which the image is perturbed. Given that optimization-based attacks build adversarial examples using the image gradient, we also visualize the image gradients of our transferred models to see if they exhibit the same generative behavior as adversarially trained nets. Fig. 5 plots the gradient of the loss w.r.t. the input image for models obtained by re-training only the last layer, and also for the case where we train MLPs on top of a robust feature extractor. The gradients for the transfer-learned models with a robust source are interpretable and “look like” the adversarial object class, while the gradients of models transferred from a natural source do not. This interpretability comes despite the fact that the source model was hardened against attacks on one dataset, and the transferred model is being tested on object classes from another. Also, we see that adding more layers on top of the feature extractor, which often leads to over-fitting, does not make the gradients less interpretable. This latter observation is consistent with our observation that added layers preserve robustness (Fig. 4). These observations, together with the success of robust transfer learning, lead us to speculate that a robust model’s feature extractors act as a “filter” that ignores irrelevant parts of the image. Figure 5: Gradients of the loss w.r.t. input images for the CIFAR-100 transfer learning experiments of sections 4.1.1 & 4.2.1. The top row contains sample CIFAR-100 images. Other rows contain image gradients of the model loss. The second row is for a model transferred from a naturally trained ImageNet source. Rows 3-5 are for models transferred from a robust ImageNet source. These rows correspond to an MLP with 0 (row 3), 1 (row 4), and 2 (row 5) hidden layers on top of the robust feature extractor. The gradients in the last three rows all show interpretable generative behavior. 6 END-TO-END TRAINING WITHOUT FORGETTING As discussed in section 4, transfer learning can preserve the robustness of the robust source model. However, it comes at the cost of decreased validation accuracy on natural examples compared to the case where we use a naturally trained source model. Consequently, there seems to be a trade-off between generalization and robustness based on the choice of the source model. For any given classifier, the trade-off between generalization and robustness is the subject of recent research (Tsipras et al., 2018; Zhang et al., 2019; Shafahi et al., 2019a). In this section, we intend to improve the overall performance of classifiers transferred from a robust source model by improving their generalization on natural images. To do so, unlike previous sections where we froze the feature extractor mainly to preserve robustness, we fine-tune the feature extractor parameters θ. Ideally, we should learn to perform well on the target dataset without catastrophically forgetting the robustness of the source model. To achieve this, we utilize lifelong learning methods. Learning without Forgetting (LwF) (Li & Hoiem, 2018) is a method for overcoming catastrophic forgetting. The method is based on distillation. In this framework, we train the target model with a loss that includes a distillation term from the previous model: min_{w,θ} l(z(x, θ), y, w) + λ_d · d(z(x, θ), z_0(x, θ_r^*)), (3) where, in our method, λ_d is the feature representation similarity penalty, and d is some distance metric between the robust model’s feature representations z_0(x, θ_r^*) and the current model’s feature representations z(x, θ). Unlike the original LwF paper, which uses the distillation loss of Hinton et al. (2015) and applies distillation to the logits, we simply choose d to be the ℓ2-norm and apply distillation to the penultimate layer.5 Our loss is designed to make the feature representations of the source and target network similar, thus preserving the robust feature representations (Fig. 6). Ideally, z(x, θ) ≈ z(x, θ_r^*). To speed up training, given robust feature extractor parameters θ_r^*, we store z_0(x, θ_r^*) for the images of the target task and load this from memory (i.e. offline) instead of performing a forward pass through the robust source network online. Therefore, in the experiments related to LwF, we do not train with data augmentation because we have not pre-computed z(x_a, θ_r^*), where x_a is the augmented image. Empirically we verified that d(z(x, θ_r^*), z(x_a, θ_r^*)) was not negligible.6 To improve performance, we follow a warm-start scheme and only train the fully connected parameters w early in training. We then cut the learning rate and continue fine-tuning both the feature extractor (θ) and w. In our experiments, we use a learning rate of 0.001, and the warm-start makes up half of the total training iterations. Starting from the pre-trained source model, we train for a total of 20,000 iterations with batch-size 128. The results with an adversarially trained CIFAR-100 model as source and CIFAR-10 as target are summarized in Table 4.7 As can be seen, having an LwF-type regularizer helps in maintaining robustness and also results in a considerable increase in validation accuracy. The trade-off between robustness and generalization can be controlled by the choice of λ_d. It seems that for some choices of λ_d, such as 0.1, robustness also increases. However, in hindsight, the increase in accuracy on PGD-20 adversarial examples is not solely due to improvement in robustness. It is due to the fact that the validation accuracy has increased and we have a better classifier overall. For easier comparisons, we have provided the transfer results without LwF at the bottom of Table 4.
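A minimal sketch of the LwF-type objective in Eq. 3, assuming the robust source features z_0 have been precomputed for each training image and passed in as z0; the helper names model.features and model.classifier (the penultimate-layer map and the final fully connected layer w) are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def lwf_loss(model, x, y, z0, lambda_d):
    """Eq. 3: task loss plus an l2 feature-distillation penalty toward the robust features z0.
    model.features(x) is assumed to return the penultimate-layer representation z(x, theta),
    and model.classifier is assumed to be the final fully connected layer w."""
    z = model.features(x)                       # current feature representation z(x, theta)
    logits = model.classifier(z)                # w applied to the features
    task = F.cross_entropy(logits, y)           # l(z(x, theta), y, w)
    distill = torch.norm(z - z0, p=2, dim=1).mean()   # d(z(x, theta), z0(x, theta_r*))
    return task + lambda_d * distill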
Note that using LwF, we can keep the robustness of the source model and also achieve clean validation accuracy comparable to a model that uses naturally trained feature extractors. In the supplementary, we show that similar conclusions can be drawn for the split CIFAR-100 task. We demonstrated in our transfer experiments that using our LwF-type loss can help decrease the generalization gap while preserving robustness. Next, we assume that the source domain is the adversarial example domain of a dataset and the target domain is the clean example domain of the same dataset. This experiment can be seen as applying transfer learning from the adversarial example domain to the natural example domain while preventing forgetting of the adversarial domain. In the case where the source and target datasets are the same (transferring from a robust CIFAR-100 model to CIFAR-100), by applying our LwF-type loss, we can improve the generalization of robust models. Our results are summarized in Table 5. 5 We do so since in section 3 we found the source of robustness to be the feature extractors, and this observation was later reinforced by the empirical results in section 4. 6 The analysis is in the supplementary. 7 Source code for LwF-based experiments: https://github.com/ashafahi/RobustTransferLWF 7 CONCLUSION We identified the feature extractors of adversarially trained models as a source of robustness, and used this observation to transfer robustness to new problem domains without adversarial training. While transferring from a natural model can achieve higher validation accuracy in comparison to transferring from a robust model, we can close the gap and maintain the initial transferred robustness by borrowing ideas from the lifelong learning literature. The success of this method suggests that a robust feature extractor is effectively a filter that sifts out the relevant components of an image that are needed to assign class labels. We hope that the insights from this study enable practitioners to build robust models in situations with limited labeled training data, or when the cost and complexity of adversarial training from scratch is untenable. Acknowledgements: Goldstein and his students were supported by the DARPA QED for RML program, the DARPA GARD program, and the National Science Foundation. A EXPERIMENT DETAILS A.1 LWF-BASED EXPERIMENTS In our LwF-based experiments, we use a batch-size of 128, a fixed learning rate of 1e-2, and fine-tune for an additional 20,000 iterations. The first 10,000 iterations are used for warm start, during which we only update the final fully connected layer’s weights. During the remaining 10,000 iterations, we update all of the weights but do not update the batch-normalization parameters. A.2 IMAGENET TO CIFAR EXPERIMENTS When freezing the feature extractor and fine-tuning on adversarial examples, we train the last fully connected layer’s weights for 50 epochs using batch-size 128. We start with an initial learning rate of 0.01 and drop the learning rate to 0.001 at epoch 30. In the case of fine-tuning on adversarial examples, we generate the adversarial examples using a 3-step PGD attack with step-size 3 and a perturbation bound ε = 5. A.3 FREE TRAINING EXPERIMENTS In all of our free-training experiments where we train the u-ResNet-50, we train for 90 epochs using a batch-size of 128. The initial learning rate used is 0.1 and we drop it by a factor of 10 at epochs 30 and 60. We use a replay parameter m = 4 and perturbation bound ε = 5.
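The warm-start schedule of section 6 and Appendix A.1 could look roughly like the sketch below: only the final fully connected layer is trained for the first half of the fine-tuning iterations, after which all weights except the batch-normalization parameters are unfrozen. The helper name and the reliance on torchvision's "fc" naming are assumptions.

import torch.nn as nn

def configure_trainable(model, warm_start):
    """Warm start (first 10,000 iterations): train only the final fully connected layer.
    Afterwards: train all weights except the batch-normalization parameters (Appendix A.1).
    The learning rate would also be cut when warm_start switches to False."""
    bn_params = set()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            bn_params.update(id(p) for p in m.parameters())
    for name, p in model.named_parameters():
        if name.startswith("fc"):            # torchvision ResNets call the last layer "fc"
            p.requires_grad = True
        elif warm_start or id(p) in bn_params:
            p.requires_grad = False
        else:
            p.requires_grad = True

# usage: warm start for the first half of the 20,000 fine-tuning iterations
# configure_trainable(model, warm_start=(iteration < 10000))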
B THE DISTANCE BETWEEN FEATURE REPRESENTATIONS OF NATURAL IMAGES AND AUGMENTED IMAGES To speed up the LwF experiments, we did not use data augmentation during training. Instead of computing the robust feature representations on the fly, we passed the entire training data of the target task through the robust network before starting training on the new target task and stored the feature representation vectors. If we were doing data augmentation, we would have to pass the entire augmented training data through the network, which would be slow and memory intensive. Alternatively, we could use the robust feature representation of the non-augmented images instead. The latter would have been feasible if the distance between the robust feature representations of the non-augmented and augmented images were very small. However, as shown in Fig. 7, this quantity is often not negligible. C LOW-DATA REGIME TRANSFER LEARNING FROM IMAGENET TO CIFAR-10 In section 4.2, we illustrated that robust transfer learning is most beneficial where we have limited data (i.e., a limited number of training points per class). We used CIFAR-100 as an illustrative target dataset. However, the overall results are not dataset-dependent. When we have a limited number of training points, robust transfer learning results in the highest overall performance (see Fig. 8 for transferring from a robust ImageNet model to smaller instances of CIFAR-10). D LWF-BASED ROBUST TRANSFER LEARNING FOR SIMILAR SOURCE AND TARGET DATASETS In Table 6 we conduct LwF experiments on the split CIFAR-100 task, which is more suited for transfer learning due to the similarities between the source and target datasets. In these situations, the LwF regularizer on the feature representations still works and can improve generalization without becoming vulnerable to adversarial examples. If we take the average performance of the robust classifiers on the split tasks (average of the robust half CIFAR-100 model and the LwF-setting model for λ_d = 0.01) we get (63.32 + 64.96)/2 = 64.14% average validation accuracy and 20.42% average robustness, which is comparable to the case where we adversarially trained on the entire CIFAR-100 dataset (Table 1). E IMPROVING GENERALIZATION OF THE CIFAR-10 ADVERSARIALLY TRAINED MODEL Similar to the case of improving the generalization for CIFAR-100, we use our LwF-based loss function to transfer from the robust CIFAR-10 domain to the natural CIFAR-10 domain. We summarize the results in Table 7.
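Relating to Appendix B, the following sketch shows how the robust feature representations might be precomputed and cached offline; robust_model.features is assumed to return the penultimate-layer representation, and the loader is assumed to iterate the non-augmented training images in a fixed order.

import torch

@torch.no_grad()
def cache_robust_features(robust_model, loader, device="cpu"):
    """Pass the (non-augmented) target training set through the frozen robust network once
    and store the penultimate-layer features, so the LwF loss never queries the source model online."""
    robust_model.eval()
    feats = []
    for x, _ in loader:                           # loader must not shuffle or augment
        z = robust_model.features(x.to(device))   # z0(x, theta_r*)
        feats.append(z.cpu())
    return torch.cat(feats)                       # one row of features per training image

# z0_cache = cache_robust_features(robust_model, plain_train_loader)
# torch.save(z0_cache, "z0_cifar_train.pt")       # hypothetical cache file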
1. What is the main contribution of the paper regarding transfer learning using standard training? 2. What are the strengths of the paper, particularly in its exploration of robust models and representations? 3. Do you have any questions or concerns about the paper's experiments and results? 4. How does the reviewer assess the significance and relevance of the paper's findings to the machine learning community? 5. Are there any suggestions for improving the paper's content or experimental design?
Review
Review Paper summary: This paper explores the problem of robust transfer learning using only standard training (as opposed to adversarial training (AT)) on the target domain. The authors start by highlighting that intermediate representations learned by adversarially trained networks are themselves fairly robust. Then they propose two strategies for robust transfer from a robust model trained on the source domain: (1) naturally fine-tuning the final linear layer on the target domain and (2) naturally fine-tuning all the layers using lifelong learning strategies. They study transfer between CIFAR10 and CIFAR100, as well as from ImageNet to CIFAR10/100. High-level comments: Overall, I find the paper interesting and well-written. Prior work from Hendrycks et al. showed that AT on the source domain followed by *adversarial* fine-tuning on the target domain attains better performance as compared to just AT on the target domain. The main contribution of this paper is to show that using instead careful *natural* fine-tuning on the target domain is sufficient to recover a reasonable amount of this robustness. Even though the clean/robust accuracy of the proposed approach is lower than just doing adversarial training/prior work from Hendrycks et al., I feel this paper could be useful to the community for two main reasons: 1. The authors perform a nice exploration of the thesis that robust models have robust representations, and how this connects to transfer learning. In particular, the experiments in Figure 1 (the effect of naturally re-training later layers of a robust network on its robustness) and Figure 5 (where the authors show that their naturally fine-tuned models have some of the unexpected benefits from Tsipras et al.) seem particularly interesting. 2. Despite its lower accuracy, this approach could be useful in settings where data is scarce or compute is expensive, and hence adversarial training on the target domain is not successful. Specific comments/questions: i. Could the authors clarify what they mean by point 3 (re: validation accuracy drop) below Table 3? As far as I can tell, the drop in clean validation accuracy between Table 1 and Table 2 is similar for both the naturally and adversarially pre-trained models. ii. In Table 2, it would also be interesting to see the performance when the source domain is CIFAR10 and the target domain is CIFAR100. iii. For Table 2, is eps = 8? This should be mentioned in the caption, as it is important to highlight that Tables 2 and 3 are not directly comparable. iv. Shouldn't the experiment in Figure 3 be repeated for CIFAR10 as well? The authors should add this result to the paper, even if in the appendix. It is important to verify that this trend is not specific to CIFAR100 and holds across datasets (even though CIFAR10/100 are not too different). v. The comment at the end of page 6 re: natural model is confusing (“Note, this seems to...perfectly”)---as far as I can tell, Figure 4 does not include the results of fine-tuning a naturally trained model. vi. General comment motivated by the comment ("Note...perfectly") mentioned in (v) above: For all the adversarial evaluation in the paper, the authors should also try CW attacks/black-box attacks to get a more confident estimate of their model’s robustness. vii. The authors reference prior work on the tradeoff between robustness and accuracy and motivate Section 6 as an avenue to alleviate this trade-off for their model.
However, I don’t see the lower performance of their model as an instance of this trade-off---the model in the paper performs worse in terms of both clean and adversarial accuracy. The approach proposed in Section 6 seems interesting, but more as an approach to improve the *overall* performance of the model. The authors mention this in retrospect, but I think the narrative of this section should be modified to make this clearer. viii. In Table 5, in the experiments corresponding to CIFAR100+ -> CIFAR100, is the dataset split into two halves (for the source and target domains) or is the fine-tuning performed on the same data? In general, I find it odd that natural fine-tuning on the *same data* can improve both the clean and adversarial accuracy of the model (compared to the CIFAR100+ robust baseline). Is the robust model trained long enough/with enough hyperparameter search? Overall, the exploration in the paper seems novel and could be useful to the community. Thus, I recommend acceptance.
ICLR
Title Adversarially robust transfer learning Abstract Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of “fine tuning” a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness. 1 INTRODUCTION Deep neural networks achieve human-like accuracy on a range of tasks when sufficient training data and computing power are available. However, when large datasets are unavailable for training, or practitioners require a low-cost training strategy, transfer learning methods are often used. This process starts with a source network (pre-trained on a task for which large datasets are available), which is then re-purposed to act on the target problem, usually with minimal re-training on a small dataset (Yosinski et al., 2014; Pan & Yang, 2009). While transfer learning greatly accelerates the training pipeline and reduces data requirements in the target domain, it does not address the important issue of model robustness. It is well-known that naturally trained models often completely fail under adversarial inputs (Biggio et al., 2013; Szegedy et al., 2013). As a result, researchers and practitioners often resort to adversarial training, in which adversarial examples are crafted on-the-fly during network training and injected into the training set. This process greatly exacerbates the problems that transfer learning seeks to avoid. The high cost of creating adversarial examples increases training time (often by an order of magnitude or more). Furthermore, robustness is known to suffer when training on a small dataset (Schmidt et al., 2018). To make things worse, high-capacity models are often needed to achieve good robustness (Madry et al., 2017; Kurakin et al., 2016; Shafahi et al., 2019b), but these models may over-fit badly on small datasets. CONTRIBUTIONS The purpose of this paper is to study the adversarial robustness of models produced by transfer learning. We begin by observing that robust networks contain robust feature extractors, which are resistant to adversarial perturbations in different domains. Such robust features can be used as a basis for semi-supervised transfer learning, which only requires re-training the last layer of a network. To demonstrate the power of robust transfer learning, we transfer a robust ImageNet source model onto the CIFAR domain, achieving both high accuracy and robustness in the new domain without adversarial training.
We use visualization methods to explore properties of robust feature extractors. Then, we consider the case of transfer learning by “fine-tuning.” In this case, the source network is re-trained end-to-end using a small number of epochs on the target domain. Unfortunately, this end-to-end process does not always retain the robustness of the source domain; the network “forgets” the robust feature representations learned on the source task. To address this problem, we use recently proposed lifelong learning methods that prevent the network from forgetting the robustness it once learned. Using our proposed methods, we construct robust models that generalize well. In particular, we improve the generalization of a robust CIFAR-100 model by roughly 2% while preserving its robustness. 2 BACKGROUND Adversarial examples fall within the category of evasion attacks—test-time attacks in which a perturbation is added to a natural image before inference. Adversarial attacks are most often crafted using a differentiable loss function that measures the performance of a classifier on a chosen image. In the case of norm-constrained attacks (which form the basis of most standard benchmark problems), the adversary solves max_δ l(x + δ, y, θ) s.t. ‖δ‖_p ≤ ε, (1) where θ are the (already trained and frozen) parameters of the classifier c(x, θ) → ŷ that maps an image to a class, l is the proxy loss used for classification (often cross-entropy), δ is the image perturbation, (x, y) is the natural image and its true class, and ‖·‖_p is some ℓ_p-norm.1 The optimization problem in Eq. 1 aims to find a bounded perturbation that maximizes the cross-entropy loss given the correct label. There are many variants of this process, including DeepFool (Moosavi-Dezfooli et al., 2016), L-BFGS (Szegedy et al., 2013), and CW (Carlini & Wagner, 2017). Many researchers have studied methods for building a robust network that were later shown to be ineffective when attacked with stronger adversaries (Athalye et al., 2018). Adversarial training (Szegedy et al., 2013) is one of the defenses that was not broken by Athalye et al. (2018). While adversarial training using a weak adversary such as the FGSM attack (Goodfellow et al., 2015) can be broken even by single-step attacks which add a simple random step prior to the FGSM step (Tramèr et al., 2017), adversarial training using a strong attack has successfully improved robustness. Madry et al. (2017) showed that a PGD attack (which is a BIM attack (Kurakin et al., 2016) with an initial random step and projection) is a strong enough attack to achieve promising adversarial training results. We will refer to this training method as PGD adversarial training. PGD adversarial training achieves good robustness on bounded attacks for MNIST (LeCun et al., 1998) and acceptable robustness on CIFAR-10 (Krizhevsky & Hinton, 2009) classifiers.
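A minimal PGD sketch of the attack in Eq. 1 for the ℓ∞ case: a random start inside the ε-ball followed by signed gradient ascent steps on the classification loss, projecting back onto the ball after every step. Pixel values are assumed to lie in [0, 1]; ε, the step size, and the number of steps are placeholders rather than the paper's exact settings.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, step, iters=20):
    """PGD for Eq. 1 with an l_inf constraint: random start, then signed gradient ascent
    on the classification loss, projecting back onto the eps-ball after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)          # l(x + delta, y, theta)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()      # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project: ||delta||_inf <= eps
        x_adv = x_adv.clamp(0.0, 1.0)                    # stay in the valid image range
    return x_adv.detach()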
Tsipras et al. (2018) show that adversarial training with strong PGD adversaries has many benefits in addition to robustness. They also state that while adversarial training may improve generalization in regimes where training data is limited (especially on MNIST), it may be at odds with generalization in regimes where data is available. This trade-off was also recently studied by Zhang et al. (2019), Su et al. (2018), and Shafahi et al. (2019a). While, to the best of our knowledge, the transferability of robustness has not been studied in depth, Hendrycks et al. (2019) studied the case of adversarially training models that were pre-trained on different domains. Our work is fundamentally different in that we seek to transfer robustness without resorting to costly and data-hungry adversarial training. We train the target model on natural examples only, which allows us to directly study how well robustness transfers. Additionally, this allows us to have better generalization and achieve higher accuracy on validation examples. While, as Hendrycks et al. (2019) state, fine-tuning on adversarial examples built for the target domain can improve the robustness of relatively large datasets such as CIFAR-10 and CIFAR-100 compared to adversarial training from scratch on the target domain, we show that in regimes of limited data (where transfer learning is more common), adversarially robust transfer learning can lead to better results measured in terms of both robustness and clean validation accuracy. 1 By default we will use the ℓ∞-norm in this paper. 3 THE ROBUSTNESS OF DEEP FEATURES In this section, we explore the robustness of different network layers, and demonstrate that robust networks rely on robust deep features. To do so, we start from robust classifiers (c(θ_r)) for the CIFAR-100 and CIFAR-10 datasets (Krizhevsky & Hinton, 2009), and update θ by training on natural examples. In each experiment, we re-initialize the last k layers/blocks of the network, and re-train just those layers. We start by re-initializing just the last layer, then the last two, and so on until we re-initialize all the layers. We use the adversarially trained Wide-ResNet 32-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 from Madry et al. (2017) as our robust model for CIFAR-10. We also adversarially train our own robust classifier for CIFAR-100 using the code from Madry et al. (2017). To keep things consistent, we use the same hyper-parameters used by Madry et al. (2017) for adversarially training CIFAR-10 to adversarially train the CIFAR-100 model.2 The performance of the CIFAR-10 and CIFAR-100 models on natural and adversarial examples is summarized in Table 1. To measure robustness, we evaluate the models on adversarial examples built using PGD attacks. We break the WRN 32-10 model into 17 blocks, which are depicted in Fig. 2. In each experiment, we first re-initialize the k deepest blocks (blocks 1 through k) and then train the parameters of those blocks on natural images.3 We train for 20,000 iterations using momentum SGD and a learning rate of 0.001. We then incrementally unfreeze and train more blocks. For each experiment, we evaluate the newly trained model’s accuracy on validation adversarial examples built with a 20-step PGD ℓ∞ attack with ε = 8. Fig. 1 shows that robustness does not drop if only the final layers of the networks are re-trained on natural examples. In fact, there is a slight increase in robustness compared to the baseline PGD-7 adversarially trained models when we just retrain the last batch-normalization block and fully connected block. As we unfreeze and train more blocks, the network’s robustness suddenly drops. This leads us to believe that a hardened network’s robustness is mainly due to robust deep feature representations, and robustness is preserved if we re-train on top of deep features. Now that we have identified feature extractors as a source of robustness, it is natural to investigate whether robustness is preserved when transfer learning using robust feature extractors. We will study two different approaches for transferring robustness across datasets: one in which only the last layer is re-trained, and one with end-to-end re-training. 2 We adversarially train the WRN 32-10 on CIFAR-100 using a 7-step ℓ∞ PGD attack with step-size = 2 and ε = 8. We train for 80,000 iterations with a batch-size of 128. 3 In this experiment, we use standard data augmentation techniques.
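The block-retraining experiment of section 3 can be sketched as follows, assuming the network's blocks are available as an ordered list from input to output; reset_parameters re-initializes the layers that define each block, which is a simplification of the paper's exact re-initialization. The unfrozen parameters would then be trained on natural examples with momentum SGD at learning rate 0.001 for 20,000 iterations, as described above.

import torch.nn as nn

def reinit_and_unfreeze_last_k(blocks, k):
    """blocks: list of nn.Modules ordered from input to output.
    Re-initialize the k deepest blocks and make only them trainable; freeze the rest."""
    for i, block in enumerate(blocks):
        deep = i >= len(blocks) - k
        if deep:
            for m in block.modules():
                if hasattr(m, "reset_parameters"):
                    m.reset_parameters()        # re-initialize conv/linear/BN layers
        for p in block.parameters():
            p.requires_grad = deep              # train only the re-initialized blocks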
4 TRANSFER LEARNING: RECYCLING FEATURE EXTRACTORS We study how robustness transfers when the feature extractor layers of the source network are frozen, and we retrain only the last fully connected layer (i.e. the classification layer) for the new task. Formally, the transfer learning objective is: min_w l(z(x, θ*), y, w), (2) where z is the deep feature extractor function with pre-trained and now “frozen” parameters θ*, and w represents the trainable parameters of the last fully connected layer. To investigate how well robustness transfers, we use two source models: one that is hardened by adversarial training and another that is naturally trained. We use models trained on CIFAR-100 as source models and perform transfer learning from CIFAR-100 to CIFAR-10. The results are summarized in Table 2. Compared to adversarially/naturally training the target model, transferring from a source model seems to result in a drop in natural accuracy (compare the first row of Table 1 to the first row of Table 2). This difference is wider when the source and target data distributions are dissimilar (Yosinski et al., 2014). To evaluate our method on two datasets with more similar attributes, we randomly partition CIFAR-100 into two disjoint subsets where each subset contains images corresponding to 50 classes. Table 2 shows the accuracy of transferring from one of the disjoint sets to the other (second row) and to the same set (third row). We can compare the results of transfer learning with adversarial training on CIFAR-100 by averaging the results in the second and third rows of Table 2 to get the accuracy across all 100 classes of CIFAR-100.4 By doing so, we see that the accuracy of the transferred classifier matches that of the adversarially trained one, even though no adversarial training took place in the target domain. For completeness, we have also included experiments where we use CIFAR-10 as the source and CIFAR-100 as the target domain. We make the following observations from the transfer-learning results in Table 2: 1) robustness transfers: when the source model used for transfer learning is robust, the target model is also robust (although less so than the source); 2) robustness transfers between models that are more similar: if the source and target models are trained on datasets which have similar distributions (and number of classes), robustness transfers better; and 3) validation accuracy is worst if we use a robust model as the source compared to using a conventionally trained source model: if the source model is naturally trained, the natural validation accuracy is better, although the target model is then vulnerable to adversarial perturbations. 4 The robust CIFAR-100 classifier has 59.87% validation accuracy and 22.76% accuracy on PGD-20 adversarial examples. The average validation accuracy of the two half-CIFAR-100 classifiers on validation examples is (64.96% + 58.48%)/2 = 61.72%, while the average robustness is (25.16% + 15.86%)/2 = 20.51%.
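The disjoint half-CIFAR-100 setup used in Table 2 can be reproduced roughly as follows: the 100 classes are randomly partitioned into two 50-class subsets and labels are remapped to 0-49 within each half. The helper name and the seed are illustrative assumptions, not the authors' exact split.

import torch
import torchvision
from torch.utils.data import Subset

def split_cifar100(root="./data", seed=0, train=True):
    """Randomly partition the 100 CIFAR-100 classes into two disjoint 50-class halves,
    returning one dataset per half with labels remapped to 0..49."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(100, generator=g).tolist()
    halves = []
    for classes in (perm[:50], perm[50:]):
        remap = {c: i for i, c in enumerate(classes)}
        ds = torchvision.datasets.CIFAR100(root, train=train, download=True,
                                           target_transform=lambda t, m=remap: m[t])
        idx = [i for i, t in enumerate(ds.targets) if t in remap]
        halves.append(Subset(ds, idx))   # keeps only the images of this half's classes
    return halves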
1. What is the main contribution of the paper regarding transfer learning and adversarial robustness? 2. What are the strengths of the proposed approach, particularly in preserving robustness and improving validation accuracy? 3. How does the reviewer assess the significance of the individual contributions, such as retraining the last layer and transferring learned robust representation? 4. Can you provide more details about the baselines considered in the experiments? 5. How does the paper address the problem of computational efficiency and sample intensity in training robust models?
Review
Review The paper studies transfer learning from the point of view of adversarial robustness. The goal is, given a robust deep neural network classifier for a source domain, learn a robust classifier for a target domain as efficiently and with as few samples as possible. The authors empirically evaluate different strategies and compare with relevant baselines. At a high level, the paper addresses an interesting problem. Robust models are quite computationally and sample intensive to train, so exploring pre-training is a reasonable way to deal with small datasets or computational constraints. The authors perform a diverse set of experiments from which I identified the following individual contributions: a) Retraining the last layer of the model on natural examples preserves robustness. Robustness degrades smoothly when pre-training progressively more layers. This is an interesting contribution providing evidence that robust models do learn in fact robust input representations/features. b) Transferring a learned robust representation from a source to a target domain preserves its robustness (training a linear layer on top of it leads to a robust classifier). This provides further evidence that robust models learn _general purpose_ robust features of the input, while establishing robust pre-training as a valid strategy for cheaper robust models. The baselines considered are: 1) adversarial training on target domain which always underperforms the proposed method, 2) fine-tuning on adversarial samples from the target domain which performs better when there are a lot of samples from that domain and worse when there are only a few (this method is also more computationally expensive than transfer learning). c) The "perceptually-aligned" saliency maps of Tsipras et al. 2018 are also a property of robust models obtained through transfer learning. This illustrates that these saliency maps can also arise for out-of-distribution inputs and hence are likely to correspond to general, high-level features. d) Fine-tuning all layers of the model while ensuring that the representations stay close to the original ones for natural examples can lead to transfer with improved validation accuracy and robustness (even when the source and target domains are the same). This is an interesting improvement over the simpler transfer methods producing competitive results. Overall, the paper contains an experimental study that, in my opinion, is thorough, presents interesting findings, and contains the necessary ablations. I believe that this paper would be of interest to the adversarial ML community and I hence recommend acceptance.
ICLR
Title Adversarially robust transfer learning Abstract Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of “fine tuning” a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness. 1 INTRODUCTION Deep neural networks achieve human-like accuracy on a range of tasks when sufficient training data and computing power is available. However, when large datasets are unavailable for training, or pracitioners require a low-cost training strategy, transfer learning methods are often used. This process starts with a source network (pre-trained on a task for which large datasets are available), which is then re-purposed to act on the target problem, usually with minimal re-training on a small dataset (Yosinski et al., 2014; Pan & Yang, 2009). While transfer learning greatly accelerates the training pipeline and reduces data requirements in the target domain, it does not address the important issue of model robustness. It is well-known that naturally trained models often completely fail under adversarial inputs (Biggio et al., 2013; Szegedy et al., 2013). As a result, researchers and practitioners often resort to adversarial training, in which adversarial examples are crafted on-the-fly during network training and injected into the training set. This process greatly exacerbates the problems that transfer learning seeks to avoid. The high cost of creating adversarial examples increases training time (often by an order of magnitude or more). Furthermore, robustness is known to suffer when training on a small dataset (Schmidt et al., 2018). To make things worse, high-capacity models are often needed to achieve good robustness (Madry et al., 2017; Kurakin et al., 2016; Shafahi et al., 2019b), but these models may over-fit badly on small datasets. CONTRIBUTIONS The purpose of this paper is to study the adversarial robustness of models produced by transfer learning. We begin by observing that robust networks contain robust feature extractors, which are resistant to adversarial perturbations in different domains. Such robust features can be used ∗equal contribution †University of Maryland ‡Cornell University as a basis for semi-supervised transfer learning, which only requires re-training the last layer of a network. To demonstrate the power of robust transfer learning, we transfer a robust ImageNet source model onto the CIFAR domain, achieving both high accuracy and robustness in the new domain without adversarial training. 
We use visualization methods to explore properties of robust feature extractors. Then, we consider the case of transfer learning by “fine-tuning.” In this case, the source network is re-trained end-to-end using a small number of epochs on the target domain. Unfortunately, this end-to-end process does not always retain the robustness of the source domain; the network “forgets” the robust feature representations learned on the source task. To address this problem, we use recently proposed lifelong learning methods that prevent the network from forgetting the robustness it once learned. Using our proposed methods, we construct robust models that generalize well. In particular, we improve the generalization of a robust CIFAR-100 model by roughly 2% while preserving its robustness. 2 BACKGROUND Adversarial examples fall within the category of evasion attacks—test-time attacks in which a perturbation is added to a natural image before inference. Adversarial attacks are most often crafted using a differentiable loss function that measures the performance of a classifier on a chosen image. In the case of norm-constrained attacks (which form the basis of most standard benchmark problems), the adversary solves max_δ l(x + δ, y, θ) s.t. ‖δ‖_p ≤ ε, (1) where θ are the (already trained and frozen) parameters of the classifier c(x, θ) → ŷ that maps an image to a class, l is the proxy loss used for classification (often cross-entropy), δ is the image perturbation, (x, y) is the natural image and its true class, and ‖·‖_p is some ℓ_p-norm (by default we use the ℓ∞-norm in this paper). The optimization problem in Eq. 1 aims to find a bounded perturbation that maximizes the cross-entropy loss given the correct label. There are many variants of this process, including DeepFool (Moosavi-Dezfooli et al., 2016), L-BFGS (Szegedy et al., 2013), and CW (Carlini & Wagner, 2017). Many researchers have studied methods for building a robust network which were later shown to be ineffective when attacked with stronger adversaries (Athalye et al., 2018). Adversarial training (Szegedy et al., 2013) is one of the defenses that was not broken by Athalye et al. (2018). While adversarial training using a weak adversary such as the FGSM attack (Goodfellow et al., 2015) can be broken even by single-step attacks which add a simple random step prior to the FGSM step (Tramèr et al., 2017), adversarial training using a strong attack has successfully improved robustness. Madry et al. (2017) showed that a PGD attack (which is a BIM attack (Kurakin et al., 2016) with an initial random step and projection) is a strong enough attack to achieve promising adversarial training results. We will refer to this training method as PGD adversarial training. PGD adversarial training achieves good robustness on bounded attacks for MNIST (LeCun et al., 1998) and acceptable robustness on CIFAR-10 (Krizhevsky & Hinton, 2009) classifiers. Tsipras et al. (2018) show that adversarial training with strong PGD adversaries has many benefits in addition to robustness. They also state that while adversarial training may improve generalization in regimes where training data is limited (especially on MNIST), it may be at odds with generalization in regimes where data is available. This trade-off was also recently studied by Zhang et al. (2019), Su et al. (2018), and Shafahi et al. (2019a). While, to the best of our knowledge, the transferability of robustness has not been studied in depth, Hendrycks et al.
(2019) studied the case of adversarially training models that were pre-trained on different domains. Our work is fundamentally different in that we seek to transfer robustness without resorting to costly and data-hungry adversarial training. We train the target model on natural examples only, which allows us to directly study how well robustness transfers. Additionally, this allows us to have better generalization and achieve higher accuracy on validation examples. While, as Hendrycks et al. (2019) state, fine-tuning on adversarial examples built for the target domain can improve robustness on relatively large datasets such as CIFAR-10 and CIFAR-100 compared to adversarial training from scratch on the target domain, we show that in the regimes of limited data (where transfer learning is more common), adversarially robust transfer learning can lead to better results measured in terms of both robustness and clean validation accuracy. 3 THE ROBUSTNESS OF DEEP FEATURES In this section, we explore the robustness of different network layers, and demonstrate that robust networks rely on robust deep features. To do so, we start from robust classifiers (c(θ_r)) for the CIFAR-100 and CIFAR-10 datasets (Krizhevsky & Hinton, 2009), and update θ by training on natural examples. In each experiment, we re-initialize the last k layers/blocks of the network, and re-train just those layers. We start by re-initializing just the last layer, then the last two, and so on until we re-initialize all the layers. We use the adversarially trained Wide-ResNet 32-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 from Madry et al. (2017) as our robust model for CIFAR-10. We also adversarially train our own robust classifier for CIFAR-100 using the code from Madry et al. (2017). To keep things consistent, we use the same hyper-parameters used by Madry et al. (2017) for adversarially training CIFAR-10 to adversarially train the CIFAR-100 model (we adversarially train the WRN 32-10 on CIFAR-100 using a 7-step ℓ∞ PGD attack with step-size 2 and ε = 8, training for 80,000 iterations with a batch-size of 128). The performance of the CIFAR-10 and CIFAR-100 models on natural and adversarial examples is summarized in Table 1. To measure robustness, we evaluate the models on adversarial examples built using PGD attacks. We break the WRN 32-10 model into 17 blocks, which are depicted in Fig. 2. In each experiment, we first re-initialize the k deepest blocks (blocks 1 through k) and then train the parameters of those blocks on natural images (with standard data augmentation techniques). We train for 20,000 iterations using Momentum SGD and a learning rate of 0.001. We then incrementally unfreeze and train more blocks. For each experiment, we evaluate the newly trained model’s accuracy on validation adversarial examples built with a 20-step PGD ℓ∞ attack with ε = 8. Fig. 1 shows that robustness does not drop if only the final layers of the networks are re-trained on natural examples. In fact, there is a slight increase in robustness compared to the baseline PGD-7 adversarially trained models when we just retrain the last batch-normalization block and fully connected block. As we unfreeze and train more blocks, the network’s robustness suddenly drops. This leads us to believe that a hardened network’s robustness is mainly due to robust deep feature representations and robustness is preserved if we re-train on top of deep features. Now that we have identified feature extractors as a source of robustness, it is natural to investigate whether robustness is preserved when performing transfer learning with robust feature extractors.
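To make the robustness evaluation concrete, the following is a minimal PyTorch-style sketch of the untargeted PGD ℓ∞ attack of Eq. 1 (20 steps for evaluation). It is an illustration rather than the implementation used in the paper: the model and hyperparameter names are placeholders, and it assumes images scaled to [0, 1], so ε and the step size are expressed in 1/255 units.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, step_size=2 / 255, steps=20):
    """Untargeted PGD attack under an l_inf budget, in the spirit of Eq. 1.

    Assumes `model` maps images in [0, 1] to logits; eps and step_size are
    therefore given in 1/255 units (an assumption, not the paper's exact setup).
    """
    x = x.detach()
    # Random start inside the l_inf ball, then clip to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)          # l(x + delta, y, theta)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()      # gradient-sign ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto ||delta||_inf <= eps
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

In the block re-training experiments above, an attack of this form is applied to the validation examples after each partial re-training run to measure robustness.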
We will study two different approaches for transferring robustness across datasets: one in which only the last layer is re-trained, and one with end-to-end re-training. 4 TRANSFER LEARNING: RECYCLING FEATURE EXTRACTORS We study how robustness transfers when the feature extractor layers of the source network are frozen, and we retrain only the last fully connected layer (i.e. the classification layer) for the new task. Formally, the transfer learning objective is: min_w l(z(x, θ∗), y, w) (2) where z is the deep feature extractor function with pre-trained and now “frozen” parameters θ∗, and w represents the trainable parameters of the last fully connected layer. To investigate how well robustness transfers, we use two source models: one that is hardened by adversarial training and another that is naturally trained. We use models trained on CIFAR-100 as source models and perform transfer learning from CIFAR-100 to CIFAR-10. The results are summarized in Table 2. Compared to adversarially/naturally training the target model, transferring from a source model seems to result in a drop in natural accuracy (compare the first row of Table 1 to the first row of Table 2). This difference is wider when the source and target data distributions are dissimilar (Yosinski et al., 2014). To evaluate our method on two datasets with more similar attributes, we randomly partition CIFAR-100 into two disjoint subsets where each subset contains images corresponding to 50 classes. Table 2 shows the accuracy of transferring from one of the disjoint sets to the other (second row) and to the same set (third row). We can compare results of transfer learning with adversarial training on CIFAR-100 by averaging the results in the second and third rows of Table 2 to get the accuracy across all 100 classes of CIFAR-100 (the robust CIFAR-100 classifier has 59.87% validation accuracy and 22.76% accuracy on PGD-20 adversarial examples, while the average validation accuracy of the two half-CIFAR-100 classifiers on validation examples is (64.96% + 58.48%)/2 = 61.72% and their average robustness is (25.16% + 15.86%)/2 = 20.51%). By doing so, we see that the accuracy of the transferred classifier matches that of the adversarially trained one, even though no adversarial training took place in the target domain. For completeness, we have also included experiments where we use CIFAR-10 as the source and CIFAR-100 as the target domain. We make the following observations from the transfer-learning results in Table 2. 1) Robustness transfers: when the source model used for transfer learning is robust, the target model is also robust (although less so than the source). 2) Robustness transfers between models that are more similar: if the source and target models are trained on datasets which have similar distributions (and number of classes), robustness transfers better. 3) Validation accuracy is worst if we use a robust model as the source compared to using a conventionally trained source model: if the source model is naturally trained, the natural validation accuracy is better, although the target model is then vulnerable to adversarial perturbations.
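A minimal sketch of the last-layer transfer objective in Eq. 2 is given below: the feature extractor z(·, θ∗) is frozen and only a new linear classifier w is trained on natural examples from the target task. The feature extractor, data loader, feature dimension, and optimizer settings are illustrative assumptions, not the exact values used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def transfer_last_layer(feature_extractor, train_loader, num_classes,
                        feat_dim=640, steps=20000, lr=1e-3, device="cpu"):
    """Train only a new linear head w on top of frozen features z(x, theta*) (Eq. 2).

    `feature_extractor` is assumed to return penultimate-layer features;
    feat_dim=640 corresponds to a WRN 32-10 penultimate layer (an assumption).
    """
    feature_extractor.eval().to(device)
    for p in feature_extractor.parameters():
        p.requires_grad_(False)                            # theta* stays frozen

    head = nn.Linear(feat_dim, num_classes).to(device)     # trainable parameters w
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)

    it = 0
    while it < steps:
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = feature_extractor(x)               # z(x, theta*)
            loss = F.cross_entropy(head(feats), y)         # l(z(x, theta*), y, w)
            opt.zero_grad()
            loss.backward()
            opt.step()
            it += 1
            if it >= steps:
                break
    return head
```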
4.1 TRANSFER LEARNING WITH IMAGENET MODELS Transfer learning using models trained on ImageNet (Russakovsky et al., 2015) as the source is a common practice in industry because ImageNet feature extractors are powerful and expressive. In this section we evaluate how well robustness transfers from these models. 4.1.1 TRANSFER LEARNING USING IMAGENET Starting from both a natural and a robust ImageNet model, we perform the same set of experiments we did in section 4. Robust ImageNet models do not withstand untargeted ℓ∞ attacks using as large an ε as those that can be used for simpler datasets like CIFAR. Following the method of Shafahi et al. (2019b), we “free train” a robust ResNet-50 on ImageNet using replay hyper-parameter m = 4. The hardened ImageNet classifier withstands attacks bounded by ε = 5. Our robust ImageNet achieves 59.05% top-1 accuracy and roughly 27% accuracy against PGD-20 ℓ∞ ε = 5 attacks on validation examples. We experiment with using this robust ImageNet model and a conventionally trained ResNet-50 ImageNet model as the source models. Using the ImageNet source models, we train CIFAR classifiers by retraining the last layer on natural CIFAR examples. We up-sample the 32×32-dimensional CIFAR images to 224×224 before feeding them into the ResNet-50 source models that are trained on ImageNet. For evaluation purposes, we also train robust ResNet-50 models from scratch using the method of Shafahi et al. (2019b) for the CIFAR models. To ensure that the transfer learning models and the end-to-end trained robust models have the same capacity and dimensionality, we first upsample the CIFAR images before feeding them to the ResNet-50 model. To distinguish these from the common case of training ResNet models on CIFAR images that are 32 × 32-dimensional, we call our models that are trained on the upsampled CIFAR datasets the upsample-first ResNets or “u-ResNets”. Table 3 illustrates that using a robust ImageNet model as a source results in high validation accuracy for the transferred CIFAR target models. Also, given that the ImageNet classifier by itself is 27% robust, the CIFAR-10 model maintains the majority of that 27% robustness. When we compare the end-to-end hardened classifiers (robust u-ResNets) with the transferred classifiers, we can see that while the robustness is less for the transferred case, transferred models result in considerably better performance on clean validation examples. 4.2 LOW-DATA REGIME As touched on before, transfer learning is more common in situations where the number of training points in the target domain is limited. Up until now, as a proof of concept, we have illustrated the majority of our experiments on the CIFAR target domains where we have many training points per class. Hendrycks et al. (2019) show that starting from a pre-trained robust ImageNet model and fine-tuning on adversarial examples of the CIFAR domain can improve robustness beyond that of simply adversarially training on CIFAR. Here, we illustrate the effect of training data size on robustness and natural performance by running various experiments on subsets of CIFAR-100 where we vary the number of training points per class (N).
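Before turning to that comparison, the “upsample-first” construction of Section 4.1.1 lends itself to a short sketch: CIFAR images are bilinearly upsampled to 224×224 and then passed through a ResNet-50 whose final layer is replaced for the target task. The snippet below assumes a standard torchvision ResNet-50 and is only meant to illustrate the wiring; the exact resizing and normalization choices used in the experiments may differ.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class UpsampleFirstResNet(nn.Module):
    """'u-ResNet': upsample 32x32 CIFAR inputs to 224x224, then apply ResNet-50."""

    def __init__(self, num_classes, pretrained_backbone=None, freeze_backbone=True):
        super().__init__()
        self.upsample = nn.Upsample(size=(224, 224), mode="bilinear", align_corners=False)
        # Either a (robust) ImageNet backbone passed in, or a fresh ResNet-50.
        self.backbone = pretrained_backbone if pretrained_backbone is not None else resnet50()
        if freeze_backbone:
            for p in self.backbone.parameters():
                p.requires_grad_(False)
        # Replace the 1000-way ImageNet head with a head for the target task
        # (the new fc layer is trainable even when the backbone is frozen).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):                      # x: (B, 3, 32, 32) CIFAR batch
        return self.backbone(self.upsample(x))

# Example: transfer a backbone to CIFAR-10 by training only the new fc layer.
model = UpsampleFirstResNet(num_classes=10)
logits = model(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```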
We compare three different hardening methods: (1) Free-training/adversarial training the target domain (Shafahi et al., 2019b); (2) fine-tuning using adversarial examples of the target task starting from the Free-4 robust ImageNet model similar to (Hendrycks et al., 2019); and (3) training a fully connected layer on top of the frozen feature extractors of the Free-4 robust ImageNet model using natural examples from the target task. For comparing the three different approaches, we look at three metrics: (a) clean validation accuracy; (b) robustness against PGD-20 validation adversarial examples; and (c) average of robustness and clean performance (((a)+(b))/2.) The results are summarized in Fig. 3. In the regimes where transfer learning is more common, adversarially robust transfer learning results in the best overall performance. Adversarially/Free training the target domain results in less robustness and validation accuracy compared to fine-tuning which highlights the importance of pre-training (Hendrycks et al., 2019). Note that in terms of computational resources required, the cost of fine-tuning on adversarial examples of the target domain is about k× our method since it requires generation of adversarial examples using k-step PGD attacks (we set k = 3). 4.2.1 TRAINING DEEPER NETWORKS ON TOP OF ROBUST FEATURE EXTRACTORS The basic transfer learning setting of section 4.1.1 only re-trains one layer for the new task. In section 4.1.1, when we transferred from the robust ImageNet to CIFAR-100, the natural training accuracy was 88.84%. Given the small number of trainable parameters left for the network (≈ 2048 × 100) and the fixed feature extractor, the network was not capable of completely fitting the training data. This means that there is potential to improve natural accuracy by learning more complex non-linear features and increasing the number of trainable parameters. To increase representation capacity and the number of trainable parameters, instead of training a 1- layer network on top of the feature extractor, we train a multi-layer perceptron (MLP) network on top of the robust feature extractor. To keep things simple and prevent bottle-necking, every hidden layer we add has 2048 neurons. We plot the training and validation accuracies on the natural examples and the robustness (i.e. PGD-20 validation accuracy) in Fig. 4 for various numbers of hidden layers. As can be seen, adding one layer is enough to achieve 100% training accuracy. However, doing so does not result in an increase in validation accuracy. To the contrary, adding more layers can result in a slight drop in validation accuracy due to overfitting. As illustrated, we can improve generalization using simple but effective methods such as dropout (Srivastava et al., 2014) (with probability 0.25) and batch-normalization (Ioffe & Szegedy, 2015). However, the most interesting behavior we observe in this experiment is that, as we increase the number of hidden layers, the robustness to PGD-20 attacks improves. Note, this seems to happen even when we transfer from a naturally trained ImageNet model. While for the case where we have no hidden layers, robustness is 0.00% on CIFAR100 when we use a naturally trained ImageNet model as source, if our MLP has 1, 2, 3, or 5 hidden layers, our robustness against PGD attacks would be 0.03%, 0.09%, 0.31% and 6.61%, respectively. 
This leads us to suspect that this behavior may be an artifact of vanishing gradients for adversary as the softmax loss saturates when the data is fit perfectly (Athalye et al., 2018). Therefore, for this case we change our robustness measure and use the CW attack (Carlini & Wagner, 2017) which will encounter fewer numerical issues because its loss function does not have a softmax component and does not saturate. Attacking the model from the natural source with CW-20 completely breaks the model and achieves 0.00% robustness. Most interestingly, attacking the model transferred from a robust source using the CW objective maintains robustness even when the number of hidden layers increases. 5 ANALYSIS: ROBUST FEATURE EXTRACTORS ARE FILTERS Our experiments suggest that the robustness of neural networks arises in large part from the presence of robust feature extractors. We have used this observation to transfer both robustness and accuracy between domains using transfer-learning. However, we have not yet fully delved into what it means to have a robust feature extractor. Through visualizations, Tsipras et al. (2018) studied how adversarial training causes the image gradients of neural networks to exhibit meaningful generative behavior. In other words, adversarial perturbations on hardened networks “look like” the class into which the image is perturbed. Given that optimization-based attacks build adversarial examples using the image gradient, we also visualize the image gradients of our transferred models to see if they exhibit the same generative behavior as adversarially trained nets. Fig. 5 plots the gradient of the loss w.r.t. the input image for models obtained by re-training only the last layer, and also for the case where we train MLPs on top of a robust feature extractor. The gradients for the transfer-learned models with a robust source are interpretable and “look like” the adversarial object class, while the gradients of models transferred from a natural source do not. This interpretatbility comes despite the fact that the source model was hardened against attacks on one dataset, and the transferred model is being tested on object classes from another. Also, we see that adding more layers on top of the feature extractor, which often leads to over-fitting, does not make gradients less interpretable. This latter observation is consistent with our observation that added layers preserve robustness(Fig. 4). These observations, together with the success of robust transfer learning, leads us to speculate that a robust model’s feature extractors act as a “filter” that ignores irrelevant parts of the image. Figure 5: Gradients of the loss w.r.t to input images for the CIFAR-100 transfer learning experiments of sections 4.1.1 & 4.2.1. The top row contains sample CIFAR-100 images. Other rows contain image gradients of the model loss. The second row is for a model transferred from a naturally trained ImageNet source. Rows 3-5 are for models transferred from a robust ImageNet source. These rows correspond to an MLP with 0 (row 3), 1 (row 4), and 2 (row 5) hidden layers on top of the robust feature extractor. The gradients in the last three rows all show interpretable generative behavior. 6 END-TO-END TRAINING WITHOUT FORGETTING As discussed in section 4, transfer learning can preserve robustness of the robust source model. However, it comes at the cost of decreased validation accuracy on natural examples compared to the case where we use a naturally trained source model. 
Consequently, there seems to be a trade-off between generalization and robustness based on the choice of the source model. For any given classifier, the trade-off between generalization and robustness is the subject of recent research (Tsipras et al., 2018; Zhang et al., 2019; Shafahi et al., 2019a). In this section, we intend to improve the overall performance of classifiers transferred from a robust source model by improving their generalization on natural images. To do so, unlike previous sections where we froze the feature extractor mainly to preserve robustness, we fine-tune the feature extractor parameters θ. Ideally, we should learn to perform well on the target dataset without catastrophically forgetting the robustness of the source model. To achieve this, we utilize lifelong learning methods. Learning without Forgetting (LwF) (Li & Hoiem, 2018) is a method for overcoming catastrophic forgetting. The method is based on distillation. In this framework, we train the target model with a loss that includes a distillation term from the previous model: min_{w,θ} l(z(x, θ), y, w) + λ_d · d(z(x, θ), z_0(x, θ∗_r)) (3) where, in our method, λ_d is the feature-representation similarity penalty, and d is some distance metric between the robust model’s feature representations z_0(x, θ∗_r) and the current model’s feature representations z(x, θ). Unlike the original LwF paper, which used the distillation loss of Hinton et al. (2015) and applied distillation to the logits, we simply choose d to be the ℓ2-norm and apply distillation to the penultimate layer (we do so because in Section 3 we found the source of robustness to be the feature extractors, an observation later reinforced by the empirical results in Section 4). Our loss is designed to make the feature representations of the source and target network similar, thus preserving the robust feature representations (Fig. 6). Ideally, z(x, θ) ≈ z(x, θ∗_r). To speed up training, given robust feature extractor parameters θ∗_r, we store z_0(x, θ∗_r) for the images of the target task and load this from memory (i.e. offline) instead of performing a forward pass through the robust source network online. Therefore, in the experiments related to LwF, we do not train with data augmentation because we have not pre-computed z(x_a, θ∗_r), where x_a is the augmented image. Empirically, we verified that d(z(x, θ∗_r), z(x_a, θ∗_r)) was not negligible (the analysis is in the supplementary). To improve performance, we follow a warm-start scheme and only train the fully connected parameters w early in training. We then cut the learning rate and continue fine-tuning both the feature extractor (θ) and w. In our experiments, we use a learning rate of 0.001, and the warm-start makes up half of the total training iterations. Starting from the pre-trained source model, we train for a total of 20,000 iterations with batch-size 128. The results with an adversarially trained CIFAR-100 model as source and CIFAR-10 as target are summarized in Table 4 (source code for the LwF-based experiments: https://github.com/ashafahi/RobustTransferLWF). As can be seen, having an LwF-type regularizer helps in maintaining robustness and also results in a considerable increase in validation accuracy. The trade-off between robustness and generalization can be controlled by the choice of λ_d. It seems that for some choices of λ_d, such as 0.1, robustness also increases. However, in hindsight, the increase in accuracy on PGD-20 adversarial examples is not solely due to improvement in robustness. It is due to the fact that the validation accuracy has increased and we have a better classifier overall. For easier comparisons, we have provided the transfer results without LwF at the bottom of Table 4.
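Before discussing these results further, the LwF-type objective of Eq. 3 can be sketched as follows, assuming the penultimate-layer features z_0(x, θ∗_r) of the robust source model have been pre-computed and stored for each training image as described above. The shapes and the value of λ_d are illustrative only.

```python
import torch
import torch.nn.functional as F

def lwf_loss(features, logits, targets, stored_robust_features, lambda_d=0.1):
    """LwF-type objective of Eq. 3.

    features:               z(x, theta), penultimate features of the model being fine-tuned
    logits:                 the classifier w applied to those features
    targets:                ground-truth labels y
    stored_robust_features: z_0(x, theta_r*), pre-computed from the frozen robust source model
    lambda_d:               feature-representation similarity penalty
    """
    task_loss = F.cross_entropy(logits, targets)                     # l(z(x, theta), y, w)
    # d(.,.) chosen as the l2 distance between penultimate-layer representations.
    distill_loss = (features - stored_robust_features).norm(p=2, dim=1).mean()
    return task_loss + lambda_d * distill_loss

# Illustrative shapes: batch of 4, 640-dim features, 10 classes.
feats = torch.randn(4, 640, requires_grad=True)
logits = torch.randn(4, 10, requires_grad=True)
y = torch.randint(0, 10, (4,))
z0 = torch.randn(4, 640)
loss = lwf_loss(feats, logits, y, z0, lambda_d=0.1)
loss.backward()
```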
Note that using LwF, we can keep the robustness of the source model and also achieve clean validation accuracy comparable to a model that uses naturally trained feature extractors. In the supplementary, we show that similar conclusions can be drawn for the split CIFAR-100 task. We demonstrated in our transfer experiments that our LwF-type loss can help decrease the generalization gap while preserving robustness. In this section, we assume that the source domain is the adversarial example domain of a dataset and the target domain is the clean example domain of the same dataset. This experiment can be seen as applying transfer learning from the adversarial example domain to the natural example domain while preventing forgetting of the adversarial domain. In the case where the source and target datasets are the same (transferring from a robust CIFAR-100 model to CIFAR-100), by applying our LwF-type loss, we can improve the generalization of robust models. Our results are summarized in Table 5. 7 CONCLUSION We identified the feature extractors of adversarially trained models as a source of robustness, and use this observation to transfer robustness to new problem domains without adversarial training. While transferring from a natural model can achieve higher validation accuracy in comparison to transferring from a robust model, we can close the gap and maintain the initial transferred robustness by borrowing ideas from the lifelong learning literature. The success of this method suggests that a robust feature extractor is effectively a filter that sifts out relevant components of an image that are needed to assign class labels. We hope that the insights from this study enable practitioners to build robust models in situations with limited labeled training data, or when the cost and complexity of adversarial training from scratch is untenable. Acknowledgements: Goldstein and his students were supported by the DARPA QED for RML program, the DARPA GARD program, and the National Science Foundation. A EXPERIMENT DETAILS A.1 LWF-BASED EXPERIMENTS In our LwF-based experiments, we use a batch-size of 128, a fixed learning rate of 1e-2, and fine-tune for an additional 20,000 iterations. The first 10,000 iterations are used for warm-start, during which we only update the final fully connected layer’s weights. During the remaining 10,000 iterations, we update all of the weights but do not update the batch-normalization parameters. A.2 IMAGENET TO CIFAR EXPERIMENTS When freezing the feature extractor and fine-tuning on adversarial examples, we train the last fully connected layer’s weights for 50 epochs using batch-size=128. We start with an initial learning rate of 0.01 and drop the learning rate to 0.001 at epoch 30. In the case of fine-tuning on adversarial examples, we generate the adversarial examples using a 3-step PGD attack with step-size 3 and a perturbation bound ε = 5. A.3 FREE TRAINING EXPERIMENTS In all of our free-training experiments where we train the u-ResNet-50, we train for 90 epochs using a batch-size of 128. The initial learning rate used is 0.1 and we drop it by a factor of 10 at epochs 30 and 60. We use a replay parameter m = 4 and perturbation bound ε = 5.
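As a rough illustration of the warm-start schedule in A.1, the helper below stages fine-tuning in two phases: first only the final fully connected layer is trainable, then everything except the batch-normalization parameters. This is a hypothetical sketch, not the authors' code; the module name "fc" and the use of named_modules are assumptions about how the network is organized.

```python
import torch.nn as nn

def set_trainable(model, phase):
    """Phase 1 (warm start): train only the final fully connected layer.
    Phase 2: train everything except batch-normalization parameters."""
    for name, module in model.named_modules():
        is_bn = isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d))
        for p in module.parameters(recurse=False):
            if phase == 1:
                p.requires_grad_(name.startswith("fc"))   # assumes the head is named "fc"
            else:
                p.requires_grad_(not is_bn)               # keep batch-norm parameters fixed

# Illustrative usage for a 20,000-iteration schedule:
#   set_trainable(model, phase=1)   # iterations 0-9,999: warm start (fc only)
#   set_trainable(model, phase=2)   # iterations 10,000-19,999: all but batch-norm
# Re-create the optimizer over [p for p in model.parameters() if p.requires_grad]
# whenever the set of trainable parameters changes.
```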
B THE DISTANCE BETWEEN FEATURE REPRESENTATIONS OF NATURAL IMAGES AND AUGMENTED IMAGES To speed up the LwF experiments, we did not use data augmentation during training. Instead of computing the robust feature representations on the fly, before starting training on the new target task, we passed the entire training data of the target task through the robust network and stored the feature representation vector. If we were doing data augmentation, we would have to pass the entire augmented training data through the network, which would be slow and memory intensive. Alternatively, we could use the robust feature representation of the non-augmented images instead. The latter would have been feasible if the distance between the robust feature representations of the non-augmented and augmented images were very small. However, as shown in fig 7, this quantity is not often negligible. C LOW-DATA REGIME TRANSFER LEARNING FROM IMAGENET TO CIFAR-10 In section 4.2, we illustrated that robust transfer learning is most beneficial where we have limited data (i.e., limited number of training data per class). We used CIFAR-100 as an illustrative target dataset. However, the overall results are not data set dependent. When we have limited number of training data, robust transfer learning results in the highest overall performance (see Fig. 8 for transferring from a robust ImageNet model to smaller instances of CIFAR-10). D LWF-BASED ROBUST TRANSFER LEARNING FOR SIMILAR SOURCE AND TARGET DATASETS In Table 6 we conduct LwF experiments on the split CIFAR-100 task which is more suited for transfer learning due to the similarities between the source and target datasets. In these situations, the LwF regularizer on the feature representations still works and can improve generalization without becoming vulnerable to adversarial examples. If we take the average performance of the robust classifiers on the split tasks (average of robust half CIFAR-100 and the LwF setting model for λd = 0.01) we get (63.32 + 64.96)/2 = 64.14% average validation accuracy and 20.42% average robustness which is comparable with the case that we had adversarially trained the entire CIFAR-100 dataset (Table 1). E IMPROVING GENERALIZATION OF THE CIFAR-10 ADVERSARIALLY TRAINED MODEL Similar to the case of improving the generalization for CIFAR-100, we use our LwF-based loss function to transfer from the robust CIFAR-10 domain to the natural CIFAR-10 domain. We summarize the results in Table 7.
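The check behind Appendix B can be sketched as follows: pass a batch of natural images and their augmented counterparts through the frozen robust feature extractor and measure the ℓ2 distance between the two sets of representations. The extractor and the augmentation below are toy placeholders used only to make the snippet runnable.

```python
import torch

@torch.no_grad()
def augmentation_feature_gap(feature_extractor, images, augment):
    """Return d(z(x, theta_r*), z(x_a, theta_r*)) for a batch: the l2 distance between
    robust features of natural images and of their augmented versions."""
    feature_extractor.eval()
    z_nat = feature_extractor(images)            # z(x, theta_r*)
    z_aug = feature_extractor(augment(images))   # z(x_a, theta_r*)
    return (z_nat - z_aug).norm(p=2, dim=1)      # one distance per image

# Illustrative example with a dummy extractor and a horizontal-flip "augmentation".
dummy_extractor = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 640))
x = torch.rand(8, 3, 32, 32)
gaps = augmentation_feature_gap(dummy_extractor, x, lambda imgs: torch.flip(imgs, dims=[3]))
print(gaps.mean())   # if this is not negligible, stored natural features are a poor proxy
```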
1. What is the main contribution of the paper regarding robust transfer learning? 2. What are the strengths and weaknesses of the paper's approach to robust transfer learning? 3. How does the reviewer assess the significance and novelty of the proposed approach compared to prior works? 4. What are the limitations of the paper's experimental analysis, according to the reviewer? 5. How could the authors improve their methodological framework and analysis of robust transfer learning, particularly regarding distillation and feature extraction layers?
Review
Review Summary ------- This paper addresses the problem of performing robust transfer learning. A first contribution of the paper is to compare robust and classic training with respect to the usual validation accuracy and robustness to adversarial attacks on the CIFAR task. Then, the same comparison is made on a transfer learning task. The transfer learning setting is then completed by studying transfer from ImageNet-based models, with particular attention to the low-data regime and to training deeper networks on top of the feature extractor. An analysis of robust features is provided, and finally the authors study the value of Learning without Forgetting strategies for providing robust transfer. The tendency is to obtain the best performance from robustly trained source models that have good validation accuracy. Overall ------ The paper presents a study of robust transfer learning that can be useful for practitioners who want to know what kind of results robust transfer learning can achieve. However, I feel that the results obtained are rather expected and the paper does not provide a methodological contribution that could help to develop robust transfer training. Comments --------- The results obtained in Sections 3, 4 and 6 are rather expected and similar. I think the paper could benefit from condensing these three sections into a single section where the results are summarized in one large table and two or three figures, for example; the complete set of results can then be reported in the supplementary section. Then, if the contribution of the paper is to focus on robust transfer learning including a Learning without Forgetting strategy, the authors should concentrate more on this part and analyze the learning behavior in greater depth. In particular, the combination of distillation and robust training is certainly interesting, and proposing a methodological framework for robust training in this context would certainly result in a more significant contribution. How to constrain the feature extraction layers, how to make use of them with distillation, and what additional constraints/additions can be made to the learning problem (3) to improve robust transfer are some important questions. So far, the contribution appears to me rather limited for ICLR. If we restrict attention to the experimental comparisons, they are limited to particular training setups and datasets with specific PGD attacks. The contribution would have been stronger if different types of adversarial attacks with different parameters had been studied and analyzed.
ICLR
Title Towards Learning Implicit Symbolic Representation for Visual Reasoning Abstract Visual reasoning tasks are designed to test a learning algorithm’s capability to infer causal relationships, discover object interactions, and understand temporal dynamics, all from visual cues. It is commonly believed that to achieve compositional generalization on visual reasoning, an explicit abstraction of the visual scene must be constructed; for example, object detection can be applied to the visual input to produce representations that are then processed by a neural network or a neuro-symbolic framework. We demonstrate that a simple and general self-supervised approach is able to learn implicit symbolic representations with general-purpose neural networks, enabling the end-to-end learning of visual reasoning directly from raw visual inputs. Our proposed approach “compresses” each frame of a video into a small set of tokens with a transformer network. The self-supervised learning objective is to reconstruct each image based on the compressed temporal context. To minimize the reconstruction loss, the network must learn a compact representation for each image, as well as capture temporal dynamics and object permanence from temporal context. We evaluate the proposed approach on two visual reasoning benchmarks, CATER and ACRE. We observe that self-supervised pretraining is essential to achieve compositional generalization for our end-to-end trained neural network, and our proposed method achieves on par or better performance compared to recent neuro-symbolic approaches that often require additional object-level supervision. 1 INTRODUCTION This paper investigates if an end-to-end trained neural network is able to solve challenging visual reasoning tasks (Zhang et al., 2021; Girdhar & Ramanan, 2019; Yi et al., 2019) that involve inferring causal relationships, discovering object relations, and capturing temporal dynamics. A prominent approach (Shamsian et al., 2020) for visual reasoning is to construct a structured and interpretable representation from the visual inputs, and then apply symbolic programs (Mao et al., 2019) or neural networks (Ding et al., 2021) to solve the reasoning task. Despite their appealing properties, such as being interpretable and easier to inject expert knowledge into the learning framework, it is practically challenging to determine what types of symbols to use and how to detect them reliably from visual data. In fact, the suitable symbolic representation for a single scene may differ significantly across different tasks: the representation for modeling a single human’s kinematics (e.g. with body parts and joints) is unlikely to be the same as that for modeling group social behaviors (e.g. each pedestrian can be viewed as a whole entity). With the success of unified neural frameworks for multi-task learning (Bommasani et al., 2021), it is desirable to have a unified input interface (e.g. raw pixels) and have the neural network learn to dynamically extract suitable representations for different tasks. However, how to learn distributed representation with a deep neural network that behaves and generalizes similarly to learning methods based on symbolic representation (Zhang et al., 2021) for visual reasoning remains an open problem. The key hypothesis we make in this paper is that a general-purpose neural network, such as Transformers (Vaswani et al., 2017), can be turned into an implicit symbolic concept learner with selfsupervised pre-training. 
For reasoning with image and video cues, the concepts are often organized as object-centric, as objects usually serve as the basic units in visual reasoning tasks. Our proposed approach is inspired by the success of self-supervised learning of object detectors with neural networks (Burgess et al., 2019; Locatello et al., 2020; Niemeyer & Geiger, 2021) and the emergence of object masks in self-supervised classification networks (Caron et al., 2021). It is also motivated by concept binding in neuroscience (Treisman, 1996; Roskies, 1999; Feldman, 2013) and in machine learning (Greff et al., 2020), where concept binding for raw visual inputs refers to the process of segregating and representing visual scenes into a collection of (distributed) concept representation, which can be composed and utilized to solve downstream recognition and reasoning tasks. The concepts are bound in an object-centric fashion, where attributes (e.g. colors, shapes, sizes) of the same objects are associated via dynamic information routing. Different from explicit symbolic representation, implicit symbolic representation via dynamic information binding in a neural network does not require predefining the concept vocabulary or the construction of concept classifiers. The implicit representation can also be “finetuned” directly on the target tasks, it does not suffer from the early commitment or loss of information issues which may happen when visual inputs are converted into symbols and frozen descriptors (e.g. via object detection and classification). Our proposed representation learning framework, implicit symbolic concept learner (IS-CL) consists of two main components: first, a single image is compressed into a small set of tokens with a neural network. This is achieved by a vision transformer (ViT) network (Dosovitskiy et al., 2020) with multiple “slot” tokens (e.g. the [CLS] token in ViT) that attend to the image inputs. Second, the slot tokens are provided as context information via a temporal transformer network for other images in the same video, where the goal is to perform video reconstruction via the masked autoencoding (He et al., 2022) objective with the temporal context. Despite its simplicity, the reconstruction objective motivates the emergence of two desired properties in the pretrained network: first, to provide context useful for video reconstruction, the image encoder must learn a compact representation of the scene with its slot tokens. Second, to utilize the context cues, the temporal transformer must learn to associate objects and their implicit representation across time (“implicit tracking”), and also capture the notion of object permanence – the existence of an object even when it is occluded from the visual observations. One intuitive way to view our proposed IS-CL framework is from the perspective of Slot Attention model by Locatello et al. (2020): Instead of using a shared slot attention module to iteratively refine the encoded tokens, our image encoder is implemented as a stack of Transformer encoder layers with dedicated “slot” tokens. This generalization enables us to directly transfer the pretrained implicit symbolic representation encoded by expressive ViT backbones directly to downstream reasoning tasks. To validate our proposed framework, we conduct extensive ablation experiments on the Compositional Actions and TEmporal Reasoning (CATER) (Girdhar & Ramanan, 2019) benchmark and the Abstract Causal REasoning (ACRE) (Zhang et al., 2021) benchmark. 
We observe that the self-supervised representation learned by IS-CL indeed behave likes the symbolic representation, in the sense that when finetuned on CATER and ACRE, our learned representation achieves competitive or better generalization performance when compared with the frameworks that use explicit object-centric representation. Intriguingly, we observe that the network inductive biases, such as the number of slot tokens per image, play an important role on transfer learning performance: On both datasets, we observe that a small number of slot tokens per image (1 for CATER and 4 for ACRE) lead to the best transfer learning performance on visual reasoning tasks. To the best of our knowledge, our proposed framework is the first to achieve competitive performance on CATER and ACRE without the need to construct explicit symbolic representation from visual inputs. In summary, our paper makes the following two main contributions: First, unlike common assumptions made by neuro-symbolic approaches, we demonstrate that compositional generalization for visual reasoning can be achieved with end-to-end neural networks and implicit symbolic representations. Second, we propose a self-supervised representation learning framework IS-CL, to learn implicit symbolic representation with general-purpose Transformer neural networks. As a byproduct, we show that the learned representation achieves competitive performance on the challenging CATER and ACRE visual reasoning benchmarks. The code and pretrained checkpoints will be released upon paper acceptance. 2 RELATED WORK Neural Network Pretraining. We have collectively made huge progress towards building unified learning frameworks for a wide range of tasks, including natural language understanding (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Liu et al., 2019), visual recognition (Kokkinos, 2017; Kendall et al., 2018; Zamir et al., 2018; Ghiasi et al., 2021), and multimodal perception (Jaegle et al., 2021; Sun et al., 2019; Likhosherstov et al., 2021; Girdhar et al., 2022; Alayrac et al., 2022). As this pretraining-adaptation learning paradigm gains momentum, researchers at Stanford (Bommasani et al., 2021) have even coined the term “foundation models” to refer to these pretrained neural networks. Unfortunately, most of the “foundation models” for visual data focus on perception tasks, such as object classification, detection, or image captioning. Despite improved empirical performance on the visual question answering task (Hudson & Manning, 2019; Antol et al., 2015; Zellers et al., 2019), visual reasoning remains challenging when measured on more controlled benchmarks that require compositional generalization and causal learning (Zhang et al., 2021; Girdhar & Ramanan, 2019; Chen et al., 2022). It is commonly believed that symbolic or neurosymbolic methods (Mao et al., 2019; Yi et al., 2018; Lake & Baroni, 2018; Andreas, 2019), as opposed to the general-purpose neural networks, are required to achieve generalizable visual reasoning Yi et al. (2019); Zhang et al. (2021). To our knowledge, our proposed framework is the first to demonstrate the effectiveness of implicit symbolic representation on these visual reasoning benchmarks. Self-supervised Learning from Images and Videos. Self-supervised learning methods aim to learn strong visual representations from unlabelled datasets using pre-text tasks. Pre-text tasks were initially hand-designed to incorporate visual priors (Doersch et al., 2015; Zhang et al., 2016; Caron et al., 2018). 
Subsequent works used contrastive formulations which encourage different augmented views of the same input to map to the same feature representation, whilst preventing the model from collapsing to trivial solutions (Oord et al., 2018; Chen et al., 2020; He et al., 2020; Grill et al., 2020; Akbari et al., 2021). Our work is most related to masked self-supervised approaches. Early works in this area used stacked autoencoders (Vincent et al., 2010) or inpainting tasks (Pathak et al., 2016) with convolutional networks. These approaches have seen a resurgence recently, inspired by BERT (Devlin et al., 2018) and vision transformers (Dosovitskiy et al., 2020). BEiT (Bao et al., 2022) encodes masked patches with discrete variational autoencoders and predicts these tokens. Masked Autoencoders (MAE) (He et al., 2022), on the other hand, simply regress to the pixel values of these tokens. Masked Feature Prediction (Wei et al., 2022) (MFP) also regresses to pixelwise targets, but feature transformations of them as opposed to the direct RGB values as MAE. MAE and MFP have also both been extended to video too (Tong et al., 2022; Feichtenhofer et al., 2022), and are shown to be effective in object detection Li et al. (2022). The video reconstruction objective is also based on masked autoencoding, however, the goal is to learn a compact “implicit symbolic” representation for reasoning as opposed to generic visual descriptors for recognition tasks. We confirm empirically that the proposed method outperforms MAE and VideoMAE pretraining methods by large margins on the CATER and ACRE benchmarks. Object-centric Representation for Reasoning. Most of the existing neuro-symbolic (Mao et al., 2019; Yi et al., 2018) and neural network (Ding et al., 2021) based visual reasoning frameworks require a “preprocessing” stage of symbolic representation construction, which often involves detecting and classifying objects and their attributes from image or video inputs. Our proposed framework aims to investigate the effectiveness of single-stage, end-to-end neural networks for visual reasoning, which is often more desirable than the two-stage frameworks for scenarios that require transfer learning or multi-task learning. In order to obtain the object-centric, or symbolic representation in the preprocessing stage, one can rely on a supervised object detector (Mao et al., 2019), such as Mask R-CNN (He et al., 2017). An alternative approach is to employ self-supervised objectives and learn low-level features that are correlated with objects, such as textures (Geirhos et al., 2018; Hermann et al., 2020; Olah et al., 2017), or objects themselves (Burgess et al., 2019; Locatello et al., 2020; Caron et al., 2021). In practice, supervised or self-supervised approaches for object detection and object-centric representation learning may suffer from the lack of supervised annotations, or the noisy object detection results. For example, Zhang et al. (2022) observed that object-centric representation is beneficial for transfer learning to temporal event classification only when the ground truth object detections are used. 3 METHOD We now introduce the proposed implicit symbolic concept learning (IS-CL) framework. We follow the pretraining and transfer learning paradigm: During pretraining (Figure 2), we task a shared image encoder to output patch-level visual embeddings along with a small set of slot tokens that compress the image’s information. 
The pretraining objective is masked autoencoding (MAE) for unlabeled video frames, namely reconstructing the pixel values for a subset of “masked” image patches, given the “unmasked” image patches as context. Compared to the standard MAE for images (He et al., 2022), the image decoder has access to two additional types of context information: (1) The encoded patch embedding from the unmasked image patches of the neighboring frames; (2) The encoded slot tokens from a subset of context frames. The context information is encoded and propagated by a temporal transformer network. To successfully reconstruct a masked frame, the image encoder must learn a compact representation of the full image via the slot tokens, and the temporal transformer has to learn to capture object permenance and temporal dynamics. During transfer learning (Figure 3), the image decoder can be discarded, and only the image encoder and temporal transformer need to be transferred. The inputs to the temporal transformer are the slot tokens encoded from individual, unmasked video frames. We consider the full finetuning strategy where the weights of both the newly added task decoder (e.g. a linear classifier), and the pretrained image and temporal transformers are updated during transfer learning. Image Encoder: We adopt the Vision Transformer (ViT) backbone to encode each image independently: An input image is broken into non-overlapping patches of 16⇥16 pixels, which are then linearly projected into patch embeddings as inputs to the transformer encoder. Spatial information is preserved by sinusoidal positional encodings. We use the standard ViT-Base configuration which has 12 Transformer encoder layers. Each layer has hidden size of 768, MLP projection size of 3072, and 12 attention heads. During pretraining, a subset of video frames are spatially masked randomly given a masking ratio. As illustrated in Figure 2, only the unmasked image patches are fed into the ViT-B encoder. For context frames and during transfer learning, all image patches are provided as inputs to the image encoder. Slot Tokens: In the seminal work by Locatello et al. (2020), slot tokens are defined as the representational bottleneck in an image autoencoder, where the slot representations are iteratively updated with a GRU after the slots attend to the visual inputs in each iteration. We borrow their terminology, and also use slots to denote the representational bottleneck which we hope to encode symbolic, or object-centric information. We generalize their slot update rules by: (1) iteratively updating the input representation from raw pixels to visual representation encoded by the Transformer encoder (ViT); (2) replacing cross-attention with multi-headed self-attention; (3) using MLP layers with untied weights to update the intermediate slot representation as opposed to a shared GRU network. These two modifications allow us to implement “slot attention” directly with a Transformer encoder, simply by prepending slot tokens as additional inputs to the encoder (similar to [CLS] tokens). The initial slot embeddings at the input of the visual encoder are implemented as a learnable embedding lookup table. To compare the effectiveness of different methods to aggregate “slot” information, we also explore single-headed soft attention and Gumbel-max attention as used by Xu et al. (2022). 
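To make the slot-token mechanism concrete, the sketch below shows an image encoder that prepends learnable slot tokens to the patch embeddings and processes both with a standard Transformer encoder, so the slots can aggregate image information through self-attention. The dimensions are deliberately smaller than the ViT-Base configuration described above, the positional encoding is a simplified learnable table, and all names are illustrative rather than the exact architecture used in our experiments.

```python
import torch
import torch.nn as nn

class SlotTokenEncoder(nn.Module):
    """Transformer image encoder with learnable 'slot' tokens prepended to patch tokens."""

    def __init__(self, num_slots=4, dim=256, depth=4, heads=8,
                 image_size=64, patch_size=16, in_chans=3):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))          # simplified positional encoding
        self.slot_tokens = nn.Parameter(torch.randn(1, num_slots, dim) * 0.02)   # learnable slot embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_slots = num_slots

    def forward(self, images):                        # images: (B, 3, H, W)
        patches = self.patch_embed(images)            # (B, dim, H/16, W/16)
        patches = patches.flatten(2).transpose(1, 2) + self.pos_embed   # (B, N, dim)
        slots = self.slot_tokens.expand(images.size(0), -1, -1)         # (B, S, dim)
        tokens = self.encoder(torch.cat([slots, patches], dim=1))       # joint self-attention
        slot_out, patch_out = tokens[:, :self.num_slots], tokens[:, self.num_slots:]
        return slot_out, patch_out                    # slots summarize the frame; patches stay per-location

encoder = SlotTokenEncoder()
slots, patches = encoder(torch.randn(2, 3, 64, 64))
print(slots.shape, patches.shape)   # torch.Size([2, 4, 256]) torch.Size([2, 16, 256])
```

During pretraining, the slot outputs of the context frames would then be passed, together with the unmasked patch tokens of the query frames, to the temporal transformer described next.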
Temporal Transformer: To propagate temporal information across frames, we use another transformer encoder (with fewer layers than the ViT-B image encoder) which takes the tokens encoded by the image encoder as its inputs. During pretraining, the slot tokens from context frames, along with the unmasked patch tokens from the query frames are concatenated together and fed into the temporal transformer. For each query image, the temporal transformer outputs its corresponding unmasked patch tokens contextualized from both the unmasked patches from neighboring query frames and the slot tokens from context frames. The contextualized patches are then fed into the image decoder to compute the reconstruction loss. To preserve temporal position information, we use learned positional embeddings (implemented with an embedding lookup table). During transfer learning, the temporal transformer takes the slot tokens encoded by the image encoder as its inputs. Putting the image encoder and the temporal transformer together, the overall video encoder used for transfer learning can be viewed as an factorized space-time encoder proposed by Arnab et al. (2021). It is more parameter-efficient than the vanilla video vision transformer used by Tong et al. (2022). Image Decoder for Pre-training: We use the same image decoder as in (He et al., 2022). As illustrated in Figure 2, the query images are decoded independently given the contextualized unmasked patch tokens. The image decoder is implemented with another transformer, where masked patch tokens are appended to the contextualized unmasked patch tokens as inputs to the image decoder. Sinusoidal positional encodings are used to indicate the spatial locations of individual patch tokens. We use the same number of layers, hidden size, and other hyperparameters as recommended by He et al. (2022). For pre-training purpose, we use mean squared error to measure the distance between the original query image patches and the reconstructed patches. Transfer Learning: As the goal of pre-training is to learn the slot tokens which we hope to compress an input image into several implicitly symbolic tokens, we only ask the image encoder to generate the slot tokens during finetuning (Figure 3), which are fed to the temporal transformer as its inputs. We then average pool the output tokens of the temporal transformer and add a task-specific decoder to make predictions. Both benchmarks used in our experiments can be formulated as multi-class classification: For CATER, the goal is to predict the final location of the golden snitch (Figure 4 top), where the location is quantized into one of the 6⇥6 positions; For ACRE, the goal is to predict whether the platform will activate, not activate, or undetermined given a query scenario (Figure 4 bottom). We hence use linear classifiers as the task-specific decoders and the standard softmax cross-entropy for transfer learning. 4 EXPERIMENTS We present results on CATER (Girdhar & Ramanan, 2019) and ACRE (Zhang et al., 2021). 4.1 EXPERIMENTAL SETUP Benchmarks: In the classic “shell game”, a ball is placed under a cup and shuffled with other empty cups on a flat surface; then, the objective is to determine which cup in the final shuffled configuration contains the ball. Inspired by this, CATER is a dataset composed of videos of CLEVR (Johnson et al., 2017) objects as they move around the scene. 
A special golden ball, called the “snitch”, is present in each video, and the associated reasoning task is to determine the snitch’s position at the final frame. Object locations in the CATER dataset are denoted by positions on an invisible 6-by-6 grid; therefore, in essence, the CATER task boils down to a 36-way classification problem. Solving this task is complicated by the fact that larger objects can visually occlude smaller ones, and certain objects can be picked up and placed down to explicitly cover other objects; when an object is covered, it changes position in consistence with the larger object that covers it. Therefore, in order to solve the task successfully, a model must learn to reason not only about objects and movement, but also about object permanence, long-term occlusions, and recursive covering relationships. The CATER dataset features a split where the camera is statically fixed to a particular angle and position throughout the videos, as well as a moving camera split where the viewing angle is able to change over time. We use the static split for evaluation. Each video has 300 frames. A visualization of a CATER video and the associated snitch localization task is shown in Figure 4 (top). The ACRE dataset tests a model’s ability to understand and discover causal relationships. The construction of the dataset is motivated by the Blicket experiment in developmental psychology (Gopnik & Sobel, 2000), where there is a platform as well as many distinct objects, some of which contain the “Blicketness” property. When at least one object with the “Blicketness” property is placed on the platform, music will be played; otherwise, the platform will maintain silence. Given a few context demonstrations of different object combinations, as well as the resulting effect, young children have been shown to successfully infer which objects contain the “Blicketness” property, and which combinations would cause the platform to play music. In ACRE, the platform is represented by a large pink block that either glows or remains dim depending on the combination of CLEVR objects placed on it. Given six evidence frames of objects placed on the platform, the objective of the reasoning task is to determine the effect a query frame, containing a potentially novel object combination, would have on the platform. Possible answers include lighting up the platform, keeping the platform dim, or unable to be determined with the given evidence frames. A visualization of an example ACRE sample is shown in the bottom row of Figure 4 (bottom). Pretraining data: We use the unlabeled videos from the training and validation splits of the CATER dataset for pretraining. Both the static and moving camera splits are used, which contains 9,304 videos in total. In our experiments, we observe that ACRE requires higher resolution inputs during pretraining and finetuning. Our default preprocessing setup is to randomly sample 32 frames of 64⇥64 for pretraining checkpoints to be transferred to CATER, and 16 frames of 224⇥224 for pretraining checkpoints to be transferred to ACRE. The randomly sampled frames are sorted to preserve the arrow of time information. No additional data augmentations are performed. Transfer learning: For CATER, we evaluate on the static split which has 3,065 training, 768 validation, and 1645 test examples. We select the hyperparameters based on the validation performance, then use both training and validation data to train the model to be evaluated on the test split. 
By default, we use 100 randomly sampled frames of 64×64 during training, and 100 uniformly sampled frames of stride 3 during evaluation. For ACRE, we explore all three splits, all of which contain 24,000 training, 8,000 validation, and 8,000 test examples. We again use the validation set to select hyperparameters and use both training and validation to obtain the models evaluated on the test split. We use all seven frames of 224×224 during training and evaluation. Default hyperparameters: We use the Adam optimizer for pretraining at a learning rate of 10^-3, and the AdamW optimizer for transfer learning at a learning rate of 5 × 10^-5. The pretraining checkpoints are trained from scratch for 1,000 epochs at a batch size of 256. For transfer learning, we finetune the pretrained checkpoints for 500 epochs at a batch size of 512. All experiments are performed on TPUs with 32 cores. Below we study the impact of several key model hyperparameters. 4.2 ABLATION STUDY We use CATER for the ablation study in Table 1, and reuse the optimal hyperparameters in the ACRE experiments. The impact of the number of slot tokens for ACRE is studied separately in Table 2. Masking ratio: Contrary to the large masking ratio employed in vanilla MAE, we found that the optimal masking ratio was 37.5% in terms of downstream CATER accuracy. This is perhaps due to the fact that CATER is designed to test “compositional generalization”, and so the spatial context provides less information than in natural images and video. Number of Total Frames and Context Frames: We also study the impact of the number of frames the implicit symbolic concept learner is pretrained on, and find the best performance with 32 frames. Fixing the total number of pretraining frames, we then ablate over the number of context frames, which are the frames from which slot representations are generated. When 0 context frames are used, we essentially utilize only patch-level representations to perform reconstruction with the temporal transformer (simulating a per-frame MAE followed by a temporal transformer). We find that the best performance is achieved with 8 context frames, which balances the number of slot representations with patch-level representations. Number of Slot Tokens: Another useful ablation is on the impact of the number of slots used for CATER and ACRE. In CATER, we find that only 1 slot token per frame is enough to solve the reasoning task. We believe that this may be due to how the reasoning objective of CATER is designed: to successfully perform snitch localization, the model need only maintain an accurate prediction of where the snitch actually or potentially is, and can ignore more detailed representations of other objects in the scene. Under the hypothesis that the slot tokens represent symbols, perhaps the singular slot token is enough to contain the snitch location. On the other hand, when ablating over the number of tokens for the ACRE task (Table 3), we find that a higher number of tokens is beneficial for reasoning performance. This can potentially be explained by the need to model multiple objects across evidence frames in order to solve the final query; under our belief that slot tokens are encoding symbols, multiple slots may be needed in order to achieve the best final performance. Slot Pooling Layer and Method: We ablate over which layer to pool over to generate the slot tokens. The patch tokens are discarded after the pooling layer, and only the slot tokens are further processed by the additional Transformer encoder layers.
As expected, it is desirable to use all image encoder layers to process both slot and patch tokens. Additionally, we also study the impact of the slot pooling method, and observe that adding single-headed soft attention or Gumbel-max attention is outperformed by simply using the slot tokens directly. 4.3 COMPARISON TO THE STATE-OF-THE-ART Table 4 compares the results of IS-CL against other state-of-the-art models on CATER snitch localization. We also compare IS-CL on ACRE against other existing models in Table 5. We pretrain MAE and VideoMAE ourselves on the same pretraining dataset and search for their corresponding optimal hyperparameters. We observe that the spacetime ViViT used by VideoMAE leads to collapsed training, so we modified it to use a factorized encoder. The other results are cited from the corresponding publications. IS-CL achieves the best performance among the approaches that do not depend on explicit object-centric representations, and it also achieves overall state-of-the-art performance on the comp and iid splits of ACRE. 5 CONCLUSION AND FUTURE WORK In this work we propose the implicit symbolic concept learner (IS-CL) framework, which trains a neural network end-to-end to solve complex visual reasoning tasks without explicitly constructing an object-centric representation. IS-CL learns such implicit symbolic representations as slot embeddings in a pretraining step, through a self-supervised video reconstruction objective via masking. We observe the exciting result that the learned representations behave like their symbolic counterparts when measured by compositional generalization performance on the CATER and ACRE benchmarks. Future work includes probing experiments to understand the information encoded by the slot tokens, and applying IS-CL to large-scale natural image and video datasets.
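As a concrete illustration of the transfer-learning readout described in the method section (per-frame slot tokens fed to the temporal transformer, average pooling, and a linear classifier trained with softmax cross-entropy), the following is a minimal PyTorch-style sketch. It is not the released implementation: the module names, depths, and the maximum-frame constant are illustrative assumptions; only the 36-way CATER output, the mean pooling, and the 5 × 10^-5 AdamW finetuning learning rate come from the text above.

```python
import torch
import torch.nn as nn


class TemporalReadout(nn.Module):
    """Temporal transformer over per-frame slot tokens, mean-pooled into a linear classifier."""

    def __init__(self, dim=768, depth=4, heads=12, num_classes=36, max_frames=512):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=depth)
        self.time_embed = nn.Embedding(max_frames, dim)  # learned temporal position table
        self.classifier = nn.Linear(dim, num_classes)    # task-specific linear decoder

    def forward(self, slot_tokens):
        # slot_tokens: (B, T, S, D) = S slot tokens from each of T frames.
        B, T, S, D = slot_tokens.shape
        t = torch.arange(T, device=slot_tokens.device)
        x = slot_tokens + self.time_embed(t)[None, :, None, :]  # add temporal positions
        x = self.temporal(x.reshape(B, T * S, D))                # contextualize across frames
        return self.classifier(x.mean(dim=1))                    # average pool, then classify


# Toy usage: 8 frames with 1 slot each (the CATER setting) and a 36-way snitch location head.
model = TemporalReadout(depth=2)
slots = torch.randn(2, 8, 1, 768)      # stand-in for the image encoder's slot tokens
logits = model(slots)                  # (2, 36)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 36, (2,)))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # finetuning learning rate above
loss.backward()
optimizer.step()
print(logits.shape, float(loss))
```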
1. What is the focus and contribution of the paper regarding self-supervised learning for compositional scene representation? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to capture temporal dynamics and encourage object discovery? 3. How does the reviewer assess the quality, clarity, novelty, and reproducibility of the paper's content? 4. What are the differences and similarities between the presented approach and prior works such as ALOE, especially in terms of their use of representations and object-centric models? 5. Do you have any concerns or suggestions regarding the paper's experiments, particularly in expanding beyond synthetic datasets and exploring more diverse real-world data?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper A self-supervised training approach over video is proposed for learning compositional scene representations. The paper explores the approach on the CATER and ACRE datasets (video versions of CLEVR), and shows its benefits in the domain of visual reasoning and question answering over these videos. Strengths And Weaknesses Strengths: Self-supervised: The approach is self-supervised and so doesn’t require annotations, unlike alternative approaches for building compositional scene representations such as object detectors. Dynamics for object-centered representations: The approach encourages temporal dynamics to be captured in the representations, by using a self-supervised approach over videos. This is an important and still underexplored signal in the domain of self-supervised compositional learning that could greatly help discover objects. Compositional Generalization: The approach explores and shows good results on compositional and systematic generalization, which gives a good indication that the representations learned indeed disentangle the dimensions needed for the downstream reasoning tasks. Extensive set of experiments on CLEVR-based data: see the quality point below. Weaknesses: Experiments on synthetic datasets only: The model is explored on CLEVR-based datasets only. While CLEVR-based data is definitely suitable for exploring the presented approach, I would strongly encourage the authors to explore the approach on new synthetic multi-object datasets with greater realism and diversity. Clarity, Quality, Novelty And Reproducibility Quality: The paper has concrete motivation (the benefits of self-supervised approaches compared to stronger object-attribute annotations, and the potential of temporal dynamics to encourage object discovery) and explores an important domain. While it would be good to experiment on more datasets, the experiments presented are extensive, and explore multiple axes of compositional generalization, performance on downstream reasoning tasks, and sensitivity to variations and ablations. Clarity: The paper is well-written, the idea is clearly presented, and it is accompanied by useful diagrams and visualizations. Novelty: The novelty of the idea is a bit limited, as there was a prior paper called ALOE (Attention over learned object embeddings enables complex visual reasoning) that, like the submitted paper, also used a self-supervised approach with a masked loss over frames and a transformer architecture, and it explores both CATER and ACRE, as done here, as well as CLEVRER. There is a difference between the approaches in that ALOE uses representations extracted from MONet while the approach here uses ViT, but from prior experience with working on the CLEVR dataset in particular, I think prior models have shown that, with attention, they can quite straightforwardly manage to pinpoint specific objects from the raw 2D image representation, when coupled with the right losses and overall model and settings, so the new component in the paper doesn’t substantially contribute to addressing an unsolved technical challenge. ViT for object-centric on CLEVR vs. real-world: This is especially straightforward given that the paper explores the model over the CLEVR dataset, since they use ViT tokens to represent slots, and in CLEVR, with the right resolution, they could correspond approximately 1:1 to CLEVR objects – since they cover local convex regions, compared e.g.
to more complicated objects or to non-object real-world regions such as sky etc. ViT vs MONet/sparser approaches: It’s unclear to me whether ViT would be the ideal approach for discovering objects on more general data, since considering each slot on a dense 2D map to be an “object” leads to too many false-positive objects, and it may struggle to find good correspondences on other datasets. This is a disadvantage compared to MONet as used in ALOE, and there are also other object-centered models that discover sparser, more compact representations, such as Slot Attention, SAVi (which extends Slot Attention to video and explores more diverse video datasets and also a bit of real-world robotic data), and the GroupViT model, which works very well on diverse real-world images. Claim correctness in introduction: Furthermore, I find the conceptual distinction the paper tries to make in the introduction between ALOE, which uses MONet, and this paper, which uses ViT, not compelling: the paper says the former is based on pre-processed extracted objects while the latter trains the ViT together with the model, but MONet is an unsupervised approach too, and one could think of training it together with the frame prediction explored in ALOE. Results: The quantitative results of the paper are in line with prior works but do not surpass them, and in particular do not surpass ALOE. Combined with the novelty issue relative to ALOE, this is a main weakness of the paper.
ICLR
Title Towards Learning Implicit Symbolic Representation for Visual Reasoning Abstract Visual reasoning tasks are designed to test a learning algorithm’s capability to infer causal relationships, discover object interactions, and understand temporal dynamics, all from visual cues. It is commonly believed that to achieve compositional generalization on visual reasoning, an explicit abstraction of the visual scene must be constructed; for example, object detection can be applied to the visual input to produce representations that are then processed by a neural network or a neuro-symbolic framework. We demonstrate that a simple and general self-supervised approach is able to learn implicit symbolic representations with general-purpose neural networks, enabling the end-to-end learning of visual reasoning directly from raw visual inputs. Our proposed approach “compresses” each frame of a video into a small set of tokens with a transformer network. The self-supervised learning objective is to reconstruct each image based on the compressed temporal context. To minimize the reconstruction loss, the network must learn a compact representation for each image, as well as capture temporal dynamics and object permanence from temporal context. We evaluate the proposed approach on two visual reasoning benchmarks, CATER and ACRE. We observe that self-supervised pretraining is essential to achieve compositional generalization for our end-to-end trained neural network, and our proposed method achieves on par or better performance compared to recent neuro-symbolic approaches that often require additional object-level supervision. 1 INTRODUCTION This paper investigates if an end-to-end trained neural network is able to solve challenging visual reasoning tasks (Zhang et al., 2021; Girdhar & Ramanan, 2019; Yi et al., 2019) that involve inferring causal relationships, discovering object relations, and capturing temporal dynamics. A prominent approach (Shamsian et al., 2020) for visual reasoning is to construct a structured and interpretable representation from the visual inputs, and then apply symbolic programs (Mao et al., 2019) or neural networks (Ding et al., 2021) to solve the reasoning task. Despite their appealing properties, such as being interpretable and easier to inject expert knowledge into the learning framework, it is practically challenging to determine what types of symbols to use and how to detect them reliably from visual data. In fact, the suitable symbolic representation for a single scene may differ significantly across different tasks: the representation for modeling a single human’s kinematics (e.g. with body parts and joints) is unlikely to be the same as that for modeling group social behaviors (e.g. each pedestrian can be viewed as a whole entity). With the success of unified neural frameworks for multi-task learning (Bommasani et al., 2021), it is desirable to have a unified input interface (e.g. raw pixels) and have the neural network learn to dynamically extract suitable representations for different tasks. However, how to learn distributed representation with a deep neural network that behaves and generalizes similarly to learning methods based on symbolic representation (Zhang et al., 2021) for visual reasoning remains an open problem. The key hypothesis we make in this paper is that a general-purpose neural network, such as Transformers (Vaswani et al., 2017), can be turned into an implicit symbolic concept learner with selfsupervised pre-training. 
For reasoning with image and video cues, the concepts are often organized as object-centric, as objects usually serve as the basic units in visual reasoning tasks. Our proposed approach is inspired by the success of self-supervised learning of object detectors with neural networks (Burgess et al., 2019; Locatello et al., 2020; Niemeyer & Geiger, 2021) and the emergence of object masks in self-supervised classification networks (Caron et al., 2021). It is also motivated by concept binding in neuroscience (Treisman, 1996; Roskies, 1999; Feldman, 2013) and in machine learning (Greff et al., 2020), where concept binding for raw visual inputs refers to the process of segregating and representing visual scenes into a collection of (distributed) concept representation, which can be composed and utilized to solve downstream recognition and reasoning tasks. The concepts are bound in an object-centric fashion, where attributes (e.g. colors, shapes, sizes) of the same objects are associated via dynamic information routing. Different from explicit symbolic representation, implicit symbolic representation via dynamic information binding in a neural network does not require predefining the concept vocabulary or the construction of concept classifiers. The implicit representation can also be “finetuned” directly on the target tasks, it does not suffer from the early commitment or loss of information issues which may happen when visual inputs are converted into symbols and frozen descriptors (e.g. via object detection and classification). Our proposed representation learning framework, implicit symbolic concept learner (IS-CL) consists of two main components: first, a single image is compressed into a small set of tokens with a neural network. This is achieved by a vision transformer (ViT) network (Dosovitskiy et al., 2020) with multiple “slot” tokens (e.g. the [CLS] token in ViT) that attend to the image inputs. Second, the slot tokens are provided as context information via a temporal transformer network for other images in the same video, where the goal is to perform video reconstruction via the masked autoencoding (He et al., 2022) objective with the temporal context. Despite its simplicity, the reconstruction objective motivates the emergence of two desired properties in the pretrained network: first, to provide context useful for video reconstruction, the image encoder must learn a compact representation of the scene with its slot tokens. Second, to utilize the context cues, the temporal transformer must learn to associate objects and their implicit representation across time (“implicit tracking”), and also capture the notion of object permanence – the existence of an object even when it is occluded from the visual observations. One intuitive way to view our proposed IS-CL framework is from the perspective of Slot Attention model by Locatello et al. (2020): Instead of using a shared slot attention module to iteratively refine the encoded tokens, our image encoder is implemented as a stack of Transformer encoder layers with dedicated “slot” tokens. This generalization enables us to directly transfer the pretrained implicit symbolic representation encoded by expressive ViT backbones directly to downstream reasoning tasks. To validate our proposed framework, we conduct extensive ablation experiments on the Compositional Actions and TEmporal Reasoning (CATER) (Girdhar & Ramanan, 2019) benchmark and the Abstract Causal REasoning (ACRE) (Zhang et al., 2021) benchmark. 
We observe that the self-supervised representation learned by IS-CL indeed behaves like a symbolic representation, in the sense that when finetuned on CATER and ACRE, our learned representation achieves competitive or better generalization performance when compared with the frameworks that use explicit object-centric representation. Intriguingly, we observe that the network inductive biases, such as the number of slot tokens per image, play an important role in transfer learning performance: On both datasets, we observe that a small number of slot tokens per image (1 for CATER and 4 for ACRE) leads to the best transfer learning performance on visual reasoning tasks. To the best of our knowledge, our proposed framework is the first to achieve competitive performance on CATER and ACRE without the need to construct an explicit symbolic representation from visual inputs. In summary, our paper makes the following two main contributions: First, unlike common assumptions made by neuro-symbolic approaches, we demonstrate that compositional generalization for visual reasoning can be achieved with end-to-end neural networks and implicit symbolic representations. Second, we propose a self-supervised representation learning framework, IS-CL, to learn implicit symbolic representations with general-purpose Transformer neural networks. As a byproduct, we show that the learned representation achieves competitive performance on the challenging CATER and ACRE visual reasoning benchmarks. The code and pretrained checkpoints will be released upon paper acceptance. 2 RELATED WORK Neural Network Pretraining. We have collectively made huge progress towards building unified learning frameworks for a wide range of tasks, including natural language understanding (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Liu et al., 2019), visual recognition (Kokkinos, 2017; Kendall et al., 2018; Zamir et al., 2018; Ghiasi et al., 2021), and multimodal perception (Jaegle et al., 2021; Sun et al., 2019; Likhosherstov et al., 2021; Girdhar et al., 2022; Alayrac et al., 2022). As this pretraining-adaptation learning paradigm gains momentum, researchers at Stanford (Bommasani et al., 2021) have even coined the term “foundation models” to refer to these pretrained neural networks. Unfortunately, most of the “foundation models” for visual data focus on perception tasks, such as object classification, detection, or image captioning. Despite improved empirical performance on the visual question answering task (Hudson & Manning, 2019; Antol et al., 2015; Zellers et al., 2019), visual reasoning remains challenging when measured on more controlled benchmarks that require compositional generalization and causal learning (Zhang et al., 2021; Girdhar & Ramanan, 2019; Chen et al., 2022). It is commonly believed that symbolic or neuro-symbolic methods (Mao et al., 2019; Yi et al., 2018; Lake & Baroni, 2018; Andreas, 2019), as opposed to general-purpose neural networks, are required to achieve generalizable visual reasoning (Yi et al., 2019; Zhang et al., 2021). To our knowledge, our proposed framework is the first to demonstrate the effectiveness of implicit symbolic representation on these visual reasoning benchmarks. Self-supervised Learning from Images and Videos. Self-supervised learning methods aim to learn strong visual representations from unlabelled datasets using pre-text tasks. Pre-text tasks were initially hand-designed to incorporate visual priors (Doersch et al., 2015; Zhang et al., 2016; Caron et al., 2018).
Subsequent works used contrastive formulations which encourage different augmented views of the same input to map to the same feature representation, whilst preventing the model from collapsing to trivial solutions (Oord et al., 2018; Chen et al., 2020; He et al., 2020; Grill et al., 2020; Akbari et al., 2021). Our work is most related to masked self-supervised approaches. Early works in this area used stacked autoencoders (Vincent et al., 2010) or inpainting tasks (Pathak et al., 2016) with convolutional networks. These approaches have seen a resurgence recently, inspired by BERT (Devlin et al., 2018) and vision transformers (Dosovitskiy et al., 2020). BEiT (Bao et al., 2022) encodes masked patches with discrete variational autoencoders and predicts these tokens. Masked Autoencoders (MAE) (He et al., 2022), on the other hand, simply regress to the pixel values of these tokens. Masked Feature Prediction (MFP) (Wei et al., 2022) also regresses to pixelwise targets, but to feature transformations of them rather than the raw RGB values used by MAE. MAE and MFP have both been extended to video (Tong et al., 2022; Feichtenhofer et al., 2022), and are shown to be effective in object detection (Li et al., 2022). Our video reconstruction objective is also based on masked autoencoding; however, the goal is to learn a compact “implicit symbolic” representation for reasoning, as opposed to generic visual descriptors for recognition tasks. We confirm empirically that the proposed method outperforms MAE and VideoMAE pretraining methods by large margins on the CATER and ACRE benchmarks. Object-centric Representation for Reasoning. Most of the existing neuro-symbolic (Mao et al., 2019; Yi et al., 2018) and neural network (Ding et al., 2021) based visual reasoning frameworks require a “preprocessing” stage of symbolic representation construction, which often involves detecting and classifying objects and their attributes from image or video inputs. Our proposed framework aims to investigate the effectiveness of single-stage, end-to-end neural networks for visual reasoning, which is often more desirable than two-stage frameworks for scenarios that require transfer learning or multi-task learning. In order to obtain the object-centric, or symbolic, representation in the preprocessing stage, one can rely on a supervised object detector (Mao et al., 2019), such as Mask R-CNN (He et al., 2017). An alternative approach is to employ self-supervised objectives and learn low-level features that are correlated with objects, such as textures (Geirhos et al., 2018; Hermann et al., 2020; Olah et al., 2017), or objects themselves (Burgess et al., 2019; Locatello et al., 2020; Caron et al., 2021). In practice, supervised or self-supervised approaches for object detection and object-centric representation learning may suffer from the lack of supervised annotations, or from noisy object detection results. For example, Zhang et al. (2022) observed that object-centric representation is beneficial for transfer learning to temporal event classification only when the ground truth object detections are used. 3 METHOD We now introduce the proposed implicit symbolic concept learner (IS-CL) framework. We follow the pretraining and transfer learning paradigm: During pretraining (Figure 2), we task a shared image encoder to output patch-level visual embeddings along with a small set of slot tokens that compress the image’s information.
The pretraining objective is masked autoencoding (MAE) for unlabeled video frames, namely reconstructing the pixel values for a subset of “masked” image patches, given the “unmasked” image patches as context. Compared to the standard MAE for images (He et al., 2022), the image decoder has access to two additional types of context information: (1) the encoded patch embeddings from the unmasked image patches of the neighboring frames; (2) the encoded slot tokens from a subset of context frames. The context information is encoded and propagated by a temporal transformer network. To successfully reconstruct a masked frame, the image encoder must learn a compact representation of the full image via the slot tokens, and the temporal transformer has to learn to capture object permanence and temporal dynamics. During transfer learning (Figure 3), the image decoder can be discarded, and only the image encoder and temporal transformer need to be transferred. The inputs to the temporal transformer are the slot tokens encoded from individual, unmasked video frames. We consider the full finetuning strategy where the weights of both the newly added task decoder (e.g. a linear classifier) and the pretrained image and temporal transformers are updated during transfer learning. Image Encoder: We adopt the Vision Transformer (ViT) backbone to encode each image independently: An input image is broken into non-overlapping patches of 16×16 pixels, which are then linearly projected into patch embeddings as inputs to the transformer encoder. Spatial information is preserved by sinusoidal positional encodings. We use the standard ViT-Base configuration, which has 12 Transformer encoder layers. Each layer has a hidden size of 768, an MLP projection size of 3072, and 12 attention heads. During pretraining, a subset of video frames are spatially masked at random given a masking ratio. As illustrated in Figure 2, only the unmasked image patches are fed into the ViT-B encoder. For context frames and during transfer learning, all image patches are provided as inputs to the image encoder. Slot Tokens: In the seminal work by Locatello et al. (2020), slot tokens are defined as the representational bottleneck in an image autoencoder, where the slot representations are iteratively updated with a GRU after the slots attend to the visual inputs in each iteration. We borrow their terminology, and also use slots to denote the representational bottleneck, which we hope encodes symbolic, or object-centric, information. We generalize their slot update rules by: (1) iteratively updating the input representation from raw pixels to the visual representation encoded by the Transformer encoder (ViT); (2) replacing cross-attention with multi-headed self-attention; (3) using MLP layers with untied weights to update the intermediate slot representation, as opposed to a shared GRU network. These modifications allow us to implement “slot attention” directly with a Transformer encoder, simply by prepending slot tokens as additional inputs to the encoder (similar to [CLS] tokens). The initial slot embeddings at the input of the visual encoder are implemented as a learnable embedding lookup table. To compare the effectiveness of different methods of aggregating “slot” information, we also explore single-headed soft attention and Gumbel-max attention, as used by Xu et al. (2022).
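To make the slot-token encoder and the MAE-style masking above concrete, the following is a minimal PyTorch-style sketch, not the released implementation: the module names, the zero positional-encoding placeholder (standing in for the sinusoidal encodings), and the masking routine are illustrative assumptions, while the 16×16 patches, ViT-Base sizes, and prepended learnable slot tokens follow the description above.

```python
import torch
import torch.nn as nn


class SlotTokenEncoder(nn.Module):
    """ViT-style encoder with learnable slot tokens prepended to the (unmasked) patch tokens."""

    def __init__(self, img_size=64, patch_size=16, dim=768, depth=12,
                 heads=12, mlp_dim=3072, num_slots=1):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # Placeholder for the fixed sinusoidal positional encodings described above.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim), requires_grad=False)
        self.slot_tokens = nn.Embedding(num_slots, dim)  # learnable slot embedding table
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=mlp_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_slots = num_slots

    def forward(self, images, keep_idx=None):
        # images: (B, 3, H, W); keep_idx: (B, K) long indices of unmasked patches, or None.
        x = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
        x = x + self.pos_embed
        if keep_idx is not None:  # keep only the unmasked patches (MAE-style masking)
            x = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        slots = self.slot_tokens.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        x = self.encoder(torch.cat([slots, x], dim=1))  # prepend slot tokens, run the encoder
        return x[:, :self.num_slots], x[:, self.num_slots:]  # slot tokens, patch tokens


# Toy usage: a shallow encoder over 64x64 frames with a 37.5% masking ratio.
enc = SlotTokenEncoder(depth=2)
frames = torch.randn(2, 3, 64, 64)
num_patches, keep = 16, int(16 * (1 - 0.375))
keep_idx = torch.stack([torch.randperm(num_patches)[:keep] for _ in range(2)])
slot_out, patch_out = enc(frames, keep_idx)
print(slot_out.shape, patch_out.shape)  # (2, 1, 768) and (2, 10, 768)
```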
1. What is the focus and contribution of the paper on visual reasoning tasks? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle various visual reasoning tasks without requiring specific object abstraction? 3. Do you have any concerns regarding the explanations provided in the paper for certain design choices and their impact on performance? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential drawbacks to the proposed framework that could be explored further?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes an end-to-end implicit symbolic representation learning framework for visual reasoning tasks. It wisely adopts slot tokens for their information-bottleneck properties, a masked autoencoding objective, and transformers to learn implicit representations in a self-supervised way. The learned implicit representation can later be applied to solve specific visual reasoning tasks with proper head fine-tuning on target data. Such a learning framework has a key advantage: it does not require specific object abstraction (e.g., detection masks) and thus serves more general purposes. Results on two common benchmarks (CATER and ACRE) show consistent improvements achieved by the proposed framework. Strengths And Weaknesses The studied direction is important, and the proposed method takes a further step toward a general solution to visual reasoning. Figure 1 clearly demonstrates the differences and advantages of the proposed method over previous approaches: the proposed framework can perform visual reasoning with neural networks and non-predefined implicit tokens. All the components used (slot tokens, the MAE objective, and the encoder and temporal transformers) are reasonable. Overall the paper is easy to follow. Extensive experiments are conducted on the CATER and ACRE datasets and show that the proposed framework consistently outperforms the previous approaches. Ablation studies are conducted thoroughly around the masking ratio, total frame numbers, context frame numbers, slot token numbers, slot pooling layer, and slot pooling methods. Many analyses and explanations in the current manuscript are intuitive and without supporting empirical evidence. For example, in the "Number of Slot Tokens" section, experiments found that 1 slot works best on the CATER benchmark, and the authors explained why: "the model need only maintain an accurate prediction of where the snitch actually or potentially is". If we add golden snitches up to 2/3/4 (if possible, or in other simulations), will the best performance be achieved by 2/3/4 implicit slots accordingly, or is the best performance still achieved by 1 slot? How will the performance change if we set the slot token number to 100? Will the proposed framework lose its "representational bottleneck" properties and suffer drastic performance drops? In the transfer learning setting, the multi-class classification formulation for the goal of both the CATER and ACRE tasks encodes strong human priors. The tested visual reasoning tasks are, in the end, solved with task-specific designs, while the studied implicit representations are uniform, and thus the contribution is not weakened much. Clarity, Quality, Novelty And Reproducibility Both the clarity and quality are good. The proposed framework may bring some fresh air to the community. Reproducibility depends on the code and pre-trained models being released.
ICLR
Title Towards Learning Implicit Symbolic Representation for Visual Reasoning Abstract Visual reasoning tasks are designed to test a learning algorithm’s capability to infer causal relationships, discover object interactions, and understand temporal dynamics, all from visual cues. It is commonly believed that to achieve compositional generalization on visual reasoning, an explicit abstraction of the visual scene must be constructed; for example, object detection can be applied to the visual input to produce representations that are then processed by a neural network or a neuro-symbolic framework. We demonstrate that a simple and general self-supervised approach is able to learn implicit symbolic representations with general-purpose neural networks, enabling the end-to-end learning of visual reasoning directly from raw visual inputs. Our proposed approach “compresses” each frame of a video into a small set of tokens with a transformer network. The self-supervised learning objective is to reconstruct each image based on the compressed temporal context. To minimize the reconstruction loss, the network must learn a compact representation for each image, as well as capture temporal dynamics and object permanence from temporal context. We evaluate the proposed approach on two visual reasoning benchmarks, CATER and ACRE. We observe that self-supervised pretraining is essential to achieve compositional generalization for our end-to-end trained neural network, and our proposed method achieves on par or better performance compared to recent neuro-symbolic approaches that often require additional object-level supervision. 1 INTRODUCTION This paper investigates if an end-to-end trained neural network is able to solve challenging visual reasoning tasks (Zhang et al., 2021; Girdhar & Ramanan, 2019; Yi et al., 2019) that involve inferring causal relationships, discovering object relations, and capturing temporal dynamics. A prominent approach (Shamsian et al., 2020) for visual reasoning is to construct a structured and interpretable representation from the visual inputs, and then apply symbolic programs (Mao et al., 2019) or neural networks (Ding et al., 2021) to solve the reasoning task. Despite their appealing properties, such as being interpretable and easier to inject expert knowledge into the learning framework, it is practically challenging to determine what types of symbols to use and how to detect them reliably from visual data. In fact, the suitable symbolic representation for a single scene may differ significantly across different tasks: the representation for modeling a single human’s kinematics (e.g. with body parts and joints) is unlikely to be the same as that for modeling group social behaviors (e.g. each pedestrian can be viewed as a whole entity). With the success of unified neural frameworks for multi-task learning (Bommasani et al., 2021), it is desirable to have a unified input interface (e.g. raw pixels) and have the neural network learn to dynamically extract suitable representations for different tasks. However, how to learn distributed representation with a deep neural network that behaves and generalizes similarly to learning methods based on symbolic representation (Zhang et al., 2021) for visual reasoning remains an open problem. The key hypothesis we make in this paper is that a general-purpose neural network, such as Transformers (Vaswani et al., 2017), can be turned into an implicit symbolic concept learner with selfsupervised pre-training. 
For reasoning with image and video cues, the concepts are often organized as object-centric, as objects usually serve as the basic units in visual reasoning tasks. Our proposed approach is inspired by the success of self-supervised learning of object detectors with neural networks (Burgess et al., 2019; Locatello et al., 2020; Niemeyer & Geiger, 2021) and the emergence of object masks in self-supervised classification networks (Caron et al., 2021). It is also motivated by concept binding in neuroscience (Treisman, 1996; Roskies, 1999; Feldman, 2013) and in machine learning (Greff et al., 2020), where concept binding for raw visual inputs refers to the process of segregating and representing visual scenes into a collection of (distributed) concept representation, which can be composed and utilized to solve downstream recognition and reasoning tasks. The concepts are bound in an object-centric fashion, where attributes (e.g. colors, shapes, sizes) of the same objects are associated via dynamic information routing. Different from explicit symbolic representation, implicit symbolic representation via dynamic information binding in a neural network does not require predefining the concept vocabulary or the construction of concept classifiers. The implicit representation can also be “finetuned” directly on the target tasks, it does not suffer from the early commitment or loss of information issues which may happen when visual inputs are converted into symbols and frozen descriptors (e.g. via object detection and classification). Our proposed representation learning framework, implicit symbolic concept learner (IS-CL) consists of two main components: first, a single image is compressed into a small set of tokens with a neural network. This is achieved by a vision transformer (ViT) network (Dosovitskiy et al., 2020) with multiple “slot” tokens (e.g. the [CLS] token in ViT) that attend to the image inputs. Second, the slot tokens are provided as context information via a temporal transformer network for other images in the same video, where the goal is to perform video reconstruction via the masked autoencoding (He et al., 2022) objective with the temporal context. Despite its simplicity, the reconstruction objective motivates the emergence of two desired properties in the pretrained network: first, to provide context useful for video reconstruction, the image encoder must learn a compact representation of the scene with its slot tokens. Second, to utilize the context cues, the temporal transformer must learn to associate objects and their implicit representation across time (“implicit tracking”), and also capture the notion of object permanence – the existence of an object even when it is occluded from the visual observations. One intuitive way to view our proposed IS-CL framework is from the perspective of Slot Attention model by Locatello et al. (2020): Instead of using a shared slot attention module to iteratively refine the encoded tokens, our image encoder is implemented as a stack of Transformer encoder layers with dedicated “slot” tokens. This generalization enables us to directly transfer the pretrained implicit symbolic representation encoded by expressive ViT backbones directly to downstream reasoning tasks. To validate our proposed framework, we conduct extensive ablation experiments on the Compositional Actions and TEmporal Reasoning (CATER) (Girdhar & Ramanan, 2019) benchmark and the Abstract Causal REasoning (ACRE) (Zhang et al., 2021) benchmark. 
We observe that the self-supervised representation learned by IS-CL indeed behave likes the symbolic representation, in the sense that when finetuned on CATER and ACRE, our learned representation achieves competitive or better generalization performance when compared with the frameworks that use explicit object-centric representation. Intriguingly, we observe that the network inductive biases, such as the number of slot tokens per image, play an important role on transfer learning performance: On both datasets, we observe that a small number of slot tokens per image (1 for CATER and 4 for ACRE) lead to the best transfer learning performance on visual reasoning tasks. To the best of our knowledge, our proposed framework is the first to achieve competitive performance on CATER and ACRE without the need to construct explicit symbolic representation from visual inputs. In summary, our paper makes the following two main contributions: First, unlike common assumptions made by neuro-symbolic approaches, we demonstrate that compositional generalization for visual reasoning can be achieved with end-to-end neural networks and implicit symbolic representations. Second, we propose a self-supervised representation learning framework IS-CL, to learn implicit symbolic representation with general-purpose Transformer neural networks. As a byproduct, we show that the learned representation achieves competitive performance on the challenging CATER and ACRE visual reasoning benchmarks. The code and pretrained checkpoints will be released upon paper acceptance. 2 RELATED WORK Neural Network Pretraining. We have collectively made huge progress towards building unified learning frameworks for a wide range of tasks, including natural language understanding (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Liu et al., 2019), visual recognition (Kokkinos, 2017; Kendall et al., 2018; Zamir et al., 2018; Ghiasi et al., 2021), and multimodal perception (Jaegle et al., 2021; Sun et al., 2019; Likhosherstov et al., 2021; Girdhar et al., 2022; Alayrac et al., 2022). As this pretraining-adaptation learning paradigm gains momentum, researchers at Stanford (Bommasani et al., 2021) have even coined the term “foundation models” to refer to these pretrained neural networks. Unfortunately, most of the “foundation models” for visual data focus on perception tasks, such as object classification, detection, or image captioning. Despite improved empirical performance on the visual question answering task (Hudson & Manning, 2019; Antol et al., 2015; Zellers et al., 2019), visual reasoning remains challenging when measured on more controlled benchmarks that require compositional generalization and causal learning (Zhang et al., 2021; Girdhar & Ramanan, 2019; Chen et al., 2022). It is commonly believed that symbolic or neurosymbolic methods (Mao et al., 2019; Yi et al., 2018; Lake & Baroni, 2018; Andreas, 2019), as opposed to the general-purpose neural networks, are required to achieve generalizable visual reasoning Yi et al. (2019); Zhang et al. (2021). To our knowledge, our proposed framework is the first to demonstrate the effectiveness of implicit symbolic representation on these visual reasoning benchmarks. Self-supervised Learning from Images and Videos. Self-supervised learning methods aim to learn strong visual representations from unlabelled datasets using pre-text tasks. Pre-text tasks were initially hand-designed to incorporate visual priors (Doersch et al., 2015; Zhang et al., 2016; Caron et al., 2018). 
Subsequent works used contrastive formulations which encourage different augmented views of the same input to map to the same feature representation, whilst preventing the model from collapsing to trivial solutions (Oord et al., 2018; Chen et al., 2020; He et al., 2020; Grill et al., 2020; Akbari et al., 2021). Our work is most related to masked self-supervised approaches. Early works in this area used stacked autoencoders (Vincent et al., 2010) or inpainting tasks (Pathak et al., 2016) with convolutional networks. These approaches have seen a resurgence recently, inspired by BERT (Devlin et al., 2018) and vision transformers (Dosovitskiy et al., 2020). BEiT (Bao et al., 2022) encodes masked patches with discrete variational autoencoders and predicts these tokens. Masked Autoencoders (MAE) (He et al., 2022), on the other hand, simply regress to the pixel values of these tokens. Masked Feature Prediction (Wei et al., 2022) (MFP) also regresses to pixelwise targets, but feature transformations of them as opposed to the direct RGB values as MAE. MAE and MFP have also both been extended to video too (Tong et al., 2022; Feichtenhofer et al., 2022), and are shown to be effective in object detection Li et al. (2022). The video reconstruction objective is also based on masked autoencoding, however, the goal is to learn a compact “implicit symbolic” representation for reasoning as opposed to generic visual descriptors for recognition tasks. We confirm empirically that the proposed method outperforms MAE and VideoMAE pretraining methods by large margins on the CATER and ACRE benchmarks. Object-centric Representation for Reasoning. Most of the existing neuro-symbolic (Mao et al., 2019; Yi et al., 2018) and neural network (Ding et al., 2021) based visual reasoning frameworks require a “preprocessing” stage of symbolic representation construction, which often involves detecting and classifying objects and their attributes from image or video inputs. Our proposed framework aims to investigate the effectiveness of single-stage, end-to-end neural networks for visual reasoning, which is often more desirable than the two-stage frameworks for scenarios that require transfer learning or multi-task learning. In order to obtain the object-centric, or symbolic representation in the preprocessing stage, one can rely on a supervised object detector (Mao et al., 2019), such as Mask R-CNN (He et al., 2017). An alternative approach is to employ self-supervised objectives and learn low-level features that are correlated with objects, such as textures (Geirhos et al., 2018; Hermann et al., 2020; Olah et al., 2017), or objects themselves (Burgess et al., 2019; Locatello et al., 2020; Caron et al., 2021). In practice, supervised or self-supervised approaches for object detection and object-centric representation learning may suffer from the lack of supervised annotations, or the noisy object detection results. For example, Zhang et al. (2022) observed that object-centric representation is beneficial for transfer learning to temporal event classification only when the ground truth object detections are used. 3 METHOD We now introduce the proposed implicit symbolic concept learning (IS-CL) framework. We follow the pretraining and transfer learning paradigm: During pretraining (Figure 2), we task a shared image encoder to output patch-level visual embeddings along with a small set of slot tokens that compress the image’s information. 
The pretraining objective is masked autoencoding (MAE) on unlabeled video frames, namely reconstructing the pixel values for a subset of “masked” image patches, given the “unmasked” image patches as context. Compared to the standard MAE for images (He et al., 2022), the image decoder has access to two additional types of context information: (1) the encoded patch embeddings from the unmasked image patches of the neighboring frames; (2) the encoded slot tokens from a subset of context frames. The context information is encoded and propagated by a temporal transformer network. To successfully reconstruct a masked frame, the image encoder must learn a compact representation of the full image via the slot tokens, and the temporal transformer has to learn to capture object permanence and temporal dynamics. During transfer learning (Figure 3), the image decoder can be discarded, and only the image encoder and temporal transformer need to be transferred. The inputs to the temporal transformer are the slot tokens encoded from individual, unmasked video frames. We consider the full finetuning strategy where the weights of both the newly added task decoder (e.g. a linear classifier) and the pretrained image and temporal transformers are updated during transfer learning. Image Encoder: We adopt the Vision Transformer (ViT) backbone to encode each image independently: An input image is broken into non-overlapping patches of 16×16 pixels, which are then linearly projected into patch embeddings as inputs to the transformer encoder. Spatial information is preserved by sinusoidal positional encodings. We use the standard ViT-Base configuration, which has 12 Transformer encoder layers. Each layer has a hidden size of 768, an MLP projection size of 3072, and 12 attention heads. During pretraining, a subset of video frames are spatially masked at random, given a masking ratio. As illustrated in Figure 2, only the unmasked image patches are fed into the ViT-B encoder. For context frames and during transfer learning, all image patches are provided as inputs to the image encoder. Slot Tokens: In the seminal work by Locatello et al. (2020), slot tokens are defined as the representational bottleneck in an image autoencoder, where the slot representations are iteratively updated with a GRU after the slots attend to the visual inputs in each iteration. We borrow their terminology, and also use slots to denote the representational bottleneck which we hope will encode symbolic, or object-centric, information. We generalize their slot update rules by: (1) iteratively updating the input representation from raw pixels to the visual representation encoded by the Transformer encoder (ViT); (2) replacing cross-attention with multi-headed self-attention; (3) using MLP layers with untied weights to update the intermediate slot representation as opposed to a shared GRU network. These modifications allow us to implement “slot attention” directly with a Transformer encoder, simply by prepending slot tokens as additional inputs to the encoder (similar to [CLS] tokens). The initial slot embeddings at the input of the visual encoder are implemented as a learnable embedding lookup table. To compare the effectiveness of different methods to aggregate “slot” information, we also explore single-headed soft attention and Gumbel-max attention as used by Xu et al. (2022).
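To make the slot-token mechanism above concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of prepending learnable slot tokens to patch embeddings so that a standard Transformer encoder processes slots and patches jointly. The module names, the use of PyTorch's stock TransformerEncoder, and the layer details are assumptions; the real model may differ (e.g. pre-norm blocks and GELU activations as in ViT).

import torch
import torch.nn as nn

class SlotViTEncoder(nn.Module):
    def __init__(self, num_slots=4, dim=768, depth=12, heads=12, mlp_dim=3072):
        super().__init__()
        # learnable embedding lookup table holding the initial slot embeddings
        self.slot_tokens = nn.Parameter(torch.zeros(1, num_slots, dim))
        nn.init.trunc_normal_(self.slot_tokens, std=0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=mlp_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patch_embeddings):
        # patch_embeddings: (batch, num_unmasked_patches, dim); assumed to already contain
        # the linear projection of 16x16 patches plus sinusoidal positional encodings
        b = patch_embeddings.shape[0]
        slots = self.slot_tokens.expand(b, -1, -1)
        tokens = torch.cat([slots, patch_embeddings], dim=1)  # prepend slots like [CLS] tokens
        out = self.encoder(tokens)
        num_slots = self.slot_tokens.shape[1]
        return out[:, :num_slots], out[:, num_slots:]  # (slot tokens, patch tokens)

Because the slots are simply extra input tokens, multi-headed self-attention lets them aggregate information from all patches at every layer, which is the sense in which the Transformer encoder implements "slot attention" implicitly.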
Temporal Transformer: To propagate temporal information across frames, we use another transformer encoder (with fewer layers than the ViT-B image encoder) which takes the tokens encoded by the image encoder as its inputs. During pretraining, the slot tokens from context frames, along with the unmasked patch tokens from the query frames, are concatenated together and fed into the temporal transformer. For each query image, the temporal transformer outputs its corresponding unmasked patch tokens, contextualized from both the unmasked patches of neighboring query frames and the slot tokens of context frames. The contextualized patches are then fed into the image decoder to compute the reconstruction loss. To preserve temporal position information, we use learned positional embeddings (implemented with an embedding lookup table). During transfer learning, the temporal transformer takes the slot tokens encoded by the image encoder as its inputs. Putting the image encoder and the temporal transformer together, the overall video encoder used for transfer learning can be viewed as a factorized space-time encoder, as proposed by Arnab et al. (2021). It is more parameter-efficient than the vanilla video vision transformer used by Tong et al. (2022). Image Decoder for Pre-training: We use the same image decoder as in (He et al., 2022). As illustrated in Figure 2, the query images are decoded independently given the contextualized unmasked patch tokens. The image decoder is implemented with another transformer, where masked patch tokens are appended to the contextualized unmasked patch tokens as inputs to the image decoder. Sinusoidal positional encodings are used to indicate the spatial locations of individual patch tokens. We use the same number of layers, hidden size, and other hyperparameters as recommended by He et al. (2022). For pre-training purposes, we use the mean squared error to measure the distance between the original query image patches and the reconstructed patches. Transfer Learning: As the goal of pre-training is to learn slot tokens that compress an input image into several implicitly symbolic tokens, we only ask the image encoder to generate the slot tokens during finetuning (Figure 3); these are fed to the temporal transformer as its inputs. We then average pool the output tokens of the temporal transformer and add a task-specific decoder to make predictions. Both benchmarks used in our experiments can be formulated as multi-class classification: For CATER, the goal is to predict the final location of the golden snitch (Figure 4, top), where the location is quantized into one of the 6×6 positions; for ACRE, the goal is to predict whether the platform will activate, not activate, or remain undetermined given a query scenario (Figure 4, bottom). We hence use linear classifiers as the task-specific decoders and the standard softmax cross-entropy loss for transfer learning. 4 EXPERIMENTS We present results on CATER (Girdhar & Ramanan, 2019) and ACRE (Zhang et al., 2021). 4.1 EXPERIMENTAL SETUP Benchmarks: In the classic “shell game”, a ball is placed under a cup and shuffled with other empty cups on a flat surface; the objective is then to determine which cup in the final shuffled configuration contains the ball. Inspired by this, CATER is a dataset composed of videos of CLEVR (Johnson et al., 2017) objects as they move around the scene.
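As a concrete illustration of the two learning stages just described (the masked mean-squared-error reconstruction loss used for pre-training, and the slot tokens → temporal transformer → average pooling → linear classifier path used for transfer learning), here is a minimal, hedged sketch. It reflects our reading of the text rather than the released code; the function and class names, the layer counts, and the use of PyTorch's stock TransformerEncoder are assumptions.

import torch
import torch.nn as nn

def masked_reconstruction_loss(pred_patches, target_patches, masked_idx):
    # MSE between reconstructed and original pixels, computed on the masked query
    # patches (following standard MAE practice); shapes: (batch, num_patches, patch_dim)
    return ((pred_patches[:, masked_idx] - target_patches[:, masked_idx]) ** 2).mean()

class TransferHead(nn.Module):
    # Temporal transformer over per-frame slot tokens, followed by average pooling
    # and a linear task decoder (e.g. 36-way for CATER, 3-way for ACRE).
    def __init__(self, dim=768, num_layers=4, num_heads=12, num_classes=36):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, slot_tokens):
        # slot_tokens: (batch, num_frames * slots_per_frame, dim); learned temporal
        # positional embeddings are assumed to have been added by the caller
        out = self.temporal(slot_tokens)
        return self.classifier(out.mean(dim=1))  # trained with softmax cross-entropy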
A special golden ball, called the “snitch”, is present in each video, and the associated reasoning task is to determine the snitch’s position in the final frame. Object locations in the CATER dataset are denoted by positions on an invisible 6-by-6 grid; therefore, in essence, the CATER task boils down to a 36-way classification problem. Solving this task is complicated by the fact that larger objects can visually occlude smaller ones, and certain objects can be picked up and placed down to explicitly cover other objects; when an object is covered, its position changes in accordance with the larger object that covers it. Therefore, in order to solve the task successfully, a model must learn to reason not only about objects and movement, but also about object permanence, long-term occlusions, and recursive covering relationships. The CATER dataset features a split where the camera is statically fixed to a particular angle and position throughout the videos, as well as a moving-camera split where the viewing angle can change over time. We use the static split for evaluation. Each video has 300 frames. A visualization of a CATER video and the associated snitch localization task is shown in Figure 4 (top). The ACRE dataset tests a model’s ability to understand and discover causal relationships. The construction of the dataset is motivated by the Blicket experiment in developmental psychology (Gopnik & Sobel, 2000), where there is a platform as well as many distinct objects, some of which possess the “Blicketness” property. When at least one object with the “Blicketness” property is placed on the platform, music will be played; otherwise, the platform remains silent. Given a few context demonstrations of different object combinations, as well as the resulting effects, young children have been shown to successfully infer which objects possess the “Blicketness” property, and which combinations would cause the platform to play music. In ACRE, the platform is represented by a large pink block that either glows or remains dim depending on the combination of CLEVR objects placed on it. Given six evidence frames of objects placed on the platform, the objective of the reasoning task is to determine the effect a query frame, containing a potentially novel object combination, would have on the platform. Possible answers include lighting up the platform, keeping the platform dim, or unable to be determined with the given evidence frames. A visualization of an example ACRE sample is shown in Figure 4 (bottom). Pretraining data: We use the unlabeled videos from the training and validation splits of the CATER dataset for pretraining. Both the static and moving camera splits are used, which together contain 9,304 videos. In our experiments, we observe that ACRE requires higher-resolution inputs during pretraining and finetuning. Our default preprocessing setup is to randomly sample 32 frames of 64×64 pixels for pretraining checkpoints to be transferred to CATER, and 16 frames of 224×224 pixels for pretraining checkpoints to be transferred to ACRE. The randomly sampled frames are sorted to preserve the arrow-of-time information. No additional data augmentations are performed. Transfer learning: For CATER, we evaluate on the static split, which has 3,065 training, 768 validation, and 1,645 test examples. We select the hyperparameters based on the validation performance, then use both training and validation data to train the model to be evaluated on the test split.
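To spell out the 36-way output space described above, the snitch's final position on the invisible 6-by-6 grid can be flattened into a single class index. The row-major ordering below is purely illustrative, since the dataset's exact indexing convention is not specified in this text.

def snitch_class(row: int, col: int, grid_size: int = 6) -> int:
    # Flatten a (row, col) grid cell into one of 36 classes (row-major; illustrative).
    assert 0 <= row < grid_size and 0 <= col < grid_size
    return row * grid_size + col

assert snitch_class(0, 0) == 0 and snitch_class(2, 5) == 17 and snitch_class(5, 5) == 35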
By default, we use 100 randomly sampled frames of 64×64 pixels during training, and 100 uniformly sampled frames with a stride of 3 during evaluation. For ACRE, we explore all three splits, all of which contain 24,000 training, 8,000 validation, and 8,000 test examples. We again use the validation set to select hyperparameters and use both training and validation data to obtain the models evaluated on the test split. We use all seven frames of 224×224 pixels during training and evaluation. Default hyperparameters: We use the Adam optimizer for pretraining at a learning rate of 10⁻³, and the AdamW optimizer for transfer learning at a learning rate of 5×10⁻⁵. The pretraining checkpoints are trained from scratch for 1,000 epochs with a batch size of 256. For transfer learning, we finetune the pretrained checkpoints for 500 epochs with a batch size of 512. All experiments are performed on TPUs with 32 cores. Below we study the impact of several key model hyperparameters. 4.2 ABLATION STUDY We use CATER for the ablation study in Table 1, and reuse the optimal hyperparameters in the ACRE experiments. The impact of the number of slot tokens for ACRE is studied separately in Table 2. Masking ratio: Contrary to the large masking ratio employed in vanilla MAE, we find that the optimal masking ratio is 37.5% in terms of downstream CATER accuracy. This is perhaps due to the fact that CATER is designed to test “compositional generalization”, so the spatial context provides less information than in natural images and video. Number of Total Frames and Context Frames: We also study the impact of the number of frames the implicit symbolic concept learner is pretrained on, and find the best performance with 32 frames. Fixing the total number of pretraining frames, we then ablate over the number of context frames, which are the frames from which slot representations are generated. When 0 context frames are used, we essentially utilize only patch-level representations to perform reconstruction with the temporal transformer (simulating a per-frame MAE followed by a temporal transformer). We find that the best performance is achieved with 8 context frames, which balances the number of slot representations with patch-level representations. Number of Slot Tokens: Another useful ablation is on the impact of the number of slots used for CATER and ACRE. For CATER, we find that only 1 slot token per frame is enough to solve the reasoning task. We believe that this may be due to how the reasoning objective of CATER is designed: to successfully perform snitch localization, the model need only maintain an accurate prediction of where the snitch actually or potentially is, and can ignore a more detailed representation of the other objects in the scene. Under the hypothesis that the slot tokens represent symbols, perhaps a single slot token is enough to contain the snitch location. On the other hand, when ablating over the number of tokens for the ACRE task (Table 3), we find that a higher number of tokens is beneficial for reasoning performance. This can potentially be explained by the need to model multiple objects across evidence frames in order to solve the final query; under our belief that slot tokens encode symbols, multiple tokens may be needed in order to achieve the best final performance. Slot Pooling Layer and Method: We ablate over which layer to pool over to generate the slot tokens. The patch tokens are discarded after the pooling layer, and only the slot tokens are further processed by the additional Transformer encoder layers.
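As a small illustration of the per-frame spatial masking discussed in the masking-ratio ablation above, the snippet below randomly splits a frame's patches into masked and unmasked sets at the reported 37.5% optimum. It is a schematic sketch of the idea, not the authors' data pipeline.

import torch

def random_patch_mask(num_patches: int, mask_ratio: float = 0.375):
    # Randomly choose which patches of a frame are masked; only the unmasked
    # patches are fed to the image encoder during pretraining.
    num_masked = int(round(num_patches * mask_ratio))
    perm = torch.randperm(num_patches)
    return perm[:num_masked], perm[num_masked:]  # (masked indices, unmasked indices)

# A 64x64 frame with 16x16 patches has (64 // 16) ** 2 = 16 patches, of which 6 are masked.
masked_idx, unmasked_idx = random_patch_mask((64 // 16) ** 2)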
As expected, it is desirable to use all image encoder layers to process both slot and patch tokens. Additionally, we also study the impact of the slot pooling method, and observe that adding single-headed soft attention or Gumbel-max attention is outperformed by simply using the slot tokens directly. 4.3 COMPARISON TO THE STATE-OF-THE-ART Table 4 compares the results of IS-CL against other state-of-the-art models on CATER snitch localization. We also compare IS-CL on ACRE against other existing models in Table 5. We pretrain MAE and VideoMAE ourselves on the same pretraining dataset and search for their corresponding optimal hyperparameters. We observe that the spacetime ViViT used by VideoMAE leads to collapsed training, and we therefore modify it to use a factorized encoder. Other results are cited from the published results. IS-CL achieves the best performance among the approaches that do not depend on explicit object-centric representations; it also achieves overall state-of-the-art performance on the comp and iid splits of ACRE. 5 CONCLUSION AND FUTURE WORK In this work we propose the implicit symbolic concept learner (IS-CL) framework, which trains a neural network end-to-end to solve complex visual reasoning tasks without explicitly constructing an object-centric representation. IS-CL learns such implicit symbolic representations as slot embeddings in a pretraining step through a self-supervised, masked video reconstruction objective. We observe the exciting result that the learned representation behaves like its symbolic counterparts, as measured by compositional generalization performance on the CATER and ACRE benchmarks. Future work includes probing experiments to understand the information encoded by the slot tokens, and applying IS-CL to large-scale natural image and video datasets.
1. What is the main contribution of the paper regarding compositional visual reasoning in videos? 2. What are the strengths of the proposed approach, particularly in terms of self-supervised pretraining and transfer learning? 3. What are the weaknesses of the paper, especially regarding the claims of implicit symbolic representation and the lack of sufficient study on the representation itself? 4. Do you have any concerns about the experimental results and their comparison with other approaches? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method for compositional visual reasoning in videos, based on recent advances in self-supervised pretraining. The method first trains a spatial-temporal transformer that reconstructs video frames under the masked autoencoder paradigm, then performs reasoning via transfer learning. The authors claim, and demonstrate via an extensive experimental study on the CATER and ACRE datasets, that a generalizable compact representation is learnt, superior to other approaches based on object-centric representation. Ablation studies are conducted. Strengths And Weaknesses Strength: This paper proposes a powerful end-to-end network for compositional visual reasoning, and achieved nice results on two datasets that were regarded as (and designed to be) challenging for end-to-end systems. The paper also demonstrates that self-supervised pre-training can lead to useful representations for visual reasoning, which sheds light on further research. The paper is well written and backed by rich experimental results. Weakness: The paper claims that the model learns an "implicit symbolic representation" without a clear definition and elaboration. What is the difference between a "symbolic representation" and a regular latent representation learned by another self-supervised model? Moreover, there isn't sufficient study on the representation itself, other than transfer learning results, to back these claims. How does the same architecture perform without the representation? The experimental results have not established decisive evidence that the end-to-end method is superior to object-centric representations. Specifically, under the same supervision, the model does not significantly outperform ALOE and ALOE++. The paper also does not provide an evaluation on the CLEVRER dataset, a video reasoning benchmark where the object-based ALOE performs nicely. Minor: The data flow in the figures goes upwards instead of downwards, which might cause unnecessary confusion for readers. Clarity, Quality, Novelty And Reproducibility The paper is well written with sufficient discussion of related works, even though the presentation of figures can be improved. The authors claim that code will be released upon acceptance of the manuscript.
ICLR
Title Towards Learning Implicit Symbolic Representation for Visual Reasoning Abstract Visual reasoning tasks are designed to test a learning algorithm’s capability to infer causal relationships, discover object interactions, and understand temporal dynamics, all from visual cues. It is commonly believed that to achieve compositional generalization on visual reasoning, an explicit abstraction of the visual scene must be constructed; for example, object detection can be applied to the visual input to produce representations that are then processed by a neural network or a neuro-symbolic framework. We demonstrate that a simple and general self-supervised approach is able to learn implicit symbolic representations with general-purpose neural networks, enabling the end-to-end learning of visual reasoning directly from raw visual inputs. Our proposed approach “compresses” each frame of a video into a small set of tokens with a transformer network. The self-supervised learning objective is to reconstruct each image based on the compressed temporal context. To minimize the reconstruction loss, the network must learn a compact representation for each image, as well as capture temporal dynamics and object permanence from temporal context. We evaluate the proposed approach on two visual reasoning benchmarks, CATER and ACRE. We observe that self-supervised pretraining is essential to achieve compositional generalization for our end-to-end trained neural network, and our proposed method achieves on-par or better performance compared to recent neuro-symbolic approaches that often require additional object-level supervision. 1 INTRODUCTION This paper investigates whether an end-to-end trained neural network is able to solve challenging visual reasoning tasks (Zhang et al., 2021; Girdhar & Ramanan, 2019; Yi et al., 2019) that involve inferring causal relationships, discovering object relations, and capturing temporal dynamics. A prominent approach (Shamsian et al., 2020) for visual reasoning is to construct a structured and interpretable representation from the visual inputs, and then apply symbolic programs (Mao et al., 2019) or neural networks (Ding et al., 2021) to solve the reasoning task. Despite their appealing properties, such as being interpretable and making it easier to inject expert knowledge into the learning framework, it is practically challenging to determine what types of symbols to use and how to detect them reliably from visual data. In fact, the suitable symbolic representation for a single scene may differ significantly across different tasks: the representation for modeling a single human’s kinematics (e.g. with body parts and joints) is unlikely to be the same as that for modeling group social behaviors (e.g. each pedestrian can be viewed as a whole entity). With the success of unified neural frameworks for multi-task learning (Bommasani et al., 2021), it is desirable to have a unified input interface (e.g. raw pixels) and have the neural network learn to dynamically extract suitable representations for different tasks. However, how to learn a distributed representation with a deep neural network that behaves and generalizes similarly to learning methods based on symbolic representation (Zhang et al., 2021) for visual reasoning remains an open problem. The key hypothesis we make in this paper is that a general-purpose neural network, such as a Transformer (Vaswani et al., 2017), can be turned into an implicit symbolic concept learner with self-supervised pre-training.
For reasoning with image and video cues, the concepts are often organized in an object-centric manner, as objects usually serve as the basic units in visual reasoning tasks. Our proposed approach is inspired by the success of self-supervised learning of object detectors with neural networks (Burgess et al., 2019; Locatello et al., 2020; Niemeyer & Geiger, 2021) and the emergence of object masks in self-supervised classification networks (Caron et al., 2021). It is also motivated by concept binding in neuroscience (Treisman, 1996; Roskies, 1999; Feldman, 2013) and in machine learning (Greff et al., 2020), where concept binding for raw visual inputs refers to the process of segregating and representing visual scenes as a collection of (distributed) concept representations, which can be composed and utilized to solve downstream recognition and reasoning tasks. The concepts are bound in an object-centric fashion, where attributes (e.g. colors, shapes, sizes) of the same objects are associated via dynamic information routing. Different from explicit symbolic representation, implicit symbolic representation via dynamic information binding in a neural network does not require predefining the concept vocabulary or constructing concept classifiers. The implicit representation can also be “finetuned” directly on the target tasks; it does not suffer from the early-commitment or information-loss issues that may arise when visual inputs are converted into symbols and frozen descriptors (e.g. via object detection and classification). Our proposed representation learning framework, the implicit symbolic concept learner (IS-CL), consists of two main components: first, a single image is compressed into a small set of tokens with a neural network. This is achieved by a vision transformer (ViT) network (Dosovitskiy et al., 2020) with multiple “slot” tokens (analogous to the [CLS] token in ViT) that attend to the image inputs. Second, the slot tokens are provided as context information via a temporal transformer network for other images in the same video, where the goal is to perform video reconstruction via the masked autoencoding (He et al., 2022) objective with the temporal context. Despite its simplicity, the reconstruction objective encourages the emergence of two desired properties in the pretrained network: first, to provide context useful for video reconstruction, the image encoder must learn a compact representation of the scene with its slot tokens. Second, to utilize the context cues, the temporal transformer must learn to associate objects and their implicit representation across time (“implicit tracking”), and also capture the notion of object permanence – the existence of an object even when it is occluded from the visual observations. One intuitive way to view our proposed IS-CL framework is from the perspective of the Slot Attention model by Locatello et al. (2020): instead of using a shared slot attention module to iteratively refine the encoded tokens, our image encoder is implemented as a stack of Transformer encoder layers with dedicated “slot” tokens. This generalization enables us to transfer the pretrained implicit symbolic representation encoded by expressive ViT backbones directly to downstream reasoning tasks. To validate our proposed framework, we conduct extensive ablation experiments on the Compositional Actions and TEmporal Reasoning (CATER) (Girdhar & Ramanan, 2019) benchmark and the Abstract Causal REasoning (ACRE) (Zhang et al., 2021) benchmark.
1. What is the main contribution of the paper, and how does it build upon previous works in the field? 2. What are the strengths and weaknesses of the proposed framework, particularly in its ability to reason about videos? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or suggestions regarding the title and definition of the proposed framework? 5. Are there any questions or suggestions regarding the experimental results and comparisons with other works? 6. Are there any technical questions or suggestions regarding the implementation and extension of the proposed framework? 7. Are there any clarification questions regarding the training objective, architecture, and attention mechanism of the model? 8. Are there any suggestions for improving the completeness and impact of the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes the framework "Implicit Symbolic Concept Learner" (IS-CL), a transformer-based architecture that is capable of reasoning about videos. The framework follows a pretraining–transfer-learning pipeline. That is, the model is first pretrained on a collection of unlabelled videos (with the MAE objective), and then finetuned on the target task (e.g., predicting the location of a certain object). Strengths And Weaknesses The paper presentation is clear and well-connected with existing works such as NS-CL and ALOE (Figure 1). There are a few places where the authors connect their work with Slot Attention, which has greatly helped me to understand the connections and differences between the proposed framework and others. The proposed framework is also relatively straightforward, with clear motivation. Experimental results have shown the success of the proposed framework. Furthermore, the ablation studies seem very adequate. Talking about weaknesses, I will start with a few conceptual ones. First, the idea of "concept learning," following the notation from Mao et al., primarily refers to the correspondence between linguistic units (words, phrases) and visual representations (e.g., red, cubic, left-of, etc.). This seems to be different from the definition of "concepts" or "implicit symbolic concept" in the paper, which, in my understanding, refers to "implicitly defined objects." Second, when we talk about "visual concepts," they are usually more "abstract" than "pixel reconstruction." However, since the overall training paradigm (at pretraining time) is to reconstruct the pixels, I don't think there is evidence that the model is capable of discovering "symbolic concepts," e.g., colors, shapes, etc., from the pretraining. With all that, I am wondering if the title/model should be better phrased as "implicit object learning" or a similar phrase? Here are a few technical questions. First, I am wondering why the authors have not tried visual question-answering benchmarks such as CLEVRER, especially given that, based on the ablation study (Table 2a), CATER does not require an "implicit symbolic" representation (with slot = 1 it is basically just a per-image representation). It seems that the framework can be easily extended to that, as in ALOE. And I think extending the framework to that can significantly improve the completeness of the paper. Second, in Table 4, it seems that the model still underperforms several baselines. Although the authors may argue that those better-performing algorithms are "object-centric," I think it is still completely reasonable to compare IS-CL with ALOE, because ALOE also does not use any object-detection labels during training. Again, I think adding a new benchmark that showcases the advantage of the proposed method would be ideal. Third, more of a clarification question, regarding the comparison between VideoMAE and IS-CL: is the only difference that you are using a "single-token" embedding for each frame whereas VideoMAE uses all visual tokens? Because these two models have very similar training objectives and similar architectures. Fourth, is there any way that we can visualize the learned "implicit slot tokens"? For example, can you visualize the attention maps? Finally, you mentioned that you have replaced the slot-attention-style encoding with a customized transformer-style encoding. Do you have any ablation studies on that?
Clarification questions: Page 5: "allowing the slots to attend not only to raw visual inputs, but also to the encoded patch-level representation." Can you be more specific about this? Clarity, Quality, Novelty And Reproducibility The paper is well presented and has novelty over existing works.
ICLR
Title Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies Abstract A main theoretical interest in biology and physics is to identify the nonlinear dynamical system (DS) that generated observed time series. Recurrent Neural Networks (RNNs) are, in principle, powerful enough to approximate any underlying DS, but in their vanilla form suffer from the exploding vs. vanishing gradients problem. Previous attempts to alleviate this problem resulted either in more complicated, mathematically less tractable RNN architectures, or strongly limited the dynamical expressiveness of the RNN. Here we address this issue by suggesting a simple regularization scheme for vanilla RNNs with ReLU activation which enables them to solve long-range dependency problems and express slow time scales, while retaining a simple mathematical structure which makes their DS properties partly analytically accessible. We prove two theorems that establish a tight connection between the regularized RNN dynamics and its gradients, illustrate on DS benchmarks that our regularization approach strongly eases the reconstruction of DS which harbor widely differing time scales, and show that our method is also on par with other long-range architectures like LSTMs on several tasks. 1 INTRODUCTION Theories in the natural sciences are often formulated in terms of sets of stochastic differential or difference equations, i.e. as stochastic dynamical systems (DS). Such systems exhibit a range of common phenomena, like (limit) cycles, chaotic attractors, or specific bifurcations, which are the subject of nonlinear dynamical systems theory (DST; Strogatz (2015); Ott (2002)). A long-standing desire is to retrieve the generating dynamical equations directly from observed time series data (Kantz & Schreiber, 2004), and thus to ‘automate’ the laborious process of scientific theory building to some degree. A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (Chen et al., 2017; Champion et al., 2019; Ayed et al., 2019; Koppe et al., 2019; Hamilton et al., 2017; Razaghi & Paninski, 2019; Hernandez et al., 2020). Often these are based on sufficiently expressive series expansions for approximating the unknown system of generative equations, such as polynomial basis expansions (Brunton et al., 2016; Champion et al., 2019) or recurrent neural networks (RNNs) (Vlachas et al., 2018; Hernandez et al., 2020; Durstewitz, 2017; Koppe et al., 2019). Formally, RNNs are (usually discrete-time) nonlinear DS that are dynamically universal in the sense that they can approximate to arbitrary precision the flow field of any other DS on compact sets of the real space (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Hanson & Raginsky, 2020). Hence, RNNs seem like a good choice for reconstructing – in this sense of dynamically equivalent behavior – the set of governing equations underlying real time series data. However, RNNs in their vanilla form suffer from the ‘vanishing or exploding gradients’ problem (Hochreiter & Schmidhuber, 1997; Bengio et al., 1994): During training, error gradients tend to either exponentially explode or decay away across successive time steps, and hence vanilla RNNs face severe problems in capturing long time scales or long-range dependencies in the data. Specially designed RNN architectures equipped with gating mechanisms and linear memory cells have been proposed for mitigating this issue (Hochreiter & Schmidhuber, 1997; Cho et al., 2014).
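To illustrate the exploding-vs-vanishing gradient problem stated above with a toy computation (our own illustration, not taken from the paper), note that backpropagation through T time steps multiplies T recurrent Jacobians, so the gradient norm scales roughly like the T-th power of the recurrent weight scale. The sketch below ignores the nonlinearity's derivative and uses a scaled orthogonal matrix purely for clarity.

import numpy as np

rng = np.random.default_rng(0)
T, n = 50, 32
for scale in (0.8, 1.0, 1.2):  # recurrent weight scale below, at, and above 1
    W = scale * np.linalg.qr(rng.standard_normal((n, n)))[0]  # scaled orthogonal matrix
    J = np.eye(n)
    for _ in range(T):  # product of T identical linear Jacobians
        J = W @ J
    # spectral norm shrinks like 0.8**50 (~1e-5) or explodes like 1.2**50 (~9e3)
    print(scale, np.linalg.norm(J, 2))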
However, from a DST perspective, simpler models that can be more easily analyzed and interpreted in DS terms (Monfared & Durstewitz, 2020a;b), and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS (Koppe et al., 2019; Hernandez et al., 2020; Zhao & Park, 2020), would be preferable. More recent solutions to the vanishing vs. exploding gradient problem attempt to retain the simplicity of vanilla RNNs by initializing or constraining the recurrent weight matrix to be the identity (Le et al., 2015), orthogonal (Henaff et al., 2016; Helfrich et al., 2018) or unitary (Arjovsky et al., 2016). While merely initialization-based solutions may be unstable and quickly dissolve during training, orthogonal or unitary constraints, on the other hand, are too restrictive for reconstructing DS, and more generally from a computational perspective as well (Kerg et al., 2019): For instance, neither chaotic behavior (which requires diverging directions) nor multi-stability, that is the coexistence of several distinct attractors, is possible. Here we therefore suggest a different solution to the problem which takes inspiration from computational neuroscience: Supported by experimental evidence (Daie et al., 2015; Brody et al., 2003), line or plane attractors have been suggested as a dynamical mechanism for maintaining arbitrary information in working memory (Seung, 1996; Machens et al., 2005), a goal-related active form of short-term memory. A line or plane attractor is a continuous set of marginally stable fixed points to which the system’s state converges from some neighborhood, while along the line itself there is neither convergence nor divergence (Fig. 1A). Hence, a line attractor will perform a perfect integration of inputs and retain updated states indefinitely, while a slightly detuned line attractor will equip the system with arbitrarily slow time constants (Fig. 1B). This latter configuration has been suggested as a dynamical basis for neural interval timing (Durstewitz, 2003; 2004). The present idea is to exploit this dynamical setup for long short-term memory and arbitrarily slow time scales by forcing part of the RNN’s subspace toward a plane (line) attractor configuration through specifically designed regularization terms. Specifically, our goal here is not so much to beat the state of the art on long short-term memory tasks, but rather to address the exploding vs. vanishing gradient problem within a simple, dynamically tractable RNN, optimized for DS reconstruction and interpretation. For this we build on piecewise-linear RNNs (PLRNNs) (Koppe et al., 2019; Monfared & Durstewitz, 2020b) which employ ReLU activation functions. PLRNNs have a simple mathematical structure (see eq. 1) which makes them dynamically interpretable in the sense that many geometric properties of the system’s state space can in principle be computed analytically, including fixed points, cycles, and their stability (Suppl. 6.1.2; Koppe et al. (2019); Monfared & Durstewitz (2020a)), i.e. they do not require numerical techniques (Sussillo & Barak, 2013).
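Since eq. 1 is not reproduced in this excerpt, the following is a hedged sketch of a single PLRNN latent-state update, assuming the standard form z_t = A z_{t-1} + W relu(z_{t-1}) + h with a diagonal matrix A of self-connections and an off-diagonal connection matrix W, as in Koppe et al. (2019); the exact parameterization (e.g. external inputs and process noise) in the paper may differ.

import numpy as np

def plrnn_step(z, A_diag, W, h):
    # One latent update of a piecewise-linear RNN: linear self-connections plus
    # ReLU-gated recurrent input plus bias (external inputs and noise omitted).
    return A_diag * z + W @ np.maximum(z, 0.0) + h

# tiny usage example with a 3-unit latent state
z_next = plrnn_step(np.zeros(3), A_diag=np.array([0.9, 0.9, 0.9]),
                    W=np.zeros((3, 3)), h=np.array([0.1, 0.0, -0.1]))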
Moreover, PLRNNs constitute a type of piecewise linear (PWL) map for which many important bifurcations have been comparatively well characterized (Monfared & Durstewitz, 2020a; Avrutin et al., 2019). PLRNNs can furthermore be translated into equivalent continuous time ordinary differential equation (ODE) systems (Monfared & Durstewitz, 2020b) which comes with further advantages for analysis, e.g. continuous flow fields (Fig. 1A,B). We retain the PLRNN’s structural simplicity and analytical tractability while mitigating the exploding vs. vanishing gradient problem by adding special regularization terms for a subset of PLRNN units to the loss function. These terms are designed to push the system toward line attractor configurations, without strictly enforcing them, along some – but not all – directions in state space. We further establish a tight mathematical relationship between the PLRNN dynamics and the behavior of its gradients during training. Finally, we demonstrate that our approach outperforms LSTM and other, initialization-based, methods on a number of ‘classical’ machine learning benchmarks (Hochreiter & Schmidhuber, 1997). Much more importantly in the present DST context, we demonstrate that our new regularization-supported inference efficiently captures all relevant time scales when reconstructing challenging nonlinear DS with multiple short- and long-range phenomena. 2 RELATED WORK Dynamical systems reconstruction. From a natural science perspective, the goal of reconstructing or identifying the underlying DS is substantially more ambitious than (and different from) building a system that ‘merely’ yields good ahead predictions: In DS identification we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties (see section 3.5, Fig. S2; Kantz & Schreiber (2004)). Earlier work using RNNs for DS reconstruction (Roweis & Ghahramani, 2002; Yu et al., 2005) mainly focused on inferring the posterior over latent trajectories Z = {z1, . . . ,zT } given time series data X = {x1, . . . ,xT }, p(Z|X), and on ahead predictions (Lu et al., 2017), as does much of the recent work on variational inference of DS (Duncker et al., 2019; Zhao & Park, 2020; Hernandez et al., 2020). Although this enables insight into the dynamics along the empirically observed trajectories, both – posterior inference and good ahead predictions – do not per se guarantee that the inferred models can generate the underlying attractor geometries on their own (see Fig. S2, Koppe et al. (2019)). In contrast, if fully generative reconstruction of the underlying DS in this latter sense were achieved, formal analysis or simulation of the resulting RNN equations could provide a much deeper understanding of the dynamical mechanisms underlying empirical observations (Fig. 1 C). Some approaches geared toward this latter goal of full DS reconstruction make specific structural assumptions about the form of the DS equations (‘white box approach’; Meeds et al. (2019); Raissi (2018); Gorbach et al. (2017)), e.g. based on physical or biological domain knowledge, and focus on estimating the system’s latent states and parameters, rather than approximating an unknown DS based on the observed time series information alone (‘black box approach’). Others (Trischler & D’Eleuterio, 2016; Brunton et al., 2016; Champion et al., 2019) attempt to approximate the flow field, obtained e.g. 
by numerical differentiation, directly through basis expansions or neural networks. However, numerical derivatives are problematic for their high variance and other numerical issues (Raissi, 2018; Baydin et al., 2018; Chen et al., 2017). Another factor to consider is that in many biological systems like the brain the intrinsic dynamics are highly stochastic with many noise sources, like probabilistic synaptic release (Stevens, 2003). Models that do not explicitly account for dynamical process noise (Ayed et al., 2019; Champion et al., 2019; Rudy et al., 2019) are therefore less suited and more vulnerable to model misspecification. Finally, some fully probabilistic models for DS reconstruction based on GRU (Fraccaro et al., 2016), LSTM (Zheng et al., 2017; Vlachas et al., 2018), or radial basis function (Zhao & Park, 2020) networks, are not easily interpretable and amenable to DS analysis in the sense defined in sect. 3.3. Most importantly, none of these previous approaches consider the long-range dependency problem within more easily tractable RNNs for DS. Long-range dependency problems in RNNs. Error gradients in vanilla RNNs tend to either explode or vanish due to the large product of derivative terms that results from recursive application of the chain rule over time steps (Hochreiter, 1991; Bengio et al., 1994; Hochreiter & Schmidhuber, 1997). To address this issue, RNNs with gated memory cells (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) have been specifically designed, but their more complicated mathematical structure makes them less amenable to a systematic DS analysis. Even simple objects like fixed points of these systems have to be found by numerical techniques (Sussillo & Barak, 2013; Jordan et al., 2019). Thus, approaches which retain the simplicity of vanilla RNNs while solving the exploding vs. vanishing gradients problem would be desirable. Recently, Le et al. (2015) observed that initialization of the recurrent weight matrixW to the identity in ReLU-based RNNs may yield performance en par with LSTMs on standard machine learning benchmarks. Talathi & Vartak (2016) expanded on this idea by initializing the recurrence matrix such that its largest absolute eigenvalue is 1. Later work en- forced orthogonal (Henaff et al., 2016; Helfrich et al., 2018; Jing et al., 2019) or unitary (Arjovsky et al., 2016) constraints on the recurrent weight matrix during training. While this appears to yield long-term memory performance sometimes superior to that of LSTMs (but see (Henaff et al., 2016)), these networks are limited in their computational power (Kerg et al., 2019). This may be a consequence of the fact that RNNs with orthogonal recurrence matrix are quite restricted in the range of dynamical phenomena they can produce, e.g. chaotic attractors are not possible since (locally) diverging eigen-directions are disabled. Our approach therefore is to establish line/plane attractors only along some but not all directions in state space, and to only push the RNN toward these configurations but not strictly enforce them, such that convergence or (local) divergence of RNN dynamics is still possible. We furthermore implement these concepts through regularization terms in the loss functions, rather than through mere initialization. This way plane attractors are encouraged throughout training without fading away. 
3 MODEL FORMULATION AND THEORETICAL ANALYSIS 3.1 BASIC MODEL FORMULATION Assume we are given two multivariate time series S = {st} and X = {xt}, one we will denote as ‘inputs’ (S) and the other as ‘outputs’ (X). In the ‘classical’ (supervised) machine learning setting, we usually wish to map S on X through a RNN with latent state equation zt = Fθ (zt−1, st) and outputs xt ∼ pλ (xt|zt), as for instance in the ‘addition problem’ (Hochreiter & Schmidhuber, 1997). In DS reconstruction, in contrast, we usually have a dense time seriesX from which we wish to infer (unsupervised) the underlying DS, where S may provide an additional forcing function or sparse experimental inputs or perturbations. While our focus in this paper is on this latter task, DS reconstruction, we will demonstrate that our approach brings benefits in both these settings. Here we consider for the latent model a PLRNN (Koppe et al., 2019) which takes the form zt = Azt−1 +Wφ(zt−1) +Cst + h+ εt, εt ∼ N (0,Σ), (1) where zt ∈ RM×1 is the hidden state (column) vector of dimensionM ,A ∈ RM×M a diagonal and W ∈ RM×M an off-diagonal matrix, st ∈ RK×1 the external input of dimension K, C ∈ RM×K the input mapping, h ∈ RM×1 a bias, and εt a Gaussian noise term with diagonal covariance matrix diag(Σ) ∈ RM+ . The nonlinearity φ(z) is a ReLU, φ(z)i = max(0, zi), i ∈ {1, . . . ,M}. This specific formulation represents a discrete-time version of firing rate (population) models as used in computational neuroscience (Song et al., 2016; Durstewitz, 2017; Engelken et al., 2020). We will assume that the latent RNN states zt are coupled to the actual observations xt through a simple observation model of the form xt = Bg(zt) + ηt, ηt ∼ N (0,Γ) (2) in the case of observations xt ∈ RN×1, whereB ∈ RN×M is a factor loading matrix, g some (usually monotonic) nonlinear transfer function (e.g., ReLU), and diag(Γ) ∈ RN+ the diagonal covariance matrix of the Gaussian observation noise, or through a softmax function in case of categorical observations xi,t ∈ {0, 1} (see Suppl. 6.1.7 for details). 3.2 REGULARIZATION APPROACH First note that by letting A = I , W = 0, and h = 0 in eq. 1, every point in z space will be a marginally stable fixed point of the system, leading it to perform a perfect integration of external inputs as in parametric working memory (Machens et al., 2005; Brody et al., 2003).1 This is similar in spirit to Le et al. (2015) who initialized RNN parameters such that it performs an identity mapping for zi,t ≥ 0. However, here 1) we use a neuroscientifically motivated network architecture (eq. 1) that enables the identity mapping across the variables’ entire support, zi,t ∈ [−∞,+∞], which we conjecture will be of advantage for establishing long short-term memory properties, 2) we encourage 1Note that this very property of marginal stability required for input integration also makes the system sensitive to noise perturbations directly on the manifold attractor. Interestingly, this property has indeed been observed experimentally for real neural integrator systems (Major et al., 2004; Mizumori & Williams, 1993). this mapping only for a subset Mreg ≤M of units (Fig. S1), leaving others free to perform arbitrary computations, and 3) we stabilize this configuration throughout training by introducing a specific L2 regularization for parameters A, W , and h in eq. 1. When embedded into a larger, (locally) convergent system, we will call this configuration more generally a manifold attractor. 
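To make the basic model concrete, the following minimal sketch implements one step of the latent PLRNN (eq. 1) and the Gaussian observation model (eq. 2). It is an illustration only, not the reference implementation (which is available at the repository linked in sect. 3.4); the function and variable names are ours, and the diagonal/off-diagonal structure of A and W is enforced explicitly.

```python
import torch

def plrnn_step(z_prev, s_t, A_diag, W, C, h, Sigma_diag=None):
    """One step of the latent PLRNN, eq. 1:
    z_t = A z_{t-1} + W relu(z_{t-1}) + C s_t + h + eps_t."""
    W_off = W - torch.diag(torch.diag(W))        # W is constrained to be off-diagonal
    z_t = A_diag * z_prev + W_off @ torch.relu(z_prev) + C @ s_t + h
    if Sigma_diag is not None:                   # Gaussian process noise eps_t ~ N(0, diag(Sigma))
        z_t = z_t + torch.sqrt(Sigma_diag) * torch.randn_like(z_t)
    return z_t

def observe(z_t, B, Gamma_diag=None, g=lambda z: z):
    """Gaussian observation model, eq. 2: x_t = B g(z_t) + eta_t
    (g = identity for the benchmarks, g = relu for DS reconstruction)."""
    x_t = B @ g(z_t)
    if Gamma_diag is not None:                   # observation noise eta_t ~ N(0, diag(Gamma))
        x_t = x_t + torch.sqrt(Gamma_diag) * torch.randn_like(x_t)
    return x_t
```

The regularization introduced in sect. 3.2 acts only on the parameters governing a subset of these units.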
That way, we divide the units into two types, where the regularized units serve as a memory that tends to decay very slowly (depending on the size of the regularization term), while the remaining units maintain the flexibility to approximate any underlying DS, yet retaining the simplicity of the original PLRNN (eq. 1). Specifically, the following penalty is added to the loss function (Fig. S1):

L_reg = \tau_A \sum_{i=1}^{M_{reg}} (A_{i,i} - 1)^2 + \tau_W \sum_{i=1}^{M_{reg}} \sum_{j=1, j \neq i}^{M} W_{i,j}^2 + \tau_h \sum_{i=1}^{M_{reg}} h_i^2    (3)

(Recall from sect. 3.1 that A is a diagonal and W is an off-diagonal matrix.) While this formulation allows us to trade off, for instance, the tendency toward a manifold attractor (A → I, h → 0) vs. the sensitivity to other units’ inputs (W → 0), for all experiments performed here a common value, \tau_A = \tau_W = \tau_h = \tau, was assumed for the three regularization factors. We will refer to (z_1 . . . z_{M_{reg}}) as the regularized (‘memory’) subsystem, and to (z_{M_{reg}+1} . . . z_M) as the non-regularized (‘computational’) subsystem. Note that in the limit τ → ∞ exact manifold attractors would be enforced.

3.3 THEORETICAL ANALYSIS

We will now establish a tight connection between the PLRNN dynamics and its error gradients. Similar ideas appeared in Chang et al. (2019), but these authors focused only on fixed point dynamics, while here we will consider the more general case including cycles of any order. First, note that by interpretability of model eq. 1 we mean that it is easily amenable to a rigorous DS analysis: As shown in Suppl. 6.1.2, we can explicitly determine all the system’s fixed points and cycles and their stability. Moreover, as shown in Monfared & Durstewitz (2020b), we can – under certain conditions – transform the PLRNN into an equivalent continuous-time (ODE) piecewise-linear system, which brings further advantages for DS analysis. Let us rewrite eq. 1 in the form

z_t = F(z_{t-1}) = (A + W D_{\Omega(t-1)}) z_{t-1} + h := W_{\Omega(t-1)} z_{t-1} + h,    (4)

where D_{\Omega(t-1)} is the diagonal matrix of outer derivatives of the ReLU function evaluated at z_{t-1} (see Suppl. 6.1.2), and we ignore external inputs and noise terms for now. Starting from some initial condition z_1, we can recursively develop z_T as (see Suppl. 6.1.2 for more details):

z_T = F^{T-1}(z_1) = \prod_{i=1}^{T-1} W_{\Omega(T-i)} \, z_1 + \Big[ \sum_{j=2}^{T-1} \prod_{i=1}^{j-1} W_{\Omega(T-i)} + I \Big] h.    (5)

Likewise, for some common loss function L(A, W, h) = \sum_{t=2}^{T} L_t, we can recursively develop the derivatives w.r.t. weights w_{mk} (and similar for components of A and h) as

\frac{\partial L}{\partial w_{mk}} = \sum_{t=2}^{T} \frac{\partial L_t}{\partial z_t} \frac{\partial z_t}{\partial w_{mk}}, with \frac{\partial z_t}{\partial w_{mk}} = \mathbb{1}_{(m,k)} D_{\Omega(t-1)} z_{t-1} + \sum_{j=2}^{t-2} \Big( \prod_{i=1}^{j-1} W_{\Omega(t-i)} \Big) \mathbb{1}_{(m,k)} D_{\Omega(t-j)} z_{t-j} + \prod_{i=1}^{t-2} W_{\Omega(t-i)} \frac{\partial z_2}{\partial w_{mk}},    (6)

where \mathbb{1}_{(m,k)} is an M × M indicator matrix with a 1 for the (m, k)’th entry and 0 everywhere else. Observing that eqs. 5 and 6 contain similar product terms which determine the system’s long-term behavior, our first theorem links the PLRNN dynamics to its total error gradients:

Theorem 1. Consider a PLRNN given by eq. 4, and assume that it converges to a stable fixed point, say z_{t^{*1}} := z^{*1}, or a k-cycle (k > 1) with the periodic points {z_{t^{*k}}, z_{t^{*k}-1}, · · · , z_{t^{*k}-(k-1)}}, for T → ∞. Suppose that, for k ≥ 1 and i ∈ {0, 1, · · · , k − 1}, \sigma_{max}(W_{\Omega(t^{*k}-i)}) = \| W_{\Omega(t^{*k}-i)} \| < 1, where W_{\Omega(t^{*k}-i)} denotes the Jacobian of the system at z_{t^{*k}-i} and \sigma_{max} indicates the largest singular value of a matrix. Then, the 2-norms of the tensors collecting all derivatives, \| \partial z_T / \partial W \|_2, \| \partial z_T / \partial A \|_2, \| \partial z_T / \partial h \|_2, will be bounded from above, i.e. will not diverge for T → ∞.

Proof. See Suppl. sect. 6.1 (subsection 6.1.3).
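Before turning to the companion result for the regularized system, here is a minimal sketch of the penalty in eq. 3, using the same illustrative names as the model sketch above and a single shared factor τ as in the experiments:

```python
import torch

def manifold_attractor_penalty(A_diag, W, h, M_reg, tau):
    """Regularization term of eq. 3 for the first M_reg ('memory') units,
    assuming a common factor tau = tau_A = tau_W = tau_h."""
    W_off = W - torch.diag(torch.diag(W))
    loss_A = ((A_diag[:M_reg] - 1.0) ** 2).sum()   # push A_ii -> 1 for the memory units
    loss_W = (W_off[:M_reg, :] ** 2).sum()         # push their off-diagonal rows -> 0
    loss_h = (h[:M_reg] ** 2).sum()                # push their biases -> 0
    return tau * (loss_A + loss_W + loss_h)
```

This term is simply added to the task- or likelihood-based loss (cf. sect. 3.4); in the limit τ → ∞ it would enforce exact manifold attractors in the regularized subspace.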
While Theorem 1 is a general statement about PLRNN dynamics and total gradients, our next theorem more specifically provides conditions under which Jacobians linking temporally distant states zT and zt, T t, will neither vanish nor explode in the regularized PLRNN: Theorem 2. Assume a PLRNN with matrix A + W partitioned as in Fig. S1, i.e. with the first Mreg rows corresponding to those of an M ×M identity matrix. Suppose that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1, i.e. converges to a kcycle with k ≥ 1. Then, for the full system (z1 . . . zM ), the 2-norm of the Jacobians connecting temporally distal states zT and zt will be bounded from above and below for all T > t, i.e. ∞ > ρup ≥ ∥∥∥∂zT∂zt ∥∥∥2 = ∥∥∥∏t<k≤T WΩ(k)∥∥∥2 ≥ ρlow > 0. In particular, for state variables ziT and zjt such that i ∈ {Mreg + 1, · · · ,M} and j ∈ {1, · · · ,Mreg}, i.e. that connect states from the ‘memory’ to those of the ‘computational’ subsystem, one also has∞ > λup ≥ ∣∣∣∂ziT∂zjt ∣∣∣ ≥ λlow > 0 as T − t→∞, i.e. these derivatives will never vanish nor explode. Proof. See Suppl. sect. 6.1 (subsection 6.1.4). The bounds ρup, ρlow, λup, λlow, are given in Suppl. sect. 6.1.4. We remark that when the regularization conditions are not exactly met, i.e. when parametersA andW slightly deviate from those in Fig. S1, memory (and gradients) may ultimately dissipate, but only very slowly, as actually required for temporal processes with very slow yet not infinite time constants (Fig. 1B). 3.4 TRAINING PROCEDURES For the (supervised) machine learning problems, all networks were trained by stochastic gradient descent (SGD) to minimize the squared-error loss between estimated and actual outputs for the addition and multiplication problems, and the cross entropy loss for sequential MNIST (see Suppl. 6.1.7). Adam (Kingma & Ba, 2014) from PyTorch package (Paszke et al., 2017) was used as the optimizer, with a learning rate of 0.001, gradient clip parameter of 10, and batch size of 500. SGD was stopped after 100 epochs and the fit with the lowest loss across all epochs was taken, except for LSTM which was allowed to run for up to 200 epochs as it took longer to converge (Fig. S10). For comparability, the PLRNN latent state dynamics eq. 1 was assumed to be deterministic in this setting (i.e., Σ = 0), g(zt) = zt and Γ = IN in eq. 2. For the regularized PLRNN (rPLRNN), penalty eq. 3 was added to the loss function. For the (unsupervised) DS reconstruction problems, the fully probabilistic, generative RNN eq. 1 was considered. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear state space model (Durbin & Koopman, 2012) with observation and process noise, and an Expectation-Maximization (EM) algorithm that efficiently exploits the model’s piecewise linear structure (Durstewitz, 2017; Koppe et al., 2019) was used to solve for the parameters by maximum likelihood. Details are given in Suppl. 6.1.5. All code used here will be made openly available at https://github.com/DurstewitzLab/reg-PLRNN. 3.5 PERFORMANCE MEASURES For the machine learning benchmarks we employed the same criteria as used for optimization (MSE or cross-entropy, Suppl. 6.1.7) as performance metrics, evaluated across left-out test sets. In addition, we report the relative frequency Pcorrect of correctly predicted trials across the test set (see Suppl. 6.1.7 for details). 
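For the supervised benchmarks, the training loop implied by sect. 3.4 is standard; the schematic below shows where the penalty enters, using the reported settings (Adam, learning rate 0.001, gradient clipping at 10, M = 40 with M_reg/M = 0.5). It reuses the penalty sketch above; `loader` stands for an iterator over batches of 500 (input, target) pairs (see Suppl. 6.1.7), the initialization scales and τ value are illustrative assumptions, and the names are ours rather than those of the released code.

```python
import torch

def rollout(S, A_diag, W, C, h, B):
    """Unroll the deterministic PLRNN (eqs. 1-2 with Sigma = 0, g = identity)
    over an input batch S of shape (batch, K, T); read out at the last step."""
    batch, K, T = S.shape
    z = torch.zeros(batch, A_diag.shape[0])
    W_off = W - torch.diag(torch.diag(W))
    for t in range(T):
        z = z * A_diag + torch.relu(z) @ W_off.T + S[:, :, t] @ C.T + h
    return z @ B.T                                   # (batch, N) outputs at time T

M, M_reg, K, N, tau = 40, 20, 2, 1, 0.1              # sizes as in sect. 4.1; tau illustrative
A_diag = torch.rand(M, requires_grad=True)
W = (0.01 * torch.randn(M, M)).requires_grad_()
C = (0.01 * torch.randn(M, K)).requires_grad_()
h = torch.zeros(M, requires_grad=True)
B = (0.01 * torch.randn(N, M)).requires_grad_()
params = [A_diag, W, C, h, B]

optimizer = torch.optim.Adam(params, lr=1e-3)        # optimizer settings from sect. 3.4
for S_batch, X_batch in loader:                      # batches of 500 trials
    X_hat = rollout(S_batch, A_diag, W, C, h, B)
    loss = ((X_hat - X_batch) ** 2).mean()           # squared-error loss (addition/multiplication)
    loss = loss + manifold_attractor_penalty(A_diag, W, h, M_reg, tau)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, max_norm=10.0)
    optimizer.step()
```

For sequential MNIST the squared-error term is replaced by the cross-entropy loss of Suppl. 6.1.7; for DS reconstruction, parameters are instead estimated with the EM algorithm of Suppl. 6.1.5.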
For DS reconstruction problems, it is not sufficient or even sensible to judge a method’s ability to infer the underlying DS purely based on some form of (ahead-)prediction error like the MSE defined on the time series itself (Ch.12 in Kantz & Schreiber (2004)). Rather, we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties. This is not automatically guaranteed for a model that yields agreeable ahead predictions on a time series (Fig. S2A; cf. Koppe et al. (2019); Wood (2010)). We therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between true and reproduced probability distributions across states in state space to quantify how well an inferred PLRNN captured the underlying dynamics, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. (1991)) (see Suppl. 6.1.6 for more details). 4 NUMERICAL EXPERIMENTS 4.1 MACHINE LEARNING BENCHMARKS Although not our prime interest here, we first examined how the rPLRNN would fare on supervised machine learning benchmarks where inputs (S) are to be mapped onto target outputs (X) across long time spans (i.e., requiring long short-term maintenance of information), namely the addition and multiplication problems (Talathi & Vartak, 2016; Hochreiter & Schmidhuber, 1997), and sequential MNIST (LeCun et al., 2010). Details of these experimental setups are in Suppl. 6.1.7. Performance of the rPLRNN (eq. 1, eq. 3) on all 3 benchmarks was compared to several other models summarized in Suppl. Table 1. To achieve a meaningful comparison, all models have the same number M = 40 (based on Fig. S3) of hidden states (which gives LSTMs overall about 4 times as many trainable parameters). On all three problems the rPLRNN outperforms all other tested methods, including LSTM, iRNN (RNN initialized by the identity matrix as in Le et al. (2015)), and a version of the orthogonal RNN (oRNN; Vorontsov et al. (2017)) (similar results were obtained for other settings of M and batch size). LSTM performs even worse than iRNN and iPLRNN (PLRNN initialized with the identity as the iRNN), although it had 4 times as many parameters and was given twice as many epochs (and thus opportunities) for training, as it also took longer to converge (Fig. S10). In addition, the iPLRNN tends to perform slightly better than the iRNN on all three problems, suggesting that the specific structure eq. 1 of the PLRNN that allows for a manifold attractor across the variables’ full range may be advantageous to begin with, while the regularization further improves performance. 4.2 NUMERICAL EXPERIMENTS ON DYNAMICAL SYSTEMS WITH DIFFERENT TIME SCALES While it is encouraging that the rPLRNN may perform even better than several previous approaches to the vanishing vs. exploding gradients problem, our major goal here was to examine whether our regularization scheme would help with the (unsupervised) identification of DS that harbor widely different time scales. To test this, we used a biophysical, bursting cortical neuron model with one voltage (V ) and two conductance recovery variables (see Durstewitz (2009)), one slow (h) and one fast (n; Suppl. 6.1.8). Reproduction of this DS is challenging since it produces very fast spikes on top of a slow nonlinear oscillation (Fig. 3D). Only short time series (as in scientific data) of length T = 1500 from this model were provided for training. rPLRNNs with M = {8 . . . 
18} states were trained, with the regularization factor varied within τ ∈ {0, 101, 102, 103, 104, 105}/T . Note that for τ = 0 (no regularization), the approach reduces to the standard PLRNN (Koppe et al., 2019). Fig. 3A confirms our intuition that stronger regularization leads to better DS reconstruction as assessed by the KL divergence between true and generated state distributions (similar results were obtained with ahead-prediction errors as a metric, Fig. S4A), accompanied by a likewise decrease in the MSE between the power spectra of true (suppl. eq. 55) and generated (rPLRNN) voltage traces (Fig. 3B). Fig. 3D gives an example of voltage traces (V ) and the slower of the two gating variables (h; see Fig. S5A for variable n) freely simulated (i.e., sampled) from the autonomously running rPLRNN. This illustrates that our model is in principle capable of capturing both the stiff spike dynamics and the slower oscillations in the second gating variable at the same time. Fig. 3C provides more insight into how the regularization worked: While the high frequency components (> 50 Hz) related to the repetitive spiking activity hardly benefited from increasing τ , there was a strong reduction in the MSE computed on the power spectrum for the lower frequency range (≤ 50 Hz), suggesting that increased regularization helps to map slowly evolving components of the dynamics. This result is more general as shown in Fig. S6 for another DS example. In contrast, an orthogonality (Vorontsov et al., 2017) or plain L2 constraint on weight matrices did not help at all on this problem (Fig. S4B). Further insight into the dynamical mechanisms by which the rPLRNN solves the problem can be obtained by examining the latent dynamics: As shown in Fig. 3E (see also Fig. S5), regularized states indeed help to map the slow components of the dynamics, while non-regularized states focus on the fast spikes. These observations further corroborate the findings in Fig. 3C and Fig. S6C. 4.3 REGULARIZATION PROPERTIES AND MANIFOLD ATTRACTORS In Figs. 2 and 3 we demonstrated that the rPLRNN is able to solve problems and reconstruct dynamics that involve long-range dependencies. Figs. 3A,B furthermore directly confirm that solutions improve with stronger regularization, while Figs. 3C,E give insight into the mechanism by which the regularization works. To further verify empirically that our specific form of regularization, eq. 3, is important, Fig. 2 also shows results for a PLRNN with standard L2 norm on a fraction of Mreg/M = 0.5 states (L2pPLRNN). Fig. S7 provides additional results for PLRNNs with L2 norm on all weights and for vanilla L2-regularized RNNs. All these systems fell far behind the performance of the rPLRNN on all tasks tested. Moreover, Fig. 4 reveals that the specific regularization proposed indeed encourages manifold attractors, and that this is not achieved by a standard L2 regularization: In contrast to L2PLRNN, as the regularization factor τ is increased, more and more of the maximum absolute eigenvalues around the system’s fixed points (computed according to eq. 8, sect. 6.1.2) cluster on or near 1, indicating directions of marginal stability in state space. Also, the deviations from 1 become smaller for strongly regularized PLRNNs (Fig. 4B,D), indicating a higher precision in attractor tuning. Fig. S9 in addition confirms that rPLRNN parameters are increasingly driven toward values that would support manifold attractors with stronger regularization. Fig. 
3E furthermore suggests that both regularized and non-regularized states are utilized to map the full dynamics. But how should the ratio Mreg/M be chosen in practice? While for the problems here this meta-parameter was determined through ‘classical’ grid-search and cross-validation, Figs. S3 C – E suggest that the precise setting of Mreg/M is actually not overly important: Nearly optimal performance is achieved for a broader range Mreg/M ∈ [0.3, 0.6] on all problems tested. Hence, in practice, setting Mreg/M = 0.5 should mostly work fine. 5 CONCLUSIONS In this work we introduced a simple solution to the long short-term memory problem in RNNs that retains the simplicity and tractability of PLRNNs, yet does not curtail their universal computational capabilities (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D’Eleuterio, 2016). We achieved this by adding regularization terms to the loss function that encourage the system to form a ‘memory subspace’ (Seung, 1996; Durstewitz, 2003) which would store arbitrary values for, if unperturbed, arbitrarily long periods. At the same time we did not rigorously enforce this constraint, which allowed the system to capture slow time scales by slightly departing from a perfect manifold attractor. In neuroscience, this has been discussed as a dynamical mechanism for regulating the speed of flow in DS and learning of arbitrary time constants not naturally included qua RNN design (Durstewitz, 2003; 2004) (Fig. 1B). While other RNN architectures, including vanilla RNNs, can, in principle, also develop line attractors to solve specific tasks (Maheswaranathan et al., 2019), they are generally much harder to train to achieve this and may exhibit less precise attractor tuning (cf. Fig. 4), which is needed to bridge long time scales (Durstewitz, 2003). Moreover, part of the PLRNN’s latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics (see also Fig. S11 for a chaotic example). We showed that the rPLRNN is en par with or outperforms initialization-based approaches, orthogonal RNNs, and LSTMs on a number of classical benchmarks. More importantly, however, the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction. Similar regularization schemes as proposed here (eq. 3) may, in principle, also be designed for other architectures, but the convenient mathematical form of the PLRNN makes their implementation particularly powerful and straightforward. ACKNOWLEDGEMENTS This work was funded by grants from the German Research Foundation (DFG) to DD (Du 354/10-1, Du 354/8-2 within SPP 1665) and to GK (TRR265: A06 & B08), and under Germany’s Excellence Strategy – EXC-2181 – 390900948 (’Structures’). 6 APPENDIX 6.1 SUPPLEMENTARY TEXT 6.1.1 Simple exact PLRNN solution for addition problem The exact PLRNN parameter settings (cf. eq. 1, eq. 2) for solving the addition problem with 2 units (cf. Fig. 1C) are as follows: A = ( 1 0 0 0 ) ,W = ( 0 1 0 0 ) ,h = ( 0 −1 ) ,C = ( 0 0 1 1 ) ,B = (1 0) (7) 6.1.2 Computation of fixed points and cycles in PLRNN Consider the PLRNN in the form of eq. 4. 
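Before working through the fixed-point computations of this section, the hand-constructed parameters of Suppl. 6.1.1 (eq. 7) can be checked numerically: simulating the noise-free two-unit PLRNN on a random addition trial, the readout at time T reproduces the sum of the two marked inputs. The script below is an illustrative check under the assumptions z_0 = 0 and g(z) = z.

```python
import numpy as np

# exact parameters from suppl. eq. 7
A = np.array([[1., 0.], [0., 0.]])
W = np.array([[0., 1.], [0., 0.]])
h = np.array([0., -1.])
C = np.array([[0., 0.], [1., 1.]])
B = np.array([1., 0.])

def addition_trial(T, rng):
    """One trial of the addition problem (cf. Suppl. 6.1.7)."""
    s1 = rng.uniform(0., 1., T)
    s2 = np.zeros(T)
    t1, t2 = rng.integers(0, 10), rng.integers(10, T // 2)   # markers: t1 < 10, t2 < T/2
    s2[[t1, t2]] = 1.
    return s1, s2, s1[t1] + s1[t2]

rng = np.random.default_rng(0)
s1, s2, target = addition_trial(T=100, rng=rng)
z = np.zeros(2)
for t in range(100):
    s_t = np.array([s1[t], s2[t]])
    z = A @ z + W @ np.maximum(z, 0.) + C @ s_t + h          # noise-free eq. 1
print(B @ z, target)   # unit 2 gates the marked values, unit 1 integrates them: the two numbers agree
```

We now return to the general computation of fixed points and cycles for eq. 4.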
For clarity, let us define dΩ(t) := (d1, d2, · · · , dM ) as an indicator vector with dm(zm,t) := dm = 1 for all states zm,t > 0 and zeros otherwise, and DΩ(t) := diag(dΩ(t)) as the diagonal matrix formed from this vector. Note that there are at most 2M distinct matricesWΩ(t) as defined in eq. 4, depending on the sign of the components of zt. If h = 0 and WΩ(t) is the identity matrix, then the map F becomes the identity map and so every point z will be a fixed point of F . Otherwise, the fixed points of F can be found solving the equation F (z∗1) = z∗1 as z∗1 = (I −WΩ(t∗1))−1 h = H∗1 h, (8) where z∗1 = zt∗1 = zt∗1−1, if det(I − WΩ(t∗1)) = PWΩ(t∗1)(1) 6= 0, i.e. WΩ(t∗1) has no eigenvalue equal to 1. Stability and type of fixed points (node, saddle, spiral) can then be determined from the eigenvalues of the JacobianA+WDΩ(t∗1) = WΩ(t∗1) (Strogatz (2015)). For k > 1, solving F k(z∗k) = z∗k, one can obtain a k-cycle of the map F with the periodic points {z∗k, F (z∗k), F 2(z∗k), · · · , F k−1(z∗k)}. For this, we first compute F k as follows: zt = F (zt−1) = WΩ(t−1) zt−1 + h, zt+1 = F 2(zt−1) = F (zt) = WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t) + I ) h, zt+2 = F 3(zt−1) = F (zt+1) = WΩ(t+1)WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t+1)WΩ(t) +WΩ(t+1) + I ) h, ... zt+(k−1) = F k(zt−1) = k+1∏ i=2 WΩ(t+(k−i)) zt−1 + [ k∑ j=2 k−j+2∏ i=2 WΩ(t+(k−i)) + I ] h, (9) in which ∏k+1 i=2 WΩ(t+(k−i)) = WΩ(t+(k−2))WΩ(t+(k−3)) · · · WΩ(t−1). Assuming t+(k−1) := t∗k, then the k-cycle is given by the fixed point of the k-times iterated map F k as z∗k = ( I − k∏ i=1 WΩ(t∗k−i) )−1 [ k∑ j=2 k−j+1∏ i=1 WΩ(t∗k−i) + I ] h = H∗k h, (10) where z∗k = zt∗k = zt∗k−k, provided that I − ∏k i=1WΩ(t∗k−i) is invertible. That is det ( I − ∏k i=1WΩ(t∗k−i) ) = P∏k i=1WΩ(t∗k−i) (1) 6= 0 and ∏k i=1WΩ(t∗k−i) := WΩ∗k has no eigenvalue equal to 1. As for the fixed points, we can determine stability of the k-cycle from the eigenvalues of the Jacobians ∏k i=1WΩ(t∗k−i). It may also be helpful to spell out the recursions in eq. 5 and eq. 6 in section 3.3 in a bit more detail. Analogously to the derivations above, for t = 1, 2, . . . , T we can recursively compute z2, z3, . . . ,zT (T ∈ N) as z2 = F (z1) = WΩ(1) z1 + h, z3 = F 2(z1) = F (z2) = WΩ(2)WΩ(1) z1 + ( WΩ(2) + I ) h, ... zT = F T−1(z1) = F (zT−1) = WΩ(T−1)WΩ(T−2) · · ·WΩ(1) z1 + ( WΩ(T−1)WΩ(T−2) · · ·WΩ(2) +WΩ(T−1)WΩ(T−2) · · ·WΩ(3) + · · ·+WΩ(T−1) + I ) h = T−1∏ i=1 WΩ(T−i) z1 + [ T−2∑ j=1 T−j−1∏ i=1 WΩ(T−i) + I ] h = T−1∏ i=1 WΩ(T−i) z1 + [ T−1∑ j=2 j−1∏ i=1 WΩ(T−i) + I ] h. (11) Likewise, we can write out the derivatives eq. 6 more explicitly as ∂zt ∂wmk = ∂F (zt−1) ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) )∂zt−1 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2) zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )∂zt−2 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2)zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) ) 1(m,k)DΩ(t−3)zt−3 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )( A+WDΩ(t−3) )∂zt−3 ∂wmk = · · · = 1(m,k)DΩ(t−1) zt−1 + t−2∑ j=2 ( j−1∏ i=1 WΩ(t−i) ) 1(m,k)DΩ(t−j) zt−j + t−2∏ i=1 WΩ(t−i) ∂z2 ∂wmk (12) where ∂z2∂wmk = ( ∂z1,2 ∂wmk · · · ∂zM,2∂wmk ) with ∂zl,2 ∂wmk = 0∀ l 6= m and ∂zm,2∂wmk = dkzk,1. The derivatives w.r.t. the elements ofA and h can be expanded in a similar way, only that the termsDΩ(t) zt on the last line of eq. 12 need to be replaced by just zt for ∂zt∂amm , and by just a vector of 1’s for ∂zt ∂hm (also, in these cases, the indicator matrix will be the diagonal matrix 1(m,m)). 
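The fixed-point formula eq. 8 is straightforward to turn into a small search over ReLU configurations; for low-dimensional systems one can simply enumerate all 2^M candidate regions. The sketch below does this for k = 1 (fixed points); cycles are handled analogously via eq. 10. It is a simplified illustration: boundary cases with z_i = 0 and degenerate determinants are simply skipped.

```python
import numpy as np
from itertools import product

def plrnn_fixed_points(A, W, h):
    """Candidate fixed points of z -> A z + W relu(z) + h via eq. 8.
    For each ReLU configuration D_Omega, solve (I - W_Omega) z* = h with
    W_Omega = A + W D_Omega, and keep z* only if its sign pattern matches
    the assumed configuration. Returns (fixed point, spectral radius) pairs."""
    M = len(h)
    I = np.eye(M)
    found = []
    for d in product([0.0, 1.0], repeat=M):
        W_om = A + W @ np.diag(d)
        if abs(np.linalg.det(I - W_om)) < 1e-12:     # W_Omega has an eigenvalue 1: skip
            continue
        z_star = np.linalg.solve(I - W_om, h)
        if np.all((z_star > 0) == np.array(d, dtype=bool)):
            found.append((z_star, np.max(np.abs(np.linalg.eigvals(W_om)))))
    return found
```

The spectral radius of W_Omega at each point determines its stability; values clustering on or near 1, as reported in Fig. 4 for strongly regularized PLRNNs, indicate directions of marginal stability.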
6.1.3 Proof of Theorem 1 To state the proof, let us rewrite the derivatives of the loss function L(W ,A,h) = ∑T t=1 Lt in the following tensor form: ∂L ∂W = T∑ t=1 ∂Lt ∂W , where ∂Lt ∂W = ∂Lt ∂zt ∂zt ∂W , (13) for which the 3D tensor ∂zt ∂W = ∂z1,t ∂W ∂z2,t ∂W ... ∂zM,t ∂W (14) of dimension M ×M ×M , consists of all the gradient matrices ∂zi,t ∂W = ∂zi,t ∂w11 ∂zi,t ∂w12 · · · ∂zi,t∂w1M ∂zi,t ∂w21 ∂zi,t ∂w22 · · · ∂zi,t∂w2M ... ∂zi,t ∂wM1 ∂zi,t ∂wM2 · · · ∂zi,t∂wMM := ∂zi,t ∂w1∗ ∂zi,t ∂w2∗ ... ∂zi,t ∂wM∗ , i = 1, 2, · · · ,M, (15) where wi∗ ∈ RM is a row-vector. Now, suppose that {z1, z2, z3, . . .} is an orbit of the system which converges to a stable fixed point, i.e. lim T→∞ zT = z ∗k. Then lim T→∞ zT = lim T→∞ ( WΩ(T−1) zT−1 + h ) = z∗1 = WΩ(t∗1) z ∗1 + h, (16) and so lim T→∞ ( WΩ(T−1) ) z∗1 = WΩ(t∗1) z ∗1. (17) Assume that lim T→∞ ( WΩ(T−1) ) = L. Since eq. 17 holds for every z∗1, then substituting z∗1 = eT1 = (1, 0, · · · , 0)T in eq. 17, we can prove that the first column of L equals the first column of WΩ(t∗1). Performing the same procedure for z∗1 = eTi , i = 2, 3, · · · ,M , yields lim T→∞ WΩ(T−1) = WΩ(t∗1). (18) Also, for every i ∈ N (1 < i <∞) lim T→∞ WΩ(T−i) = WΩ(t∗1), (19) i.e. ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ ≤ . (20) Thus, ∥∥WΩ(T−i)∥∥− ∥∥WΩ(t∗1)∥∥ ≤ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ gives ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ . (21) Since T − 1 > T − 2 > · · · > T − i ≥ N , so ∀ > 0 ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ , i = 1, 2, · · · , T −N. (22) Hence ∀ > 0 ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ T−N∏ i=1 ∥∥WΩ(T−i)∥∥ ≤ (∥∥WΩ(t∗1)∥∥+ )T−N . (23) If ∥∥WΩ(t∗1)∥∥ < 1, then for any < 1, considering ̄ ≤ +‖WΩ(t∗1)‖2 < 1, it is concluded that∥∥∥∥∥ limT→∞ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ = limT→∞ ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ limT→∞(∥∥WΩ(t∗1)∥∥+ ̄)T−N = 0. (24) Therefore lim T→∞ T−1∏ i=1 WΩ(T−i) = 0. (25) If the orbit {z1, z2, z3, . . .} tends to a stable k-cycle (k > 1) with the periodic points {F k(z∗k), F k−1(z∗k), F k−2(z∗k), · · · , F (z∗k)} = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1)}, then, denoting the stable k-cycle by Γk = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1), zt∗k , zt∗k−1, · · · , zt∗k−(k−1), · · · }, (26) we have lim T→∞ d(zT ,Γk) = 0. (27) Hence, there exists a neighborhood U of Γk and k sub-sequences {zTkn}∞n=1, {zTkn+1}∞n=1, · · · , {zTkn+(k−1)}∞n=1 of the sequence {zT }∞T=1 such that these sub-sequences belong to U and (i) zTkn+s = F k(zTk(n−1)+s), s = 0, 1, 2, · · · , k − 1, (ii) lim T→∞ zTkn+s = zt∗k−s, s = 0, 1, 2, · · · , k − 1, (iii) for every zT ∈ U there is some s ∈ {0, 1, 2, · · · , k − 1} such that zT ∈ {zTkn+s}∞n=1. In this case, for every zT ∈ U with zT ∈ {zTkn+s}∞n=1 we have lim T→∞ zT = zt∗k−s for some s = 0, 1, 2, · · · , k − 1. Therefore, continuity of F implies that lim T→∞ F (zT ) = F (zt∗k−s) and so lim T→∞ ( WΩ(T ) zT + h ) = WΩ(t∗k−s) zt∗k−s + h. (28) Thus, similarly, we can prove that ∃ s ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T ) = WΩ(t∗k−s). (29) Analogously, for every i ∈ N (1 < i <∞) ∃ si ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T−i) = WΩ(t∗k−si), (30) On the other hand, ∥∥WΩ(t∗k−si)∥∥ < 1 for all si ∈ {0, 1, 2, · · · , k − 1}. So, without loss of generality, assuming max 0≤si≤k−1 {∥∥WΩ(t∗k−si)∥∥} = ∥∥WΩ(t∗k)∥∥ < 1, (31) we can again obtain some relations similar to eq. 23-eq. 25 for t∗k, k ≥ 1. Since {zT−1}∞T=1 is a convergent sequence, so it is bounded, i.e. there exists a real number q > 0 such that ||zT−1|| ≤ q for all T ∈ N. Furthermore, ∥∥DΩ(T−1)∥∥ ≤ 1 for all T . Therefore, by eq. 12 and eq. 
23 (for t∗k, k ≥ 1)∥∥∥∥ ∂zT∂wmk ∥∥∥∥ = ∣∣∣∣∣ ∣∣∣∣∣1(m,k)DΩ(T−1) zT−1 + T−1∑ j=2 ( j−1∏ i=1 WΩ(T−i) ) 1(m,k)DΩ(T−j) zT−j + T−1∏ i=1 WΩ(T−i) DΩ(1) z1 ∣∣∣∣∣ ∣∣∣∣∣ (32) ≤ ‖zT−1‖+ [ T−1∑ j=2 ∥∥∥∥∥ j−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖zT−j‖ ] + ∥∥∥∥∥ T−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖z1‖ ≤ q ( 1 + T−1∑ j=2 (∥∥WΩ(t∗k)∥∥+ ̄)j−1 )+ (∥∥WΩ(t∗k)∥∥+ ̄)T−1 ‖z1‖ . (33) Thus, by ∥∥WΩ(t∗k)∥∥+ ̄ < 1, we have lim T→∞ ∥∥∥∥ ∂zT∂wmk ∥∥∥∥ ≤ q(1 + ∥∥WΩ(t∗k)∥∥+ ̄ 1− ∥∥WΩ(t∗k)∥∥− ̄ ) =M <∞, (34) i.e., by eq. 14 and eq. 15, the 2-norm of total gradient matrices and hence ∥∥ ∂zt ∂W ∥∥ 2 will not diverge (explode) under the assumptions of Theorem 1. Analogously, we can prove that ∥∥∂zT ∂A ∥∥ 2 and ∥∥∂zT ∂h ∥∥ 2 will not diverge either. Since, similar as in the derivations above, it can be shown that relation eq. 34 is true for ∥∥∥ ∂zT∂amm ∥∥∥ with q = q̄, where q̄ is the upper bound of ‖zT ‖, as {zT }∞T=1 is convergent. Furthermore, relation eq. 34 also holds for∥∥∥ ∂zT∂hm ∥∥∥ with q = 1. Remark 2.1. By eq. 24 the Jacobian parts ∥∥∥∂zT∂zt ∥∥∥2 connecting any two states zT and zt, T > t, will not diverge either. Corollary 2.1. The results of Theorem 1 are also true ifWΩ(t∗k) is a normal matrix with no eigenvalue equal to one. Proof. If WΩ(t∗k) is normal, then ∥∥WΩ(t∗k)∥∥ = ρ(WΩ(t∗k)) < 1 which satisfies the conditions of Theorem 1. 6.1.4 Proof of Theorem 2 LetA,W andDΩ(k), t < k ≤ T , be partitioned as follows A = ( Ireg O T O Anreg ) , W = ( Oreg O T S Wnreg ) , DΩ(k) = ( Dkreg O T O Dknreg ) , (35) where IMreg×Mreg := Ireg ∈ RMreg×Mreg ,OMreg×Mreg := Oreg ∈ RMreg×Mreg , O,S ∈ R(M−Mreg)×Mreg , A{Mreg+1:M,Mreg+1:M} := Anreg ∈ R(M−Mreg)×(M−Mreg) is a diagonal submatrix,W{Mreg+1:M,Mreg+1:M} := Wnreg ∈ R(M−Mreg)×(M−Mreg) is an off-diagonal sub-matrix (cf. Fig. S1). Moreover, DkMreg×Mreg := D k reg ∈ RMreg×Mreg and Dk{Mreg+1:M,Mreg+1:M} := Dknreg ∈ R(M−Mreg)×(M−Mreg) are diagonal sub-matrices. Then, we have ∏ t<k≤T WΩ(k) = ∏ t<k≤T ( Ireg O T SDkreg Anreg +WnregD k nreg ) := ∏ t<k≤T ( Ireg O T SDkreg W k nreg ) = ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg. ) (36) Therefore, considering the 2-norm, we obtain∥∥∥∥∂zT∂zt ∥∥∥∥ = ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∥ ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg )∥∥∥∥∥ <∞. (37) Moreover 1 ≤ max{1, ρ(WT−t)} = ρ ( ∏ t<k≤T WΩ(k) ) ≤ ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∂zT∂zt ∥∥∥∥ (38) where WT−t := ∏ t<k≤T W k nreg . Therefore, eq. 37 and eq. 38 yield 1 ≤ ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ ≤ ρup <∞. Furthermore, we assumed that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1. Hence, similar to the proof of Theorem 1, it is concluded that lim T→∞ T∏ k=t W knreg = Onreg. (39) On the other hand, by definition ofDΩ(k), for every t < k ≤ T , we have ∥∥Dkreg∥∥ ≤ 1 and so∥∥SDkreg∥∥ ≤ ‖S‖ ∥∥Dkreg∥∥ ≤ ‖S‖ , (40) which, in accordance with the the assumptions of Theorem 1, by convergence of∑T j=2 ∏t+j−1 k=t+1 ∥∥W knreg∥∥ implies lim T→∞ ∥∥∥∥∥∥SDt+1reg + T∑ j=2 ( t+j−1∏ k=t+1 W knreg ) SDt+jreg ∥∥∥∥∥∥ ≤ ‖S‖ ( 1 + lim T→∞ T∑ j=2 t+j−1∏ k=t+1 ∥∥W knreg∥∥) ≤ ‖S‖Mnreg. (41) Thus, denoting Q := SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg SD t+j reg ) , from eq. 41 we deduce that λmax ( lim T→∞ (QTQ) ) = lim T→∞ ρ(QTQ) ≤ lim T→∞ ∥∥QTQ∥∥ = lim T→∞ ‖Q‖2 ≤ ( ‖S‖Mnreg )2 . (42) Now, if T − t tends to∞, then eq. 37, eq. 39 and eq. 42 result in 1 = ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ = σmax( ( Ireg O T Q Onreg )) = √ λmax(Ireg + lim T→∞ (QTQ)) = ρup < ∞. (43) Remark 2.2. If ‖S‖ = 0, then ∥∥∥∂zT∂zt ∥∥∥→ 1 as T − t→∞. 
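The two theorems can also be illustrated numerically: taking products of Jacobians W_Omega(k) = A + W D_Omega(k) along simulated trajectories, the norm decays toward zero for a contracting vanilla PLRNN (cf. eq. 25), while it stays bounded away from zero and infinity once the first M_reg rows of A and W are set to the identity/zero configuration of Fig. S1. The snippet below is a rough check with hand-picked parameter ranges, not an experiment from the paper.

```python
import numpy as np

def jacobian_product_norm(A, W, T, rng):
    """2-norm of prod_k W_Omega(k) along a trajectory of z -> A z + W relu(z)."""
    z = rng.standard_normal(A.shape[0])
    J = np.eye(A.shape[0])
    for _ in range(T):
        D = np.diag((z > 0).astype(float))
        J = (A + W @ D) @ J
        z = A @ z + W @ np.maximum(z, 0.0)
    return np.linalg.norm(J, 2)

rng = np.random.default_rng(0)
M, M_reg, T = 20, 10, 200
A = np.diag(rng.uniform(0.2, 0.6, M))                 # contracting diagonal
W = 0.02 * rng.standard_normal((M, M))
np.fill_diagonal(W, 0.0)                              # off-diagonal coupling

A_reg, W_reg = A.copy(), W.copy()
A_reg[np.arange(M_reg), np.arange(M_reg)] = 1.0       # regularized rows: A_ii = 1
W_reg[:M_reg, :] = 0.0                                # ... and W rows = 0 (cf. Fig. S1)

print(jacobian_product_norm(A, W, T, rng))            # decays toward 0 (product of contracting Jacobians, eq. 25)
print(jacobian_product_norm(A_reg, W_reg, T, rng))    # stays of order 1: bounded above and below (Theorem 2)
```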
6.1.5 Details on EM algorithm and DS reconstruction For DS reconstruction we request that the latent RNN approximates the true generating system of equations, which is a taller order than learning the mapping S → X or predicting future values in a time series (cf. sect. 3.5).2 This point has important implications for the design of models, inference algorithms and performance metrics if the primary goal is DS reconstruction rather than ‘mere’ time series forecasting.3 In this context we consider the fully probabilistic, generative RNN eq. 1. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear 2By reconstructing the governing equations we mean their approximation in the sense of the universal approximation theorems for DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998), i.e. such that the behavior of the reconstructed system becomes dynamically equivalent to that of the true underlying system. 3In this context we also remark that models which include longer histories of hidden activations (Yu et al., 2019), as in many statistical time series models (Fan & Yao, 2003), are not formally valid DS models anymore since they violate the uniqueness of flow in state space (Strogatz, 2015). state space model (Durbin & Koopman, 2012) with observation and process noise. We solve for the parameters θ = {A,W ,C,h,µ0,Σ,B,Γ} by maximum likelihood, for which an efficient Expectation-Maximization (EM) algorithm has recently been suggested (Durstewitz, 2017; Koppe et al., 2019), which we will summarize here. Since the involved integrals are not tractable, we start off from the evidence-lower bound (ELBO) to the log-likelihood which can be rewritten in various useful ways: log p(X|θ) ≥ EZ∼q[log pθ(X,Z)] +H (q(Z|X)) = log p(X|θ)−DKL (q(Z|X)‖pθ(Z|X)) =: L (θ, q) (44) In the E-step, given a current estimate θ∗ for the parameters, we seek to determine the posterior pθ (Z|X) which we approximate by a global Gaussian q(Z|X) instantiated by the maximizer (mode) Z∗ of pθ(Z|X) as an estimator of the mean, and the negative inverse Hessian around this maximizer as an estimator of the state covariance, i.e. E[Z|X] ≈ Z∗ = arg max Z log pθ(Z|X) = arg max Z [log pθ(X|Z) + log pθ(Z)− log pθ(X)] = arg max Z [log pθ(X|Z) + log pθ(Z)] , (45) since Z integrates out in pθ(X) (equivalently, this result can be derived from a Laplace approximation to the log-likelihood, log p(X|θ) ≈ log pθ(X|Z∗)+log pθ(Z∗)− 12 log |−L ∗|+const, where L∗ is the Hessian evaluated at the maximizer). We solve this optimization problem by a fixed-point iteration scheme that efficiently exploits the model’s piecewise linear structure, as detailed below. Using this approximate posterior for pθ(Z|X), based on the model’s piecewise-linear structure most of the expectation values Ez∼q [φ(z)], Ez∼q [ φ(z)zT ] , and Ez∼q [ φ(z)φ(z)T ] , could be solved for (semi-)analytically (where z is the concatenated vector form of Z, see below). In the M-step, we seek θ∗ := arg maxθ L(θ, q∗), assuming proposal density q∗ to be given from the E-step, which for a Gaussian observation model amounts to a simple linear regression problem (see Suppl. eq. 49). To force the PLRNN to really capture the underlying DS in its governing equations, we use a previously suggested (Koppe et al., 2019) stepwise annealing protocol that gradually shifts the burden of fitting the observationsX from the observation model eq. 2 to the latent RNN model eq. 
1 during training, the idea of which is to establish a mapping from latent states Z to observations X first, fixing this, and then enforcing the temporal consistency constraints implied by eq. 1 while accounting for the actual observations. Now we briefly outline the fixed-point-iteration algorithm for solving the maximization problem in eq. 45 (for more details see Durstewitz (2017); Koppe et al. (2019)). Given a Gaussian latent PLRNN and a Gaussian observation model, the joint density p(X,Z) will be piecewise Gaussian, hence eq. 45 piecewise quadratic in Z. Let us concatenate all state variables across m and t into one long column vector z = (z1,1, . . . , zM,1, . . . , z1,T , . . . , zM,T ) T, arrange matrices A, W into large MT ×MT block tri-diagonal matrices, define dΩ := ( 1z1,1>0,1z2,1>0, . . . ,1zM,T>0 )T as an indicator vector with a 1 for all states zm,t > 0 and zeros otherwise, and DΩ := diag(dΩ) as the diagonal matrix formed from this vector. Collecting all terms quadratic, linear, or constant in z, we can then write down the optimization criterion in the form Q∗Ω(z) = − 1 2 [zT ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ ) z − zT (v0 +DΩv1)− (v0 +DΩv1)T z] + const. (46) In essence, the algorithm now iterates between the two steps: 1. Given fixedDΩ, solve z∗ = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 · (v0 +DΩv1) (47) 2. Given fixed z∗, recomputeDΩ until either convergence or one of several stopping criteria (partly likelihood-based, partly to avoid loops) is reached. The solution may afterwards be refined by one quadratic programming step. Numerical experiments showed this algorithm to be very fast and efficient (Durstewitz, 2017; Koppe et al., 2019). At z∗, an estimate of the state covariance is then obtained as the inverse negative Hessian, V = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 . (48) In the M-step, using the proposal density q∗ from the E-step, the solution to the maximization problem θ∗ := arg max θ L(θ, q∗), can generally be expressed in the form θ∗ = (∑ t E [ αtβ T t ])(∑ t E [ βtβ T t ])−1 , (49) where, for the latent model, eq. 1, αt = zt and βt := [ zTt−1, φ(zt−1) T, sTt , 1 ]T ∈ R2M+K+1, and for the observation model, eq. 2, αt = xt and βt = g (zt). 6.1.6 More details on DS performance measure As argued before (Koppe et al., 2019; Wood, 2010), in DS reconstruction we require that the RNN captures the underlying attractor geometries and state space properties. This does not necessarily entail that the reconstructed system could predict future time series observations more than a few time steps ahead, and vice versa. For instance, if the underlying attractor is chaotic, even if we had the exact true system available, with a tiny bit of noise trajectories starting from the same initial condition will quickly diverge and ahead-prediction errors become essentially meaningless as a DS performance metric (Fig. S2B). To quantify how well an inferred PLRNN captured the underlying dynamics we therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between the true and reproduced probability distributions across states in state space, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. 
(1991)) rather than in precise matching of time series, DKL (ptrue(x)‖pgen(x|z)) ≈ K∑ k=1 p̂ (k) true(x) log ( p̂ (k) true(x) p̂ (k) gen(x|z) ) , (50) where ptrue(x) is the true distribution of observations across state space (not time!), pgen(x|z) is the distribution of observations generated by running the inferred PLRNN, and the sum indicates a spatial discretization (binning) of the observed state space. We emphasize that p̂(k)gen(x|z) is obtained from freely simulated trajectories, i.e. drawn from the prior p̂(z) specified by eq. 1, not from the inferred posteriors p̂(z|xtrain). In addition, to assess reproduction of time scales by the inferred PLRNN, the average MSE between the power spectra of the true and generated time series was computed, as displayed in Fig. 3B–C. The measure DKL introduced above only works for situations where the ground truth ptrue(X) is known. Following Koppe et al. (2019), we next briefly indicate how a proxy for DKL may be obtained in empirical situations where no ground truth is available. Reasoning that for a well reconstructed DS the inferred posterior pinf(z|x) given the observations should be a good representative of the prior generative dynamics pgen(z), one may use the Kullback-Leibler divergence between the distribution over latent states, obtained by sampling from the prior density pgen(z), and the (dataconstrained) posterior distribution pinf(z|x) (where z ∈ RM×1 and x ∈ RN×1), taken across the system’s state space: DKL (pinf(z|x)‖pgen(z)) = ∫ z∈RM×1 pinf(z|x) log pinf(z|x) pgen(z) dz (51) As evaluating this integral is difficult, one could further approximate pinf(z|x) and pgen(z) by Gaussian mixtures across trajectories, i.e. pinf(z|x) ≈ 1T ∑T t=1 p(zt|x1:T ) and pgen(z) ≈ 1 L ∑L l=1 p(zl|zl−1), where the mean and covariance of p(zt|x1:T ) and p(zl|zl−1) are obtained by marginalizing over the multivariate distributions p(Z|X) and pgen(Z), respectively, yielding E[zt|x1:T ], E[zl|zl−1], and covariance matrices Var(zt|x1:T ) and Var(zl|zl−1). Supplementary eq. 51 may then be numerically approximated through Monte Carlo sampling (Hershey & Olsen, 2007) by DKL (pinf(z|x)‖pgen(z)) ≈ 1 n n∑ i=1 log pinf(z (i)|x) pgen(z(i)) , z(i) ∼ pinf(z|x) (52) Alternatively, there is also a variational approximation of eq. 51 available (Hershey & Olsen, 2007): DvariationalKL (pinf(z|x)‖pgen(z)) ≈ 1 T T∑ t=1 log ∑T j=1 e −DKL(p(zt|x1:T )‖p(zj |x1:T ))∑T k=1 e −DKL(p(zt|x1:T )‖p(zk|zk−1)) , (53) where the KL divergences in the exponentials are among Gaussians for which we have an analytical expression. 6.1.7 More details on benchmark tasks and model comparisons We compared the performance of our rPLRNN to the other models summarized in Suppl. Table 1 on the following three benchmarks requiring long short-term maintenance of information (Talathi & Vartak (2016); Hochreiter & Schmidhuber (1997)): 1) The addition problem of time length T consists of 100 000 training and 10 000 test samples of 2× T input series S = {s1, . . . , sT }, where entries s1,: ∈ [0, 1] are drawn from a uniform random distribution and s2,: ∈ {0, 1} contains zeros except for two indicator bits placed randomly at times t1 < 10 and t2 < T/2. Constraints on t1 and t2 are chosen such that every trial requires a long memory of at least T/2 time steps. At the last time step T , the target output of the network is the sum of the two inputs in s1,: indicated by the 1-entries in s2,:, x target T = s1,t1 + s1,t2 . 
2) The multiplication problem is the same as the addition problem, only that the product instead of the sum has to be produced by the RNN as an output at time T , xtargetT = s1,t1 · s1,t2 . 3) The MNIST dataset (LeCun et al., 2010) consists of 60 000 training and 10 000 28 × 28 test images of hand written digits. To make this a time series problem, in sequential MNIST the images are presented sequentially, pixel-by-pixel, scanning lines from upper left to bottom-right, resulting in time series of fixed length T = 784. For training on the addition and multiplication problems, the mean squared-error loss across R samples, L = 1R ∑R n=1 ( x̂ (n) T − x (n) T )2 , between estimated and actual outputs was used, while the cross-entropy loss L = ∑R n=1 ( − ∑10 i=1 x (n) i,T log(p̂ (n) i,T ) ) was employed for sequential MNIST, where p̂i,t := p̂t (xi,t = 1|zt) = ( eBi,:zt ) N∑ j=1 eBj,:zt −1 , (54) with xi,t ∈ {0, 1}, ∑ i xi,t = 1. We remark that as long as the observation model takes the form of a generalized linear model (Fahrmeir & Tutz, 2001), as assumed here, meaning may be assigned to the latent states zm by virtue of their association with specific sets of observations xn through the factor loading matrix B. This adds another layer of model interpretability (besides its accessibility in DS terms). The large error bars in Fig. 2 at the transition from good to bad performance result from the fact that the networks mostly learn these tasks in an all-or-none fashion. While the rPLRNN in general outperformed the pure initialization-based models (iRNN, npRNN, iPLRNN), confirming that a manifold attractor subspace present at initialization may be lost throughout training, we conjecture that this difference in performance will become even more pronounced as noise levels or task complexity increase. 6.1.8 More details on single neuron model The neuron model used in section 4.2 is described by −CmV̇ = gL(V − EL) + gNam∞(V )(V − ENa) + gKn(V − EK) + gMh(V − EK) + gNMDAσ(V )(V − ENMDA) (55) ḣ = h∞(V )− h
1. What is the focus of the paper regarding dynamical system identification?
2. What are the strengths of the proposed approach, particularly in addressing the issue of gradient vanishing or exploding?
3. Do you have any concerns about the choice of regularization technique or its application to RNN models?
4. How does the reviewer assess the clarity and thoroughness of the paper's content?
5. Are there any minor issues or typos that need to be addressed in the paper?
Review
Review

The paper explores a very important question in dynamical system identification of how to make recurrent neural networks (RNNs) learn both long-term and short-term dependencies without the gradient vanishing or exploding limitation. They suggest using piece-wise linear RNNs (PLRNNs) with a novel regularization technique. The paper is well written and is very thorough with the necessary theoretical foundation, numerical experiments and analysis. I think the theory and results of this paper are significant and will be relevant to further our understanding of RNNs and system identification.

Major points:

L2 weight regularization can be easily applied to any of the RNN models used in the experiments. While other weight initialization schemes were compared to the paper's proposed model (rPLRNN), none of the other RNN models had similar regularization. This will shed some light on whether it is indeed the proposed regularization that matters or the full proposed model with PLRNN and a mix of regularized and non-regularized units.

It is not clear to me how one can choose the correct ratio of regularized vs unregularized units in the model. While the amount of regularization clearly helps in reducing training error as shown in Figure 3, increasing the ratio of regularized units in Figure S3C did not help the error past 0.1 and then larger values resulted in large increases such that the error at ratio 1 is equivalent to the error at ratio 0. Perhaps this observation is specific to the addition problem, but I feel that a discussion of the effect of this ratio on performance should be included for clarity. Additionally, the ratio of regularized units with best performance could potentially be different for different regularization amounts.

Minor Point:

g is not defined in equation 2.
3 MODEL FORMULATION AND THEORETICAL ANALYSIS 3.1 BASIC MODEL FORMULATION Assume we are given two multivariate time series S = {s_t} and X = {x_t}, one we will denote as 'inputs' (S) and the other as 'outputs' (X). In the 'classical' (supervised) machine learning setting, we usually wish to map S onto X through an RNN with latent state equation z_t = F_θ(z_{t−1}, s_t) and outputs x_t ∼ p_λ(x_t | z_t), as for instance in the 'addition problem' (Hochreiter & Schmidhuber, 1997). In DS reconstruction, in contrast, we usually have a dense time series X from which we wish to infer (unsupervised) the underlying DS, where S may provide an additional forcing function or sparse experimental inputs or perturbations. While our focus in this paper is on this latter task, DS reconstruction, we will demonstrate that our approach brings benefits in both these settings. Here we consider for the latent model a PLRNN (Koppe et al., 2019) which takes the form

z_t = A z_{t−1} + W φ(z_{t−1}) + C s_t + h + ε_t,   ε_t ∼ N(0, Σ),   (1)

where z_t ∈ R^{M×1} is the hidden state (column) vector of dimension M, A ∈ R^{M×M} a diagonal and W ∈ R^{M×M} an off-diagonal matrix, s_t ∈ R^{K×1} the external input of dimension K, C ∈ R^{M×K} the input mapping, h ∈ R^{M×1} a bias, and ε_t a Gaussian noise term with diagonal covariance matrix diag(Σ) ∈ R^M_+. The nonlinearity φ(z) is a ReLU, φ(z)_i = max(0, z_i), i ∈ {1, . . . , M}. This specific formulation represents a discrete-time version of firing rate (population) models as used in computational neuroscience (Song et al., 2016; Durstewitz, 2017; Engelken et al., 2020). We will assume that the latent RNN states z_t are coupled to the actual observations x_t through a simple observation model of the form

x_t = B g(z_t) + η_t,   η_t ∼ N(0, Γ)   (2)

in the case of observations x_t ∈ R^{N×1}, where B ∈ R^{N×M} is a factor loading matrix, g some (usually monotonic) nonlinear transfer function (e.g., ReLU), and diag(Γ) ∈ R^N_+ the diagonal covariance matrix of the Gaussian observation noise, or through a softmax function in case of categorical observations x_{i,t} ∈ {0, 1} (see Suppl. 6.1.7 for details). 3.2 REGULARIZATION APPROACH First note that by letting A = I, W = 0, and h = 0 in eq. 1, every point in z space will be a marginally stable fixed point of the system, leading it to perform a perfect integration of external inputs as in parametric working memory (Machens et al., 2005; Brody et al., 2003). [Footnote 1: Note that this very property of marginal stability required for input integration also makes the system sensitive to noise perturbations directly on the manifold attractor. Interestingly, this property has indeed been observed experimentally for real neural integrator systems (Major et al., 2004; Mizumori & Williams, 1993).] This is similar in spirit to Le et al. (2015) who initialized RNN parameters such that it performs an identity mapping for z_{i,t} ≥ 0. However, here 1) we use a neuroscientifically motivated network architecture (eq. 1) that enables the identity mapping across the variables' entire support, z_{i,t} ∈ [−∞, +∞], which we conjecture will be of advantage for establishing long short-term memory properties, 2) we encourage this mapping only for a subset M_reg ≤ M of units (Fig. S1), leaving others free to perform arbitrary computations, and 3) we stabilize this configuration throughout training by introducing a specific L2 regularization for parameters A, W, and h in eq. 1. When embedded into a larger, (locally) convergent system, we will call this configuration more generally a manifold attractor. 
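To make the model structure of eqs. 1 and 2 concrete, here is a minimal NumPy sketch of one forward simulation of the PLRNN with a Gaussian observation model. It is meant purely as an illustration: the function name simulate_plrnn and all parameter choices are ours, not part of the paper's released implementation, and the sketch omits the inference machinery described in sect. 3.4 and Suppl. 6.1.5.

```python
import numpy as np

def simulate_plrnn(A, W, C, h, B, S, z0, Sigma=None, Gamma=None, seed=0):
    """Simulate the PLRNN latent dynamics (eq. 1) and a Gaussian observation
    model (eq. 2). A is the diagonal matrix, W the off-diagonal matrix (zero
    diagonal), C the input mapping, h the bias, B the factor loading matrix;
    S is a (T, K) array of external inputs, z0 the initial latent state."""
    rng = np.random.default_rng(seed)
    relu = lambda v: np.maximum(0.0, v)                  # phi(z)_i = max(0, z_i)
    T, M, N = S.shape[0], z0.shape[0], B.shape[0]
    Z, X = np.zeros((T, M)), np.zeros((T, N))
    z = z0.copy()
    for t in range(T):
        eps = rng.multivariate_normal(np.zeros(M), Sigma) if Sigma is not None else 0.0
        z = A @ z + W @ relu(z) + C @ S[t] + h + eps     # latent update, eq. 1
        eta = rng.multivariate_normal(np.zeros(N), Gamma) if Gamma is not None else 0.0
        X[t] = B @ relu(z) + eta                         # eq. 2 with g = phi
        Z[t] = z
    return Z, X
```

For the supervised benchmarks of sect. 3.4 one would instead run the deterministic system (Sigma = None, Gamma = None) and read out B @ z, i.e. g equal to the identity.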
That way, we divide the units into two types, where the regularized units serve as a memory that tends to decay very slowly (depending on the size of the regularization term), while the remaining units maintain the flexibility to approximate any underlying DS, yet retaining the simplicity of the original PLRNN (eq. 1). Specifically, the following penalty is added to the loss function (Fig. S1):

L_reg = τ_A Σ_{i=1}^{M_reg} (A_{i,i} − 1)^2 + τ_W Σ_{i=1}^{M_reg} Σ_{j=1, j≠i}^{M} W_{i,j}^2 + τ_h Σ_{i=1}^{M_reg} h_i^2   (3)

(Recall from sect. 3.1 that A is a diagonal and W is an off-diagonal matrix.) While this formulation allows us to trade off, for instance, the tendency toward a manifold attractor (A → I, h → 0) vs. the sensitivity to other units' inputs (W → 0), for all experiments performed here a common value, τ_A = τ_W = τ_h = τ, was assumed for the three regularization factors. We will refer to (z_1 . . . z_{M_reg}) as the regularized ('memory') subsystem, and to (z_{M_reg+1} . . . z_M) as the non-regularized ('computational') subsystem. Note that in the limit τ → ∞ exact manifold attractors would be enforced. 3.3 THEORETICAL ANALYSIS We will now establish a tight connection between the PLRNN dynamics and its error gradients. Similar ideas appeared in Chang et al. (2019), but these authors focused only on fixed point dynamics, while here we will consider the more general case including cycles of any order. First, note that by interpretability of model eq. 1 we mean that it is easily amenable to a rigorous DS analysis: As shown in Suppl. 6.1.2, we can explicitly determine all the system's fixed points and cycles and their stability. Moreover, as shown in Monfared & Durstewitz (2020b), we can – under certain conditions – transform the PLRNN into an equivalent continuous-time (ODE) piecewise-linear system, which brings further advantages for DS analysis. Let us rewrite eq. 1 in the form

z_t = F(z_{t−1}) = (A + W D_{Ω(t−1)}) z_{t−1} + h := W_{Ω(t−1)} z_{t−1} + h,   (4)

where D_{Ω(t−1)} is the diagonal matrix of outer derivatives of the ReLU function evaluated at z_{t−1} (see Suppl. 6.1.2), and we ignore external inputs and noise terms for now. Starting from some initial condition z_1, we can recursively develop z_T as (see Suppl. 6.1.2 for more details):

z_T = F^{T−1}(z_1) = ∏_{i=1}^{T−1} W_{Ω(T−i)} z_1 + [ Σ_{j=2}^{T−1} ∏_{i=1}^{j−1} W_{Ω(T−i)} + I ] h.   (5)

Likewise, for some common loss function L(A, W, h) = Σ_{t=2}^{T} L_t, we can recursively develop the derivatives w.r.t. weights w_{mk} (and similar for components of A and h) as

∂L/∂w_{mk} = Σ_{t=2}^{T} (∂L_t/∂z_t)(∂z_t/∂w_{mk}), with
∂z_t/∂w_{mk} = 1_{(m,k)} D_{Ω(t−1)} z_{t−1} + Σ_{j=2}^{t−2} ( ∏_{i=1}^{j−1} W_{Ω(t−i)} ) 1_{(m,k)} D_{Ω(t−j)} z_{t−j} + ∏_{i=1}^{t−2} W_{Ω(t−i)} ∂z_2/∂w_{mk},   (6)

where 1_{(m,k)} is an M × M indicator matrix with a 1 for the (m, k)'th entry and 0 everywhere else. Observing that eqs. 5 and 6 contain similar product terms which determine the system's long-term behavior, our first theorem links the PLRNN dynamics to its total error gradients: Theorem 1. Consider a PLRNN given by eq. 4, and assume that it converges to a stable fixed point, say z_{t*1} := z*1, or a k-cycle (k > 1) with the periodic points {z_{t*k}, z_{t*k−1}, . . . , z_{t*k−(k−1)}}, for T → ∞. Suppose that, for k ≥ 1 and i ∈ {0, 1, . . . , k − 1}, σ_max(W_{Ω(t*k−i)}) = ‖W_{Ω(t*k−i)}‖ < 1, where W_{Ω(t*k−i)} denotes the Jacobian of the system at z_{t*k−i} and σ_max indicates the largest singular value of a matrix. Then, the 2-norms of the tensors collecting all derivatives, ‖∂z_T/∂W‖_2, ‖∂z_T/∂A‖_2, ‖∂z_T/∂h‖_2, will be bounded from above, i.e. will not diverge for T → ∞. Proof. See Suppl. sect. 6.1 (subsection 6.1.3). 
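Before moving on to Theorem 2, the penalty of eq. 3 above is simple enough to state in a few lines of PyTorch. The sketch below is our own illustrative rendering (the name manifold_attractor_penalty is not from the paper's code base); it assumes A is stored as a vector of diagonal entries and W as a full matrix whose diagonal is ignored, and uses a single factor tau = tau_A = tau_W = tau_h as in the experiments.

```python
import torch

def manifold_attractor_penalty(A_diag, W, h, M_reg, tau):
    """L_reg of eq. 3: push A_ii -> 1, the off-diagonal rows of W -> 0, and
    h_i -> 0 for the first M_reg ('memory') units only."""
    M = W.shape[0]
    off_diag = W[:M_reg, :] * (1.0 - torch.eye(M, device=W.device)[:M_reg, :])
    L_A = torch.sum((A_diag[:M_reg] - 1.0) ** 2)
    L_W = torch.sum(off_diag ** 2)
    L_h = torch.sum(h[:M_reg] ** 2)
    return tau * (L_A + L_W + L_h)

# During training this term is simply added to the task loss (cf. sect. 3.4):
# loss = task_loss + manifold_attractor_penalty(A_diag, W, h, M_reg, tau)
```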
While Theorem 1 is a general statement about PLRNN dynamics and total gradients, our next theorem more specifically provides conditions under which Jacobians linking temporally distant states z_T and z_t, T ≫ t, will neither vanish nor explode in the regularized PLRNN: Theorem 2. Assume a PLRNN with matrix A + W partitioned as in Fig. S1, i.e. with the first M_reg rows corresponding to those of an M × M identity matrix. Suppose that the non-regularized subsystem (z_{M_reg+1} . . . z_M), if considered in isolation, satisfies Theorem 1, i.e. converges to a k-cycle with k ≥ 1. Then, for the full system (z_1 . . . z_M), the 2-norm of the Jacobians connecting temporally distal states z_T and z_t will be bounded from above and below for all T > t, i.e. ∞ > ρ_up ≥ ‖∂z_T/∂z_t‖_2 = ‖∏_{t<k≤T} W_{Ω(k)}‖_2 ≥ ρ_low > 0. In particular, for state variables z_{i,T} and z_{j,t} such that i ∈ {M_reg + 1, . . . , M} and j ∈ {1, . . . , M_reg}, i.e. that connect states from the 'memory' to those of the 'computational' subsystem, one also has ∞ > λ_up ≥ |∂z_{i,T}/∂z_{j,t}| ≥ λ_low > 0 as T − t → ∞, i.e. these derivatives will never vanish nor explode. Proof. See Suppl. sect. 6.1 (subsection 6.1.4). The bounds ρ_up, ρ_low, λ_up, λ_low are given in Suppl. sect. 6.1.4. We remark that when the regularization conditions are not exactly met, i.e. when parameters A and W slightly deviate from those in Fig. S1, memory (and gradients) may ultimately dissipate, but only very slowly, as actually required for temporal processes with very slow yet not infinite time constants (Fig. 1B). 3.4 TRAINING PROCEDURES For the (supervised) machine learning problems, all networks were trained by stochastic gradient descent (SGD) to minimize the squared-error loss between estimated and actual outputs for the addition and multiplication problems, and the cross entropy loss for sequential MNIST (see Suppl. 6.1.7). Adam (Kingma & Ba, 2014) from the PyTorch package (Paszke et al., 2017) was used as the optimizer, with a learning rate of 0.001, gradient clip parameter of 10, and batch size of 500. SGD was stopped after 100 epochs and the fit with the lowest loss across all epochs was taken, except for LSTM which was allowed to run for up to 200 epochs as it took longer to converge (Fig. S10). For comparability, the PLRNN latent state dynamics eq. 1 was assumed to be deterministic in this setting (i.e., Σ = 0), g(z_t) = z_t and Γ = I_N in eq. 2. For the regularized PLRNN (rPLRNN), penalty eq. 3 was added to the loss function. For the (unsupervised) DS reconstruction problems, the fully probabilistic, generative RNN eq. 1 was considered. Together with eq. 2 (where we take g(z_t) = φ(z_t)) this gives the typical form of a nonlinear state space model (Durbin & Koopman, 2012) with observation and process noise, and an Expectation-Maximization (EM) algorithm that efficiently exploits the model's piecewise linear structure (Durstewitz, 2017; Koppe et al., 2019) was used to solve for the parameters by maximum likelihood. Details are given in Suppl. 6.1.5. All code used here will be made openly available at https://github.com/DurstewitzLab/reg-PLRNN. 3.5 PERFORMANCE MEASURES For the machine learning benchmarks we employed the same criteria as used for optimization (MSE or cross-entropy, Suppl. 6.1.7) as performance metrics, evaluated across left-out test sets. In addition, we report the relative frequency P_correct of correctly predicted trials across the test set (see Suppl. 6.1.7 for details). 
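As a minimal sketch of how the supervised training setup of sect. 3.4 above could be wired up (our own illustration, reusing the manifold_attractor_penalty function sketched earlier; the model object, its attributes, and the choice of norm-based gradient clipping are placeholder assumptions, not the paper's released implementation):

```python
import torch
from torch import nn, optim

def train_step(model, optimizer, S_batch, x_target, M_reg, tau):
    """One training step for the addition/multiplication problems:
    squared-error loss plus the penalty of eq. 3 (hyperparameters as in
    sect. 3.4: Adam, lr 0.001, gradient clip 10, batch size 500)."""
    optimizer.zero_grad()
    x_pred = model(S_batch)                           # prediction at final time step
    loss = nn.functional.mse_loss(x_pred, x_target)
    loss = loss + manifold_attractor_penalty(model.A_diag, model.W, model.h, M_reg, tau)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # clip at 10
    optimizer.step()
    return loss.item()

# optimizer = optim.Adam(model.parameters(), lr=0.001)   # batches of 500, 100 epochs
```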
For DS reconstruction problems, it is not sufficient or even sensible to judge a method’s ability to infer the underlying DS purely based on some form of (ahead-)prediction error like the MSE defined on the time series itself (Ch.12 in Kantz & Schreiber (2004)). Rather, we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties. This is not automatically guaranteed for a model that yields agreeable ahead predictions on a time series (Fig. S2A; cf. Koppe et al. (2019); Wood (2010)). We therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between true and reproduced probability distributions across states in state space to quantify how well an inferred PLRNN captured the underlying dynamics, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. (1991)) (see Suppl. 6.1.6 for more details). 4 NUMERICAL EXPERIMENTS 4.1 MACHINE LEARNING BENCHMARKS Although not our prime interest here, we first examined how the rPLRNN would fare on supervised machine learning benchmarks where inputs (S) are to be mapped onto target outputs (X) across long time spans (i.e., requiring long short-term maintenance of information), namely the addition and multiplication problems (Talathi & Vartak, 2016; Hochreiter & Schmidhuber, 1997), and sequential MNIST (LeCun et al., 2010). Details of these experimental setups are in Suppl. 6.1.7. Performance of the rPLRNN (eq. 1, eq. 3) on all 3 benchmarks was compared to several other models summarized in Suppl. Table 1. To achieve a meaningful comparison, all models have the same number M = 40 (based on Fig. S3) of hidden states (which gives LSTMs overall about 4 times as many trainable parameters). On all three problems the rPLRNN outperforms all other tested methods, including LSTM, iRNN (RNN initialized by the identity matrix as in Le et al. (2015)), and a version of the orthogonal RNN (oRNN; Vorontsov et al. (2017)) (similar results were obtained for other settings of M and batch size). LSTM performs even worse than iRNN and iPLRNN (PLRNN initialized with the identity as the iRNN), although it had 4 times as many parameters and was given twice as many epochs (and thus opportunities) for training, as it also took longer to converge (Fig. S10). In addition, the iPLRNN tends to perform slightly better than the iRNN on all three problems, suggesting that the specific structure eq. 1 of the PLRNN that allows for a manifold attractor across the variables’ full range may be advantageous to begin with, while the regularization further improves performance. 4.2 NUMERICAL EXPERIMENTS ON DYNAMICAL SYSTEMS WITH DIFFERENT TIME SCALES While it is encouraging that the rPLRNN may perform even better than several previous approaches to the vanishing vs. exploding gradients problem, our major goal here was to examine whether our regularization scheme would help with the (unsupervised) identification of DS that harbor widely different time scales. To test this, we used a biophysical, bursting cortical neuron model with one voltage (V ) and two conductance recovery variables (see Durstewitz (2009)), one slow (h) and one fast (n; Suppl. 6.1.8). Reproduction of this DS is challenging since it produces very fast spikes on top of a slow nonlinear oscillation (Fig. 3D). Only short time series (as in scientific data) of length T = 1500 from this model were provided for training. rPLRNNs with M = {8 . . . 
18} states were trained, with the regularization factor varied within τ ∈ {0, 10^1, 10^2, 10^3, 10^4, 10^5}/T. Note that for τ = 0 (no regularization), the approach reduces to the standard PLRNN (Koppe et al., 2019). Fig. 3A confirms our intuition that stronger regularization leads to better DS reconstruction as assessed by the KL divergence between true and generated state distributions (similar results were obtained with ahead-prediction errors as a metric, Fig. S4A), accompanied by a likewise decrease in the MSE between the power spectra of true (suppl. eq. 55) and generated (rPLRNN) voltage traces (Fig. 3B). Fig. 3D gives an example of voltage traces (V) and the slower of the two gating variables (h; see Fig. S5A for variable n) freely simulated (i.e., sampled) from the autonomously running rPLRNN. This illustrates that our model is in principle capable of capturing both the stiff spike dynamics and the slower oscillations in the second gating variable at the same time. Fig. 3C provides more insight into how the regularization worked: While the high frequency components (> 50 Hz) related to the repetitive spiking activity hardly benefited from increasing τ, there was a strong reduction in the MSE computed on the power spectrum for the lower frequency range (≤ 50 Hz), suggesting that increased regularization helps to map slowly evolving components of the dynamics. This result is more general as shown in Fig. S6 for another DS example. In contrast, an orthogonality (Vorontsov et al., 2017) or plain L2 constraint on weight matrices did not help at all on this problem (Fig. S4B). Further insight into the dynamical mechanisms by which the rPLRNN solves the problem can be obtained by examining the latent dynamics: As shown in Fig. 3E (see also Fig. S5), regularized states indeed help to map the slow components of the dynamics, while non-regularized states focus on the fast spikes. These observations further corroborate the findings in Fig. 3C and Fig. S6C. 4.3 REGULARIZATION PROPERTIES AND MANIFOLD ATTRACTORS In Figs. 2 and 3 we demonstrated that the rPLRNN is able to solve problems and reconstruct dynamics that involve long-range dependencies. Figs. 3A,B furthermore directly confirm that solutions improve with stronger regularization, while Figs. 3C,E give insight into the mechanism by which the regularization works. To further verify empirically that our specific form of regularization, eq. 3, is important, Fig. 2 also shows results for a PLRNN with standard L2 norm on a fraction of M_reg/M = 0.5 states (L2pPLRNN). Fig. S7 provides additional results for PLRNNs with L2 norm on all weights and for vanilla L2-regularized RNNs. All these systems fell far behind the performance of the rPLRNN on all tasks tested. Moreover, Fig. 4 reveals that the specific regularization proposed indeed encourages manifold attractors, and that this is not achieved by a standard L2 regularization: In contrast to L2PLRNN, as the regularization factor τ is increased, more and more of the maximum absolute eigenvalues around the system's fixed points (computed according to eq. 8, sect. 6.1.2) cluster on or near 1, indicating directions of marginal stability in state space. Also, the deviations from 1 become smaller for strongly regularized PLRNNs (Fig. 4B,D), indicating a higher precision in attractor tuning. Fig. S9 in addition confirms that rPLRNN parameters are increasingly driven toward values that would support manifold attractors with stronger regularization. Fig. 
3E furthermore suggests that both regularized and non-regularized states are utilized to map the full dynamics. But how should the ratio M_reg/M be chosen in practice? While for the problems here this meta-parameter was determined through 'classical' grid-search and cross-validation, Figs. S3C–E suggest that the precise setting of M_reg/M is actually not overly important: Nearly optimal performance is achieved for a broader range M_reg/M ∈ [0.3, 0.6] on all problems tested. Hence, in practice, setting M_reg/M = 0.5 should mostly work fine. 5 CONCLUSIONS In this work we introduced a simple solution to the long short-term memory problem in RNNs that retains the simplicity and tractability of PLRNNs, yet does not curtail their universal computational capabilities (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D'Eleuterio, 2016). We achieved this by adding regularization terms to the loss function that encourage the system to form a 'memory subspace' (Seung, 1996; Durstewitz, 2003) which would store arbitrary values for, if unperturbed, arbitrarily long periods. At the same time we did not rigorously enforce this constraint, which allowed the system to capture slow time scales by slightly departing from a perfect manifold attractor. In neuroscience, this has been discussed as a dynamical mechanism for regulating the speed of flow in DS and learning of arbitrary time constants not naturally included qua RNN design (Durstewitz, 2003; 2004) (Fig. 1B). While other RNN architectures, including vanilla RNNs, can, in principle, also develop line attractors to solve specific tasks (Maheswaranathan et al., 2019), they are generally much harder to train to achieve this and may exhibit less precise attractor tuning (cf. Fig. 4), which is needed to bridge long time scales (Durstewitz, 2003). Moreover, part of the PLRNN's latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics (see also Fig. S11 for a chaotic example). We showed that the rPLRNN is on par with or outperforms initialization-based approaches, orthogonal RNNs, and LSTMs on a number of classical benchmarks. More importantly, however, the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction. Similar regularization schemes as proposed here (eq. 3) may, in principle, also be designed for other architectures, but the convenient mathematical form of the PLRNN makes their implementation particularly powerful and straightforward. ACKNOWLEDGEMENTS This work was funded by grants from the German Research Foundation (DFG) to DD (Du 354/10-1, Du 354/8-2 within SPP 1665) and to GK (TRR265: A06 & B08), and under Germany's Excellence Strategy – EXC-2181 – 390900948 ('Structures'). 6 APPENDIX 6.1 SUPPLEMENTARY TEXT 6.1.1 Simple exact PLRNN solution for addition problem The exact PLRNN parameter settings (cf. eq. 1, eq. 2) for solving the addition problem with 2 units (cf. Fig. 1C) are as follows:

A = (1 0; 0 0),  W = (0 1; 0 0),  h = (0, −1)^T,  C = (0 0; 1 1),  B = (1 0)   (7)

(rows of the 2 × 2 matrices are separated by semicolons; a brief numerical check of this solution is sketched below). 6.1.2 Computation of fixed points and cycles in PLRNN Consider the PLRNN in the form of eq. 4. 
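As a quick sanity check of the exact solution in eq. 7 above (and before continuing with the fixed-point computations), the following illustrative script, which is our own sketch and not part of the paper's code, simulates the 2-unit PLRNN on one randomly generated addition-problem trial (cf. Suppl. 6.1.7) and verifies that the read-out at the final time step equals s_{1,t1} + s_{1,t2}.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100

# Exact parameters from eq. 7
A = np.array([[1., 0.], [0., 0.]])
W = np.array([[0., 1.], [0., 0.]])
h = np.array([0., -1.])
C = np.array([[0., 0.], [1., 1.]])
B = np.array([[1., 0.]])

# One addition-problem trial: s1 uniform in [0, 1), s2 all zeros except
# indicator bits at t1 < 10 and t2 < T/2 (cf. Suppl. 6.1.7)
s1 = rng.uniform(0.0, 1.0, size=T)
s2 = np.zeros(T)
t1, t2 = rng.integers(0, 10), rng.integers(10, T // 2)
s2[[t1, t2]] = 1.0

relu = lambda v: np.maximum(0.0, v)
z = np.zeros(2)
for t in range(T):
    z = A @ z + W @ relu(z) + C @ np.array([s1[t], s2[t]]) + h   # eq. 1, noise-free
x_T = float(B @ z)              # read-out with g(z) = z as in sect. 3.4

assert np.isclose(x_T, s1[t1] + s1[t2])
print(x_T, s1[t1] + s1[t2])
```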
For clarity, let us define dΩ(t) := (d1, d2, · · · , dM ) as an indicator vector with dm(zm,t) := dm = 1 for all states zm,t > 0 and zeros otherwise, and DΩ(t) := diag(dΩ(t)) as the diagonal matrix formed from this vector. Note that there are at most 2M distinct matricesWΩ(t) as defined in eq. 4, depending on the sign of the components of zt. If h = 0 and WΩ(t) is the identity matrix, then the map F becomes the identity map and so every point z will be a fixed point of F . Otherwise, the fixed points of F can be found solving the equation F (z∗1) = z∗1 as z∗1 = (I −WΩ(t∗1))−1 h = H∗1 h, (8) where z∗1 = zt∗1 = zt∗1−1, if det(I − WΩ(t∗1)) = PWΩ(t∗1)(1) 6= 0, i.e. WΩ(t∗1) has no eigenvalue equal to 1. Stability and type of fixed points (node, saddle, spiral) can then be determined from the eigenvalues of the JacobianA+WDΩ(t∗1) = WΩ(t∗1) (Strogatz (2015)). For k > 1, solving F k(z∗k) = z∗k, one can obtain a k-cycle of the map F with the periodic points {z∗k, F (z∗k), F 2(z∗k), · · · , F k−1(z∗k)}. For this, we first compute F k as follows: zt = F (zt−1) = WΩ(t−1) zt−1 + h, zt+1 = F 2(zt−1) = F (zt) = WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t) + I ) h, zt+2 = F 3(zt−1) = F (zt+1) = WΩ(t+1)WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t+1)WΩ(t) +WΩ(t+1) + I ) h, ... zt+(k−1) = F k(zt−1) = k+1∏ i=2 WΩ(t+(k−i)) zt−1 + [ k∑ j=2 k−j+2∏ i=2 WΩ(t+(k−i)) + I ] h, (9) in which ∏k+1 i=2 WΩ(t+(k−i)) = WΩ(t+(k−2))WΩ(t+(k−3)) · · · WΩ(t−1). Assuming t+(k−1) := t∗k, then the k-cycle is given by the fixed point of the k-times iterated map F k as z∗k = ( I − k∏ i=1 WΩ(t∗k−i) )−1 [ k∑ j=2 k−j+1∏ i=1 WΩ(t∗k−i) + I ] h = H∗k h, (10) where z∗k = zt∗k = zt∗k−k, provided that I − ∏k i=1WΩ(t∗k−i) is invertible. That is det ( I − ∏k i=1WΩ(t∗k−i) ) = P∏k i=1WΩ(t∗k−i) (1) 6= 0 and ∏k i=1WΩ(t∗k−i) := WΩ∗k has no eigenvalue equal to 1. As for the fixed points, we can determine stability of the k-cycle from the eigenvalues of the Jacobians ∏k i=1WΩ(t∗k−i). It may also be helpful to spell out the recursions in eq. 5 and eq. 6 in section 3.3 in a bit more detail. Analogously to the derivations above, for t = 1, 2, . . . , T we can recursively compute z2, z3, . . . ,zT (T ∈ N) as z2 = F (z1) = WΩ(1) z1 + h, z3 = F 2(z1) = F (z2) = WΩ(2)WΩ(1) z1 + ( WΩ(2) + I ) h, ... zT = F T−1(z1) = F (zT−1) = WΩ(T−1)WΩ(T−2) · · ·WΩ(1) z1 + ( WΩ(T−1)WΩ(T−2) · · ·WΩ(2) +WΩ(T−1)WΩ(T−2) · · ·WΩ(3) + · · ·+WΩ(T−1) + I ) h = T−1∏ i=1 WΩ(T−i) z1 + [ T−2∑ j=1 T−j−1∏ i=1 WΩ(T−i) + I ] h = T−1∏ i=1 WΩ(T−i) z1 + [ T−1∑ j=2 j−1∏ i=1 WΩ(T−i) + I ] h. (11) Likewise, we can write out the derivatives eq. 6 more explicitly as ∂zt ∂wmk = ∂F (zt−1) ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) )∂zt−1 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2) zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )∂zt−2 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2)zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) ) 1(m,k)DΩ(t−3)zt−3 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )( A+WDΩ(t−3) )∂zt−3 ∂wmk = · · · = 1(m,k)DΩ(t−1) zt−1 + t−2∑ j=2 ( j−1∏ i=1 WΩ(t−i) ) 1(m,k)DΩ(t−j) zt−j + t−2∏ i=1 WΩ(t−i) ∂z2 ∂wmk (12) where ∂z2∂wmk = ( ∂z1,2 ∂wmk · · · ∂zM,2∂wmk ) with ∂zl,2 ∂wmk = 0∀ l 6= m and ∂zm,2∂wmk = dkzk,1. The derivatives w.r.t. the elements ofA and h can be expanded in a similar way, only that the termsDΩ(t) zt on the last line of eq. 12 need to be replaced by just zt for ∂zt∂amm , and by just a vector of 1’s for ∂zt ∂hm (also, in these cases, the indicator matrix will be the diagonal matrix 1(m,m)). 
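To illustrate how the fixed-point equations above can be used in practice, here is a brute-force sketch (our own, feasible only for small M) that enumerates the 2^M ReLU configurations, solves eq. 8 for each, keeps only those candidate fixed points whose sign pattern is consistent with the assumed configuration, and reads off stability from the eigenvalues of W_Ω.

```python
import numpy as np
from itertools import product

def plrnn_fixed_points(A, W, h, tol=1e-10):
    """Enumerate PLRNN fixed points via eq. 8: for each ReLU configuration
    D_Omega solve z* = (I - (A + W D_Omega))^{-1} h and keep z* only if its
    sign pattern matches the assumed configuration."""
    M = len(h)
    I = np.eye(M)
    fps = []
    for d in product([0.0, 1.0], repeat=M):
        D = np.diag(d)
        W_omega = A + W @ D
        if abs(np.linalg.det(I - W_omega)) < tol:      # W_Omega has an eigenvalue 1
            continue
        z_star = np.linalg.solve(I - W_omega, h)
        if np.all((z_star > 0) == (np.array(d) == 1.0)):
            stable = np.max(np.abs(np.linalg.eigvals(W_omega))) < 1
            fps.append((z_star, stable))
    return fps

# Small random example (parameter values are arbitrary illustrative choices)
rng = np.random.default_rng(2)
M = 4
A = np.diag(rng.uniform(0.0, 0.9, M))
W = 0.5 * rng.standard_normal((M, M)); np.fill_diagonal(W, 0.0)
h = rng.standard_normal(M)
for z_star, stable in plrnn_fixed_points(A, W, h):
    print(np.round(z_star, 3), "stable" if stable else "unstable")
```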
6.1.3 Proof of Theorem 1 To state the proof, let us rewrite the derivatives of the loss function L(W ,A,h) = ∑T t=1 Lt in the following tensor form: ∂L ∂W = T∑ t=1 ∂Lt ∂W , where ∂Lt ∂W = ∂Lt ∂zt ∂zt ∂W , (13) for which the 3D tensor ∂zt ∂W = ∂z1,t ∂W ∂z2,t ∂W ... ∂zM,t ∂W (14) of dimension M ×M ×M , consists of all the gradient matrices ∂zi,t ∂W = ∂zi,t ∂w11 ∂zi,t ∂w12 · · · ∂zi,t∂w1M ∂zi,t ∂w21 ∂zi,t ∂w22 · · · ∂zi,t∂w2M ... ∂zi,t ∂wM1 ∂zi,t ∂wM2 · · · ∂zi,t∂wMM := ∂zi,t ∂w1∗ ∂zi,t ∂w2∗ ... ∂zi,t ∂wM∗ , i = 1, 2, · · · ,M, (15) where wi∗ ∈ RM is a row-vector. Now, suppose that {z1, z2, z3, . . .} is an orbit of the system which converges to a stable fixed point, i.e. lim T→∞ zT = z ∗k. Then lim T→∞ zT = lim T→∞ ( WΩ(T−1) zT−1 + h ) = z∗1 = WΩ(t∗1) z ∗1 + h, (16) and so lim T→∞ ( WΩ(T−1) ) z∗1 = WΩ(t∗1) z ∗1. (17) Assume that lim T→∞ ( WΩ(T−1) ) = L. Since eq. 17 holds for every z∗1, then substituting z∗1 = eT1 = (1, 0, · · · , 0)T in eq. 17, we can prove that the first column of L equals the first column of WΩ(t∗1). Performing the same procedure for z∗1 = eTi , i = 2, 3, · · · ,M , yields lim T→∞ WΩ(T−1) = WΩ(t∗1). (18) Also, for every i ∈ N (1 < i <∞) lim T→∞ WΩ(T−i) = WΩ(t∗1), (19) i.e. ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ ≤ . (20) Thus, ∥∥WΩ(T−i)∥∥− ∥∥WΩ(t∗1)∥∥ ≤ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ gives ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ . (21) Since T − 1 > T − 2 > · · · > T − i ≥ N , so ∀ > 0 ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ , i = 1, 2, · · · , T −N. (22) Hence ∀ > 0 ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ T−N∏ i=1 ∥∥WΩ(T−i)∥∥ ≤ (∥∥WΩ(t∗1)∥∥+ )T−N . (23) If ∥∥WΩ(t∗1)∥∥ < 1, then for any < 1, considering ̄ ≤ +‖WΩ(t∗1)‖2 < 1, it is concluded that∥∥∥∥∥ limT→∞ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ = limT→∞ ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ limT→∞(∥∥WΩ(t∗1)∥∥+ ̄)T−N = 0. (24) Therefore lim T→∞ T−1∏ i=1 WΩ(T−i) = 0. (25) If the orbit {z1, z2, z3, . . .} tends to a stable k-cycle (k > 1) with the periodic points {F k(z∗k), F k−1(z∗k), F k−2(z∗k), · · · , F (z∗k)} = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1)}, then, denoting the stable k-cycle by Γk = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1), zt∗k , zt∗k−1, · · · , zt∗k−(k−1), · · · }, (26) we have lim T→∞ d(zT ,Γk) = 0. (27) Hence, there exists a neighborhood U of Γk and k sub-sequences {zTkn}∞n=1, {zTkn+1}∞n=1, · · · , {zTkn+(k−1)}∞n=1 of the sequence {zT }∞T=1 such that these sub-sequences belong to U and (i) zTkn+s = F k(zTk(n−1)+s), s = 0, 1, 2, · · · , k − 1, (ii) lim T→∞ zTkn+s = zt∗k−s, s = 0, 1, 2, · · · , k − 1, (iii) for every zT ∈ U there is some s ∈ {0, 1, 2, · · · , k − 1} such that zT ∈ {zTkn+s}∞n=1. In this case, for every zT ∈ U with zT ∈ {zTkn+s}∞n=1 we have lim T→∞ zT = zt∗k−s for some s = 0, 1, 2, · · · , k − 1. Therefore, continuity of F implies that lim T→∞ F (zT ) = F (zt∗k−s) and so lim T→∞ ( WΩ(T ) zT + h ) = WΩ(t∗k−s) zt∗k−s + h. (28) Thus, similarly, we can prove that ∃ s ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T ) = WΩ(t∗k−s). (29) Analogously, for every i ∈ N (1 < i <∞) ∃ si ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T−i) = WΩ(t∗k−si), (30) On the other hand, ∥∥WΩ(t∗k−si)∥∥ < 1 for all si ∈ {0, 1, 2, · · · , k − 1}. So, without loss of generality, assuming max 0≤si≤k−1 {∥∥WΩ(t∗k−si)∥∥} = ∥∥WΩ(t∗k)∥∥ < 1, (31) we can again obtain some relations similar to eq. 23-eq. 25 for t∗k, k ≥ 1. Since {zT−1}∞T=1 is a convergent sequence, so it is bounded, i.e. there exists a real number q > 0 such that ||zT−1|| ≤ q for all T ∈ N. Furthermore, ∥∥DΩ(T−1)∥∥ ≤ 1 for all T . Therefore, by eq. 12 and eq. 
23 (for t∗k, k ≥ 1)∥∥∥∥ ∂zT∂wmk ∥∥∥∥ = ∣∣∣∣∣ ∣∣∣∣∣1(m,k)DΩ(T−1) zT−1 + T−1∑ j=2 ( j−1∏ i=1 WΩ(T−i) ) 1(m,k)DΩ(T−j) zT−j + T−1∏ i=1 WΩ(T−i) DΩ(1) z1 ∣∣∣∣∣ ∣∣∣∣∣ (32) ≤ ‖zT−1‖+ [ T−1∑ j=2 ∥∥∥∥∥ j−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖zT−j‖ ] + ∥∥∥∥∥ T−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖z1‖ ≤ q ( 1 + T−1∑ j=2 (∥∥WΩ(t∗k)∥∥+ ̄)j−1 )+ (∥∥WΩ(t∗k)∥∥+ ̄)T−1 ‖z1‖ . (33) Thus, by ∥∥WΩ(t∗k)∥∥+ ̄ < 1, we have lim T→∞ ∥∥∥∥ ∂zT∂wmk ∥∥∥∥ ≤ q(1 + ∥∥WΩ(t∗k)∥∥+ ̄ 1− ∥∥WΩ(t∗k)∥∥− ̄ ) =M <∞, (34) i.e., by eq. 14 and eq. 15, the 2-norm of total gradient matrices and hence ∥∥ ∂zt ∂W ∥∥ 2 will not diverge (explode) under the assumptions of Theorem 1. Analogously, we can prove that ∥∥∂zT ∂A ∥∥ 2 and ∥∥∂zT ∂h ∥∥ 2 will not diverge either. Since, similar as in the derivations above, it can be shown that relation eq. 34 is true for ∥∥∥ ∂zT∂amm ∥∥∥ with q = q̄, where q̄ is the upper bound of ‖zT ‖, as {zT }∞T=1 is convergent. Furthermore, relation eq. 34 also holds for∥∥∥ ∂zT∂hm ∥∥∥ with q = 1. Remark 2.1. By eq. 24 the Jacobian parts ∥∥∥∂zT∂zt ∥∥∥2 connecting any two states zT and zt, T > t, will not diverge either. Corollary 2.1. The results of Theorem 1 are also true ifWΩ(t∗k) is a normal matrix with no eigenvalue equal to one. Proof. If WΩ(t∗k) is normal, then ∥∥WΩ(t∗k)∥∥ = ρ(WΩ(t∗k)) < 1 which satisfies the conditions of Theorem 1. 6.1.4 Proof of Theorem 2 LetA,W andDΩ(k), t < k ≤ T , be partitioned as follows A = ( Ireg O T O Anreg ) , W = ( Oreg O T S Wnreg ) , DΩ(k) = ( Dkreg O T O Dknreg ) , (35) where IMreg×Mreg := Ireg ∈ RMreg×Mreg ,OMreg×Mreg := Oreg ∈ RMreg×Mreg , O,S ∈ R(M−Mreg)×Mreg , A{Mreg+1:M,Mreg+1:M} := Anreg ∈ R(M−Mreg)×(M−Mreg) is a diagonal submatrix,W{Mreg+1:M,Mreg+1:M} := Wnreg ∈ R(M−Mreg)×(M−Mreg) is an off-diagonal sub-matrix (cf. Fig. S1). Moreover, DkMreg×Mreg := D k reg ∈ RMreg×Mreg and Dk{Mreg+1:M,Mreg+1:M} := Dknreg ∈ R(M−Mreg)×(M−Mreg) are diagonal sub-matrices. Then, we have ∏ t<k≤T WΩ(k) = ∏ t<k≤T ( Ireg O T SDkreg Anreg +WnregD k nreg ) := ∏ t<k≤T ( Ireg O T SDkreg W k nreg ) = ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg. ) (36) Therefore, considering the 2-norm, we obtain∥∥∥∥∂zT∂zt ∥∥∥∥ = ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∥ ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg )∥∥∥∥∥ <∞. (37) Moreover 1 ≤ max{1, ρ(WT−t)} = ρ ( ∏ t<k≤T WΩ(k) ) ≤ ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∂zT∂zt ∥∥∥∥ (38) where WT−t := ∏ t<k≤T W k nreg . Therefore, eq. 37 and eq. 38 yield 1 ≤ ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ ≤ ρup <∞. Furthermore, we assumed that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1. Hence, similar to the proof of Theorem 1, it is concluded that lim T→∞ T∏ k=t W knreg = Onreg. (39) On the other hand, by definition ofDΩ(k), for every t < k ≤ T , we have ∥∥Dkreg∥∥ ≤ 1 and so∥∥SDkreg∥∥ ≤ ‖S‖ ∥∥Dkreg∥∥ ≤ ‖S‖ , (40) which, in accordance with the the assumptions of Theorem 1, by convergence of∑T j=2 ∏t+j−1 k=t+1 ∥∥W knreg∥∥ implies lim T→∞ ∥∥∥∥∥∥SDt+1reg + T∑ j=2 ( t+j−1∏ k=t+1 W knreg ) SDt+jreg ∥∥∥∥∥∥ ≤ ‖S‖ ( 1 + lim T→∞ T∑ j=2 t+j−1∏ k=t+1 ∥∥W knreg∥∥) ≤ ‖S‖Mnreg. (41) Thus, denoting Q := SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg SD t+j reg ) , from eq. 41 we deduce that λmax ( lim T→∞ (QTQ) ) = lim T→∞ ρ(QTQ) ≤ lim T→∞ ∥∥QTQ∥∥ = lim T→∞ ‖Q‖2 ≤ ( ‖S‖Mnreg )2 . (42) Now, if T − t tends to∞, then eq. 37, eq. 39 and eq. 42 result in 1 = ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ = σmax( ( Ireg O T Q Onreg )) = √ λmax(Ireg + lim T→∞ (QTQ)) = ρup < ∞. (43) Remark 2.2. If ‖S‖ = 0, then ∥∥∥∂zT∂zt ∥∥∥→ 1 as T − t→∞. 
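As a purely numerical illustration of Theorem 2 (complementing the proof above, not replacing it), the following sketch forms products of Jacobians W_Ω(k) = A + W D_Ω(k) with random ReLU configurations D_Ω(k), once for a partition as in Fig. S1 (first M_reg rows of A + W equal to rows of the identity) and once for a plain contractive system without such rows; in the former case the spectral norm of the product stays bounded from below by 1 (cf. eq. 38), in the latter it decays toward zero. All parameter values are arbitrary choices of ours for this illustration.

```python
import numpy as np

def jacobian_norms(A, W, steps, rng):
    """Spectral norms of products prod_k (A + W D_Omega(k)) for random
    diagonal 0/1 ReLU-derivative matrices D_Omega(k)."""
    M = A.shape[0]
    J, norms = np.eye(M), []
    for _ in range(steps):
        D = np.diag(rng.integers(0, 2, size=M).astype(float))
        J = (A + W @ D) @ J
        norms.append(np.linalg.norm(J, 2))
    return norms

rng = np.random.default_rng(1)
M, M_reg, steps = 10, 5, 200

# Regularized partition (Fig. S1): A_ii = 1 and W rows = 0 for the memory units.
A = np.diag(np.concatenate([np.ones(M_reg), 0.5 * np.ones(M - M_reg)]))
W = 0.05 * rng.standard_normal((M, M)); np.fill_diagonal(W, 0.0)
W[:M_reg, :] = 0.0

# Plain contractive system without identity rows.
A2 = 0.5 * np.eye(M)
W2 = 0.05 * rng.standard_normal((M, M)); np.fill_diagonal(W2, 0.0)

print("regularized:  ", jacobian_norms(A, W, steps, rng)[-1])    # stays >= 1
print("unregularized:", jacobian_norms(A2, W2, steps, rng)[-1])  # decays toward 0
```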
6.1.5 Details on EM algorithm and DS reconstruction For DS reconstruction we request that the latent RNN approximates the true generating system of equations, which is a taller order than learning the mapping S → X or predicting future values in a time series (cf. sect. 3.5).2 This point has important implications for the design of models, inference algorithms and performance metrics if the primary goal is DS reconstruction rather than ‘mere’ time series forecasting.3 In this context we consider the fully probabilistic, generative RNN eq. 1. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear 2By reconstructing the governing equations we mean their approximation in the sense of the universal approximation theorems for DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998), i.e. such that the behavior of the reconstructed system becomes dynamically equivalent to that of the true underlying system. 3In this context we also remark that models which include longer histories of hidden activations (Yu et al., 2019), as in many statistical time series models (Fan & Yao, 2003), are not formally valid DS models anymore since they violate the uniqueness of flow in state space (Strogatz, 2015). state space model (Durbin & Koopman, 2012) with observation and process noise. We solve for the parameters θ = {A,W ,C,h,µ0,Σ,B,Γ} by maximum likelihood, for which an efficient Expectation-Maximization (EM) algorithm has recently been suggested (Durstewitz, 2017; Koppe et al., 2019), which we will summarize here. Since the involved integrals are not tractable, we start off from the evidence-lower bound (ELBO) to the log-likelihood which can be rewritten in various useful ways: log p(X|θ) ≥ EZ∼q[log pθ(X,Z)] +H (q(Z|X)) = log p(X|θ)−DKL (q(Z|X)‖pθ(Z|X)) =: L (θ, q) (44) In the E-step, given a current estimate θ∗ for the parameters, we seek to determine the posterior pθ (Z|X) which we approximate by a global Gaussian q(Z|X) instantiated by the maximizer (mode) Z∗ of pθ(Z|X) as an estimator of the mean, and the negative inverse Hessian around this maximizer as an estimator of the state covariance, i.e. E[Z|X] ≈ Z∗ = arg max Z log pθ(Z|X) = arg max Z [log pθ(X|Z) + log pθ(Z)− log pθ(X)] = arg max Z [log pθ(X|Z) + log pθ(Z)] , (45) since Z integrates out in pθ(X) (equivalently, this result can be derived from a Laplace approximation to the log-likelihood, log p(X|θ) ≈ log pθ(X|Z∗)+log pθ(Z∗)− 12 log |−L ∗|+const, where L∗ is the Hessian evaluated at the maximizer). We solve this optimization problem by a fixed-point iteration scheme that efficiently exploits the model’s piecewise linear structure, as detailed below. Using this approximate posterior for pθ(Z|X), based on the model’s piecewise-linear structure most of the expectation values Ez∼q [φ(z)], Ez∼q [ φ(z)zT ] , and Ez∼q [ φ(z)φ(z)T ] , could be solved for (semi-)analytically (where z is the concatenated vector form of Z, see below). In the M-step, we seek θ∗ := arg maxθ L(θ, q∗), assuming proposal density q∗ to be given from the E-step, which for a Gaussian observation model amounts to a simple linear regression problem (see Suppl. eq. 49). To force the PLRNN to really capture the underlying DS in its governing equations, we use a previously suggested (Koppe et al., 2019) stepwise annealing protocol that gradually shifts the burden of fitting the observationsX from the observation model eq. 2 to the latent RNN model eq. 
1 during training, the idea of which is to establish a mapping from latent states Z to observations X first, fixing this, and then enforcing the temporal consistency constraints implied by eq. 1 while accounting for the actual observations. Now we briefly outline the fixed-point-iteration algorithm for solving the maximization problem in eq. 45 (for more details see Durstewitz (2017); Koppe et al. (2019)). Given a Gaussian latent PLRNN and a Gaussian observation model, the joint density p(X,Z) will be piecewise Gaussian, hence eq. 45 piecewise quadratic in Z. Let us concatenate all state variables across m and t into one long column vector z = (z1,1, . . . , zM,1, . . . , z1,T , . . . , zM,T ) T, arrange matrices A, W into large MT ×MT block tri-diagonal matrices, define dΩ := ( 1z1,1>0,1z2,1>0, . . . ,1zM,T>0 )T as an indicator vector with a 1 for all states zm,t > 0 and zeros otherwise, and DΩ := diag(dΩ) as the diagonal matrix formed from this vector. Collecting all terms quadratic, linear, or constant in z, we can then write down the optimization criterion in the form Q∗Ω(z) = − 1 2 [zT ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ ) z − zT (v0 +DΩv1)− (v0 +DΩv1)T z] + const. (46) In essence, the algorithm now iterates between the two steps: 1. Given fixedDΩ, solve z∗ = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 · (v0 +DΩv1) (47) 2. Given fixed z∗, recomputeDΩ until either convergence or one of several stopping criteria (partly likelihood-based, partly to avoid loops) is reached. The solution may afterwards be refined by one quadratic programming step. Numerical experiments showed this algorithm to be very fast and efficient (Durstewitz, 2017; Koppe et al., 2019). At z∗, an estimate of the state covariance is then obtained as the inverse negative Hessian, V = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 . (48) In the M-step, using the proposal density q∗ from the E-step, the solution to the maximization problem θ∗ := arg max θ L(θ, q∗), can generally be expressed in the form θ∗ = (∑ t E [ αtβ T t ])(∑ t E [ βtβ T t ])−1 , (49) where, for the latent model, eq. 1, αt = zt and βt := [ zTt−1, φ(zt−1) T, sTt , 1 ]T ∈ R2M+K+1, and for the observation model, eq. 2, αt = xt and βt = g (zt). 6.1.6 More details on DS performance measure As argued before (Koppe et al., 2019; Wood, 2010), in DS reconstruction we require that the RNN captures the underlying attractor geometries and state space properties. This does not necessarily entail that the reconstructed system could predict future time series observations more than a few time steps ahead, and vice versa. For instance, if the underlying attractor is chaotic, even if we had the exact true system available, with a tiny bit of noise trajectories starting from the same initial condition will quickly diverge and ahead-prediction errors become essentially meaningless as a DS performance metric (Fig. S2B). To quantify how well an inferred PLRNN captured the underlying dynamics we therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between the true and reproduced probability distributions across states in state space, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. 
(1991)) rather than in precise matching of time series, DKL (ptrue(x)‖pgen(x|z)) ≈ K∑ k=1 p̂ (k) true(x) log ( p̂ (k) true(x) p̂ (k) gen(x|z) ) , (50) where ptrue(x) is the true distribution of observations across state space (not time!), pgen(x|z) is the distribution of observations generated by running the inferred PLRNN, and the sum indicates a spatial discretization (binning) of the observed state space. We emphasize that p̂(k)gen(x|z) is obtained from freely simulated trajectories, i.e. drawn from the prior p̂(z) specified by eq. 1, not from the inferred posteriors p̂(z|xtrain). In addition, to assess reproduction of time scales by the inferred PLRNN, the average MSE between the power spectra of the true and generated time series was computed, as displayed in Fig. 3B–C. The measure DKL introduced above only works for situations where the ground truth ptrue(X) is known. Following Koppe et al. (2019), we next briefly indicate how a proxy for DKL may be obtained in empirical situations where no ground truth is available. Reasoning that for a well reconstructed DS the inferred posterior pinf(z|x) given the observations should be a good representative of the prior generative dynamics pgen(z), one may use the Kullback-Leibler divergence between the distribution over latent states, obtained by sampling from the prior density pgen(z), and the (dataconstrained) posterior distribution pinf(z|x) (where z ∈ RM×1 and x ∈ RN×1), taken across the system’s state space: DKL (pinf(z|x)‖pgen(z)) = ∫ z∈RM×1 pinf(z|x) log pinf(z|x) pgen(z) dz (51) As evaluating this integral is difficult, one could further approximate pinf(z|x) and pgen(z) by Gaussian mixtures across trajectories, i.e. pinf(z|x) ≈ 1T ∑T t=1 p(zt|x1:T ) and pgen(z) ≈ 1 L ∑L l=1 p(zl|zl−1), where the mean and covariance of p(zt|x1:T ) and p(zl|zl−1) are obtained by marginalizing over the multivariate distributions p(Z|X) and pgen(Z), respectively, yielding E[zt|x1:T ], E[zl|zl−1], and covariance matrices Var(zt|x1:T ) and Var(zl|zl−1). Supplementary eq. 51 may then be numerically approximated through Monte Carlo sampling (Hershey & Olsen, 2007) by DKL (pinf(z|x)‖pgen(z)) ≈ 1 n n∑ i=1 log pinf(z (i)|x) pgen(z(i)) , z(i) ∼ pinf(z|x) (52) Alternatively, there is also a variational approximation of eq. 51 available (Hershey & Olsen, 2007): DvariationalKL (pinf(z|x)‖pgen(z)) ≈ 1 T T∑ t=1 log ∑T j=1 e −DKL(p(zt|x1:T )‖p(zj |x1:T ))∑T k=1 e −DKL(p(zt|x1:T )‖p(zk|zk−1)) , (53) where the KL divergences in the exponentials are among Gaussians for which we have an analytical expression. 6.1.7 More details on benchmark tasks and model comparisons We compared the performance of our rPLRNN to the other models summarized in Suppl. Table 1 on the following three benchmarks requiring long short-term maintenance of information (Talathi & Vartak (2016); Hochreiter & Schmidhuber (1997)): 1) The addition problem of time length T consists of 100 000 training and 10 000 test samples of 2× T input series S = {s1, . . . , sT }, where entries s1,: ∈ [0, 1] are drawn from a uniform random distribution and s2,: ∈ {0, 1} contains zeros except for two indicator bits placed randomly at times t1 < 10 and t2 < T/2. Constraints on t1 and t2 are chosen such that every trial requires a long memory of at least T/2 time steps. At the last time step T , the target output of the network is the sum of the two inputs in s1,: indicated by the 1-entries in s2,:, x target T = s1,t1 + s1,t2 . 
2) The multiplication problem is the same as the addition problem, only that the product instead of the sum has to be produced by the RNN as an output at time T , xtargetT = s1,t1 · s1,t2 . 3) The MNIST dataset (LeCun et al., 2010) consists of 60 000 training and 10 000 28 × 28 test images of hand written digits. To make this a time series problem, in sequential MNIST the images are presented sequentially, pixel-by-pixel, scanning lines from upper left to bottom-right, resulting in time series of fixed length T = 784. For training on the addition and multiplication problems, the mean squared-error loss across R samples, L = 1R ∑R n=1 ( x̂ (n) T − x (n) T )2 , between estimated and actual outputs was used, while the cross-entropy loss L = ∑R n=1 ( − ∑10 i=1 x (n) i,T log(p̂ (n) i,T ) ) was employed for sequential MNIST, where p̂i,t := p̂t (xi,t = 1|zt) = ( eBi,:zt ) N∑ j=1 eBj,:zt −1 , (54) with xi,t ∈ {0, 1}, ∑ i xi,t = 1. We remark that as long as the observation model takes the form of a generalized linear model (Fahrmeir & Tutz, 2001), as assumed here, meaning may be assigned to the latent states zm by virtue of their association with specific sets of observations xn through the factor loading matrix B. This adds another layer of model interpretability (besides its accessibility in DS terms). The large error bars in Fig. 2 at the transition from good to bad performance result from the fact that the networks mostly learn these tasks in an all-or-none fashion. While the rPLRNN in general outperformed the pure initialization-based models (iRNN, npRNN, iPLRNN), confirming that a manifold attractor subspace present at initialization may be lost throughout training, we conjecture that this difference in performance will become even more pronounced as noise levels or task complexity increase. 6.1.8 More details on single neuron model The neuron model used in section 4.2 is described by −CmV̇ = gL(V − EL) + gNam∞(V )(V − ENa) + gKn(V − EK) + gMh(V − EK) + gNMDAσ(V )(V − ENMDA) (55) ḣ = h∞(V )− h
1. What is the main contribution of the paper, and how does it address the vanishing and exploding gradient problem in piecewise linear RNNs (PLRNNs)?
2. What is the proposed regularization method, and how does it encourage plane or line attractors in the network dynamics?
3. What are the results of the numerical experiments, and how do they support the effectiveness of the proposed regularization method?
4. What is the reviewer's major concern regarding the motivation of learning line or plane attractors, and how could the authors address this concern?
5. What is the reviewer's suggestion for a missing baseline experiment to demonstrate the effectiveness of the proposed regularization method?
6. How does the proposed regularization method compare to other regularization techniques, such as L2 regularization applied to all weights, in terms of its ability to improve performance and encourage plane or line attractors?
7. What is the reviewer's question regarding the choice of Mreg, and how does the performance vary as a function of Mreg?
8. What is the reviewer's question about the symbol g in equation 2, and how is it defined or explained in the paper?
9. What is the reviewer's surprise regarding the performance of the vanilla RNN in Fig 2A, and what is their intuition for why this might be the case?
10. Are there any minor typos or formatting issues in the paper that should be addressed?
Review
This paper proposes a type of regularization for piecewise linear RNNs (PLRNNs) that encourages the network to learn line or plane attractors. The paper argues, through mathematical analysis of the regularized network as well as numerical experiments, that this regularization alleviates the vanishing and exploding gradient problem and allows PLRNNs to reconstruct nonlinear dynamical systems with multiple timescales from noisy observations.

Major concern: I found the paper interesting. My main concern has to do with the motivation of learning line or plane attractors. The paper argues that the proposed regularizer will improve performance through a specific mechanism: encouraging plane attractors in the dynamics. While the results in the experiments section are impressive, the paper as far as I can tell does not establish that the regularized PLRNNs have learned plane attractors. For example, it could be that simply adding l2 regularization on all of the weights would also lead to better performance (rather than the specific regularization proposed), or perhaps the better performance arises from some other (undiscovered) mechanism. In fact, to show that the specific form of l2 regularization is what is useful, I think having a PLRNN with standard l2 regularization (applied to all of the weights) is a critical missing baseline. To convince me that the benefit is really due to the given motivation (encouraging plane attractors), I want to see evidence that regularized PLRNNs have learned plane (or line) attractors, compared to unregularized PLRNNs (or better yet, PLRNNs with l2 regularization applied to all of the weights). This can be demonstrated in a number of ways, for example by showing eigenvalues of the Jacobian around fixed points of trained PLRNNs with and without the proposed regularization, on the three tasks in Fig. 2. If this was conclusively demonstrated, I would happily increase my rating.

For comparison, recent work showed that RNNs of multiple types (including vanilla RNNs and LSTMs) learned line attractors when solving an NLP task [1]. These line attractors were found in the networks after training, and did not have any special regularization encouraging them. I would appreciate if the authors would add some discussion comparing their regularization to these discovered line attractors.

[1] Maheswaranathan et al, NeurIPS 2019 (http://papers.nips.cc/paper/9700-reverse-engineering-recurrent-networks-for-sentiment-classification-reveals-line-attractor-dynamics)

Other concerns:
- How is Mreg (the number of regularized dimensions) chosen? How does the performance vary as a function of Mreg?
- What's g in eq(2)?
- I was a little surprised that the vanilla RNN worked so well in Fig 2A. I would have expected an LSTM to work as well, if not better. Do the authors have intuition for why this is?

Minor typos:
- Quotes around 'automatize' in the first and last paragraphs of section 1 are incorrect: should be `automatize' and `classical'. In LaTeX, use the backtick character (`) for the first quote.
- The abbreviation "RNN" is used as if it is plural, but it is more commonly singular (e.g. "RNNs seem like" instead of "RNN seem like"; or "RNNs in their vanilla form" instead of "RNN in their vanilla form").
ICLR
Title Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies Abstract A main theoretical interest in biology and physics is to identify the nonlinear dynamical system (DS) that generated observed time series. Recurrent Neural Networks (RNNs) are, in principle, powerful enough to approximate any underlying DS, but in their vanilla form suffer from the exploding vs. vanishing gradients problem. Previous attempts to alleviate this problem resulted either in more complicated, mathematically less tractable RNN architectures, or strongly limited the dynamical expressiveness of the RNN. Here we address this issue by suggesting a simple regularization scheme for vanilla RNNs with ReLU activation which enables them to solve long-range dependency problems and express slow time scales, while retaining a simple mathematical structure which makes their DS properties partly analytically accessible. We prove two theorems that establish a tight connection between the regularized RNN dynamics and its gradients, illustrate on DS benchmarks that our regularization approach strongly eases the reconstruction of DS which harbor widely differing time scales, and show that our method is also en par with other long-range architectures like LSTMs on several tasks. 1 INTRODUCTION Theories in the natural sciences are often formulated in terms of sets of stochastic differential or difference equations, i.e. as stochastic dynamical systems (DS). Such systems exhibit a range of common phenomena, like (limit) cycles, chaotic attractors, or specific bifurcations, which are the subject of nonlinear dynamical systems theory (DST; Strogatz (2015); Ott (2002)). A long-standing desire is to retrieve the generating dynamical equations directly from observed time series data (Kantz & Schreiber, 2004), and thus to ‘automatize’ the laborious process of scientific theory building to some degree. A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (Chen et al., 2017; Champion et al., 2019; Ayed et al., 2019; Koppe et al., 2019; Hamilton et al., 2017; Razaghi & Paninski, 2019; Hernandez et al., 2020). Often these are based on sufficiently expressive series expansions for approximating the unknown system of generative equations, such as polynomial basis expansions (Brunton et al., 2016; Champion et al., 2019) or recurrent neural networks (RNNs) (Vlachas et al., 2018; Hernandez et al., 2020; Durstewitz, 2017; Koppe et al., 2019). Formally, RNNs are (usually discrete-time) nonlinear DS that are dynamically universal in the sense that they can approximate to arbitrary precision the flow field of any other DS on compact sets of the real space (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Hanson & Raginsky, 2020). Hence, RNNs seem like a good choice for reconstructing – in this sense of dynamically equivalent behavior – the set of governing equations underlying real time series data. However, RNNs in their vanilla form suffer from the ‘vanishing or exploding gradients’ problem (Hochreiter & Schmidhuber, 1997; Bengio et al., 1994): During training, error gradients tend to either exponentially explode or decay away across successive time steps, and hence vanilla RNNs face severe problems in capturing long time scales or long-range dependencies in the data. Specially designed RNN architectures equipped with gating mechanisms and linear memory cells have been proposed for mitigating this issue (Hochreiter & Schmidhuber, 1997; Cho et al., 2014). 
However, from a DST perspective, simpler models that can be more easily analyzed and interpreted in DS terms (Monfared & Durstewitz, 2020a;b), and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS (Koppe et al., 2019; Hernandez et al., 2020; Zhao & Park, 2020), would be preferable. More recent solutions to the vanishing vs. exploding gradient problem attempt to retain the simplicity of vanilla RNNs by initializing or constraining the recurrent weight matrix to be the identity (Le et al., 2015), orthogonal (Henaff et al., 2016; Helfrich et al., 2018) or unitary (Arjovsky et al., 2016). Merely initialization-based solutions, however, may be unstable and quickly dissolve during training, while orthogonal or unitary constraints are too restrictive for reconstructing DS, and more generally from a computational perspective as well (Kerg et al., 2019): For instance, neither chaotic behavior (which requires diverging directions) nor multi-stability, that is the coexistence of several distinct attractors, is possible.

Here we therefore suggest a different solution to the problem which takes inspiration from computational neuroscience: Supported by experimental evidence (Daie et al., 2015; Brody et al., 2003), line or plane attractors have been suggested as a dynamical mechanism for maintaining arbitrary information in working memory (Seung, 1996; Machens et al., 2005), a goal-related active form of short-term memory. A line or plane attractor is a continuous set of marginally stable fixed points to which the system’s state converges from some neighborhood, while along the line itself there is neither convergence nor divergence (Fig. 1A). Hence, a line attractor will perform a perfect integration of inputs and retain updated states indefinitely, while a slightly detuned line attractor will equip the system with arbitrarily slow time constants (Fig. 1B). This latter configuration has been suggested as a dynamical basis for neural interval timing (Durstewitz, 2003; 2004). The present idea is to exploit this dynamical setup for long short-term memory and arbitrarily slow time scales by forcing part of the RNN’s subspace toward a plane (line) attractor configuration through specifically designed regularization terms.

Specifically, our goal here is not so much to beat the state of the art on long short-term memory tasks, but rather to address the exploding vs. vanishing gradient problem within a simple, dynamically tractable RNN, optimized for DS reconstruction and interpretation. For this we build on piecewise-linear RNNs (PLRNNs) (Koppe et al., 2019; Monfared & Durstewitz, 2020b) which employ ReLU activation functions. PLRNNs have a simple mathematical structure (see eq. 1) which makes them dynamically interpretable in the sense that many geometric properties of the system’s state space can in principle be computed analytically, including fixed points, cycles, and their stability (Suppl. 6.1.2; Koppe et al. (2019); Monfared & Durstewitz (2020a)), i.e. they do not require numerical techniques (Sussillo & Barak, 2013).
Moreover, PLRNNs constitute a type of piecewise linear (PWL) map for which many important bifurcations have been comparatively well characterized (Monfared & Durstewitz, 2020a; Avrutin et al., 2019). PLRNNs can furthermore be translated into equivalent continuous time ordinary differential equation (ODE) systems (Monfared & Durstewitz, 2020b) which comes with further advantages for analysis, e.g. continuous flow fields (Fig. 1A,B). We retain the PLRNN’s structural simplicity and analytical tractability while mitigating the exploding vs. vanishing gradient problem by adding special regularization terms for a subset of PLRNN units to the loss function. These terms are designed to push the system toward line attractor configurations, without strictly enforcing them, along some – but not all – directions in state space. We further establish a tight mathematical relationship between the PLRNN dynamics and the behavior of its gradients during training. Finally, we demonstrate that our approach outperforms LSTM and other, initialization-based, methods on a number of ‘classical’ machine learning benchmarks (Hochreiter & Schmidhuber, 1997). Much more importantly in the present DST context, we demonstrate that our new regularization-supported inference efficiently captures all relevant time scales when reconstructing challenging nonlinear DS with multiple short- and long-range phenomena. 2 RELATED WORK Dynamical systems reconstruction. From a natural science perspective, the goal of reconstructing or identifying the underlying DS is substantially more ambitious than (and different from) building a system that ‘merely’ yields good ahead predictions: In DS identification we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties (see section 3.5, Fig. S2; Kantz & Schreiber (2004)). Earlier work using RNNs for DS reconstruction (Roweis & Ghahramani, 2002; Yu et al., 2005) mainly focused on inferring the posterior over latent trajectories Z = {z1, . . . ,zT } given time series data X = {x1, . . . ,xT }, p(Z|X), and on ahead predictions (Lu et al., 2017), as does much of the recent work on variational inference of DS (Duncker et al., 2019; Zhao & Park, 2020; Hernandez et al., 2020). Although this enables insight into the dynamics along the empirically observed trajectories, both – posterior inference and good ahead predictions – do not per se guarantee that the inferred models can generate the underlying attractor geometries on their own (see Fig. S2, Koppe et al. (2019)). In contrast, if fully generative reconstruction of the underlying DS in this latter sense were achieved, formal analysis or simulation of the resulting RNN equations could provide a much deeper understanding of the dynamical mechanisms underlying empirical observations (Fig. 1 C). Some approaches geared toward this latter goal of full DS reconstruction make specific structural assumptions about the form of the DS equations (‘white box approach’; Meeds et al. (2019); Raissi (2018); Gorbach et al. (2017)), e.g. based on physical or biological domain knowledge, and focus on estimating the system’s latent states and parameters, rather than approximating an unknown DS based on the observed time series information alone (‘black box approach’). Others (Trischler & D’Eleuterio, 2016; Brunton et al., 2016; Champion et al., 2019) attempt to approximate the flow field, obtained e.g. 
by numerical differentiation, directly through basis expansions or neural networks. However, numerical derivatives are problematic for their high variance and other numerical issues (Raissi, 2018; Baydin et al., 2018; Chen et al., 2017). Another factor to consider is that in many biological systems like the brain the intrinsic dynamics are highly stochastic with many noise sources, like probabilistic synaptic release (Stevens, 2003). Models that do not explicitly account for dynamical process noise (Ayed et al., 2019; Champion et al., 2019; Rudy et al., 2019) are therefore less suited and more vulnerable to model misspecification. Finally, some fully probabilistic models for DS reconstruction based on GRU (Fraccaro et al., 2016), LSTM (Zheng et al., 2017; Vlachas et al., 2018), or radial basis function (Zhao & Park, 2020) networks, are not easily interpretable and amenable to DS analysis in the sense defined in sect. 3.3. Most importantly, none of these previous approaches consider the long-range dependency problem within more easily tractable RNNs for DS. Long-range dependency problems in RNNs. Error gradients in vanilla RNNs tend to either explode or vanish due to the large product of derivative terms that results from recursive application of the chain rule over time steps (Hochreiter, 1991; Bengio et al., 1994; Hochreiter & Schmidhuber, 1997). To address this issue, RNNs with gated memory cells (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) have been specifically designed, but their more complicated mathematical structure makes them less amenable to a systematic DS analysis. Even simple objects like fixed points of these systems have to be found by numerical techniques (Sussillo & Barak, 2013; Jordan et al., 2019). Thus, approaches which retain the simplicity of vanilla RNNs while solving the exploding vs. vanishing gradients problem would be desirable. Recently, Le et al. (2015) observed that initialization of the recurrent weight matrixW to the identity in ReLU-based RNNs may yield performance en par with LSTMs on standard machine learning benchmarks. Talathi & Vartak (2016) expanded on this idea by initializing the recurrence matrix such that its largest absolute eigenvalue is 1. Later work en- forced orthogonal (Henaff et al., 2016; Helfrich et al., 2018; Jing et al., 2019) or unitary (Arjovsky et al., 2016) constraints on the recurrent weight matrix during training. While this appears to yield long-term memory performance sometimes superior to that of LSTMs (but see (Henaff et al., 2016)), these networks are limited in their computational power (Kerg et al., 2019). This may be a consequence of the fact that RNNs with orthogonal recurrence matrix are quite restricted in the range of dynamical phenomena they can produce, e.g. chaotic attractors are not possible since (locally) diverging eigen-directions are disabled. Our approach therefore is to establish line/plane attractors only along some but not all directions in state space, and to only push the RNN toward these configurations but not strictly enforce them, such that convergence or (local) divergence of RNN dynamics is still possible. We furthermore implement these concepts through regularization terms in the loss functions, rather than through mere initialization. This way plane attractors are encouraged throughout training without fading away. 
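To give a minimal numerical intuition for why a (slightly detuned) line-attractor direction yields long memory, the following toy sketch simulates a single linear unit; it is purely illustrative and not part of the model or experiments in this paper, and the scalar a is a hypothetical parameter: with a = 1 the state is retained indefinitely, while a slightly detuned a < 1 gives an effective time constant of roughly 1/(1 − a) steps.

```python
import numpy as np

def simulate(a, z0=1.0, T=5001):
    """Iterate the scalar linear map z_{t+1} = a * z_t for T steps."""
    z = np.empty(T)
    z[0] = z0
    for t in range(1, T):
        z[t] = a * z[t - 1]
    return z

# a = 1.0: marginally stable (line-attractor-like) direction -> the state is retained indefinitely
print(simulate(1.0)[-1])      # 1.0

# a = 0.999: slightly detuned -> slow decay with a time constant of roughly 1/(1 - a) = 1000 steps
print(simulate(0.999)[1000])  # approx. exp(-1) = 0.37
```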
3 MODEL FORMULATION AND THEORETICAL ANALYSIS 3.1 BASIC MODEL FORMULATION Assume we are given two multivariate time series S = {st} and X = {xt}, one we will denote as ‘inputs’ (S) and the other as ‘outputs’ (X). In the ‘classical’ (supervised) machine learning setting, we usually wish to map S on X through a RNN with latent state equation zt = Fθ (zt−1, st) and outputs xt ∼ pλ (xt|zt), as for instance in the ‘addition problem’ (Hochreiter & Schmidhuber, 1997). In DS reconstruction, in contrast, we usually have a dense time seriesX from which we wish to infer (unsupervised) the underlying DS, where S may provide an additional forcing function or sparse experimental inputs or perturbations. While our focus in this paper is on this latter task, DS reconstruction, we will demonstrate that our approach brings benefits in both these settings. Here we consider for the latent model a PLRNN (Koppe et al., 2019) which takes the form zt = Azt−1 +Wφ(zt−1) +Cst + h+ εt, εt ∼ N (0,Σ), (1) where zt ∈ RM×1 is the hidden state (column) vector of dimensionM ,A ∈ RM×M a diagonal and W ∈ RM×M an off-diagonal matrix, st ∈ RK×1 the external input of dimension K, C ∈ RM×K the input mapping, h ∈ RM×1 a bias, and εt a Gaussian noise term with diagonal covariance matrix diag(Σ) ∈ RM+ . The nonlinearity φ(z) is a ReLU, φ(z)i = max(0, zi), i ∈ {1, . . . ,M}. This specific formulation represents a discrete-time version of firing rate (population) models as used in computational neuroscience (Song et al., 2016; Durstewitz, 2017; Engelken et al., 2020). We will assume that the latent RNN states zt are coupled to the actual observations xt through a simple observation model of the form xt = Bg(zt) + ηt, ηt ∼ N (0,Γ) (2) in the case of observations xt ∈ RN×1, whereB ∈ RN×M is a factor loading matrix, g some (usually monotonic) nonlinear transfer function (e.g., ReLU), and diag(Γ) ∈ RN+ the diagonal covariance matrix of the Gaussian observation noise, or through a softmax function in case of categorical observations xi,t ∈ {0, 1} (see Suppl. 6.1.7 for details). 3.2 REGULARIZATION APPROACH First note that by letting A = I , W = 0, and h = 0 in eq. 1, every point in z space will be a marginally stable fixed point of the system, leading it to perform a perfect integration of external inputs as in parametric working memory (Machens et al., 2005; Brody et al., 2003).1 This is similar in spirit to Le et al. (2015) who initialized RNN parameters such that it performs an identity mapping for zi,t ≥ 0. However, here 1) we use a neuroscientifically motivated network architecture (eq. 1) that enables the identity mapping across the variables’ entire support, zi,t ∈ [−∞,+∞], which we conjecture will be of advantage for establishing long short-term memory properties, 2) we encourage 1Note that this very property of marginal stability required for input integration also makes the system sensitive to noise perturbations directly on the manifold attractor. Interestingly, this property has indeed been observed experimentally for real neural integrator systems (Major et al., 2004; Mizumori & Williams, 1993). this mapping only for a subset Mreg ≤M of units (Fig. S1), leaving others free to perform arbitrary computations, and 3) we stabilize this configuration throughout training by introducing a specific L2 regularization for parameters A, W , and h in eq. 1. When embedded into a larger, (locally) convergent system, we will call this configuration more generally a manifold attractor. 
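As a concrete reference for eq. 1 and eq. 2 above, here is a minimal NumPy sketch of one latent update and one observation; all dimensions and parameter values are arbitrary placeholders, not the settings used in the experiments. The regularization itself is described next.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 6, 3, 2                                   # latent, observation, and input dimensions (placeholders)

A = np.diag(rng.uniform(0.2, 0.9, M))               # diagonal auto-regression matrix
W = 0.1 * rng.normal(size=(M, M)); np.fill_diagonal(W, 0.0)   # off-diagonal coupling
C = 0.1 * rng.normal(size=(M, K))                   # input mapping
h = 0.1 * rng.normal(size=M)                        # bias
B = 0.5 * rng.normal(size=(N, M))                   # factor loading matrix
Sigma = 1e-3 * np.ones(M)                           # diagonal process noise variances
Gamma = 1e-3 * np.ones(N)                           # diagonal observation noise variances

def plrnn_step(z_prev, s_t):
    """Latent update, eq. 1: z_t = A z_{t-1} + W phi(z_{t-1}) + C s_t + h + eps_t, with phi = ReLU."""
    eps = rng.normal(0.0, np.sqrt(Sigma))
    return A @ z_prev + W @ np.maximum(0.0, z_prev) + C @ s_t + h + eps

def observe(z_t, g=lambda z: np.maximum(0.0, z)):
    """Observation model, eq. 2: x_t = B g(z_t) + eta_t."""
    eta = rng.normal(0.0, np.sqrt(Gamma))
    return B @ g(z_t) + eta

z = np.zeros(M)
for t in range(10):                                  # simulate a short trajectory without external input
    z = plrnn_step(z, s_t=np.zeros(K))
    x = observe(z)
print(x.shape)                                       # (3,)
```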
That way, we divide the units into two types, where the regularized units serve as a memory that tends to decay very slowly (depending on the size of the regularization term), while the remaining units maintain the flexibility to approximate any underlying DS, yet retaining the simplicity of the original PLRNN (eq. 1). Specifically, the following penalty is added to the loss function (Fig. S1): Lreg = τA Mreg∑ i=1 (Ai,i − 1)2 + τW Mreg∑ i=1 M∑ j=1 j 6=i W 2i,j + τh Mreg∑ i=1 h2i (3) (Recall from sect. 3.1 thatA is a diagonal andW is an off-diagonal matrix.) While this formulation allows us to trade off, for instance, the tendency toward a manifold attractor (A → I , h → 0) vs. the sensitivity to other units’ inputs (W → 0), for all experiments performed here a common value, τA = τW = τh = τ , was assumed for the three regularization factors. We will refer to (z1 . . . zMreg ) as the regularized (‘memory’) subsystem, and to (zMreg+1 . . . zM ) as the non-regularized (‘computational’) subsystem. Note that in the limit τ →∞ exact manifold attractors would be enforced. 3.3 THEORETICAL ANALYSIS We will now establish a tight connection between the PLRNN dynamics and its error gradients. Similar ideas appeared in Chang et al. (2019), but these authors focused only on fixed point dynamics, while here we will consider the more general case including cycles of any order. First, note that by interpretability of model eq. 1 we mean that it is easily amenable to a rigorous DS analysis: As shown in Suppl. 6.1.2, we can explicitly determine all the system’s fixed points and cycles and their stability. Moreover, as shown in Monfared & Durstewitz (2020b), we can – under certain conditions – transform the PLRNN into an equivalent continuous-time (ODE) piecewise-linear system, which brings further advantages for DS analysis. Let us rewrite eq. 1 in the form zt = F (zt−1) = (A+WDΩ(t−1))zt−1 + h := WΩ(t−1) zt−1 + h, (4) where DΩ(t−1) is the diagonal matrix of outer derivatives of the ReLU function evaluated at zt−1 (see Suppl. 6.1.2), and we ignore external inputs and noise terms for now. Starting from some initial condition z1, we can recursively develop zT as (see Suppl. 6.1.2 for more details): zT = F T−1(z1) = T−1∏ i=1 WΩ(T−i) z1 + [ T−1∑ j=2 j−1∏ i=1 WΩ(T−i) + I ] h. (5) Likewise, for some common loss function L(A,W ,h) = ∑T t=2 Lt, we can recursively develop the derivatives w.r.t. weights wmk (and similar for components ofA and h) as ∂L ∂wmk = T∑ t=2 ∂Lt ∂zt ∂zt ∂wmk , with ∂zt ∂wmk = 1(m,k)DΩ(t−1) zt−1 (6) + t−2∑ j=2 ( j−1∏ i=1 WΩ(t−i) ) 1(m,k)DΩ(t−j)zt−j + t−2∏ i=1 WΩ(t−i) ∂z2 ∂wmk , where 1(m,k) is an M ×M indicator matrix with a 1 for the (m, k)’th entry and 0 everywhere else. Observing that eqs. 5 and 6 contain similar product terms which determine the system’s long-term behavior, our first theorem links the PLRNN dynamics to its total error gradients: Theorem 1. Consider a PLRNN given by eq. 4, and assume that it converges to a stable fixed point, say zt∗1 := z∗1, or a k-cycle (k > 1) with the periodic points {zt∗k , zt∗k−1, · · · , zt∗k−(k−1)}, for T →∞. Suppose that, for k ≥ 1 and i ∈ {0, 1, · · · , k − 1}, σmax(WΩ(t∗k−i)) = ∥∥WΩ(t∗k−i)∥∥ < 1, where WΩ(t∗k−i) denotes the Jacobian of the system at zt∗k−i and σmax indicates the largest singular value of a matrix. Then, the 2-norms of the tensors collecting all derivatives, ∥∥∂zT ∂W ∥∥ 2 ,∥∥∂zT ∂A ∥∥ 2 , ∥∥∂zT ∂h ∥∥ 2 , will be bounded from above, i.e. will not diverge for T →∞. Proof. See Suppl. sect. 6.1 (subsection 6.1.3). 
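Referring back to the penalty in eq. 3 (sect. 3.2), a small PyTorch-style sketch of how such a term could be computed might look as follows; the function name and the toy tensors are illustrative, and a single shared factor tau = tau_A = tau_W = tau_h is assumed, as in the experiments.

```python
import torch

def manifold_attractor_penalty(A_diag, W, h, M_reg, tau):
    """Sketch of the penalty in eq. 3, with one shared factor tau = tau_A = tau_W = tau_h.

    A_diag: (M,) diagonal of A;  W: (M, M) with zero diagonal;  h: (M,) bias;
    the first M_reg states are taken to be the regularized ('memory') units."""
    l_A = ((A_diag[:M_reg] - 1.0) ** 2).sum()   # pull A_ii -> 1 for the memory units
    l_W = (W[:M_reg, :] ** 2).sum()             # pull the coupling into the memory units -> 0 (diag of W is 0)
    l_h = (h[:M_reg] ** 2).sum()                # pull their biases -> 0
    return tau * (l_A + l_W + l_h)

# toy usage with placeholder tensors
M, M_reg = 6, 3
A_diag, h = torch.rand(M), torch.zeros(M)
W = 0.1 * torch.randn(M, M); W.fill_diagonal_(0.0)
print(manifold_attractor_penalty(A_diag, W, h, M_reg, tau=1e-2))
```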
While Theorem 1 is a general statement about PLRNN dynamics and total gradients, our next theorem more specifically provides conditions under which Jacobians linking temporally distant states zT and zt, T t, will neither vanish nor explode in the regularized PLRNN: Theorem 2. Assume a PLRNN with matrix A + W partitioned as in Fig. S1, i.e. with the first Mreg rows corresponding to those of an M ×M identity matrix. Suppose that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1, i.e. converges to a kcycle with k ≥ 1. Then, for the full system (z1 . . . zM ), the 2-norm of the Jacobians connecting temporally distal states zT and zt will be bounded from above and below for all T > t, i.e. ∞ > ρup ≥ ∥∥∥∂zT∂zt ∥∥∥2 = ∥∥∥∏t<k≤T WΩ(k)∥∥∥2 ≥ ρlow > 0. In particular, for state variables ziT and zjt such that i ∈ {Mreg + 1, · · · ,M} and j ∈ {1, · · · ,Mreg}, i.e. that connect states from the ‘memory’ to those of the ‘computational’ subsystem, one also has∞ > λup ≥ ∣∣∣∂ziT∂zjt ∣∣∣ ≥ λlow > 0 as T − t→∞, i.e. these derivatives will never vanish nor explode. Proof. See Suppl. sect. 6.1 (subsection 6.1.4). The bounds ρup, ρlow, λup, λlow, are given in Suppl. sect. 6.1.4. We remark that when the regularization conditions are not exactly met, i.e. when parametersA andW slightly deviate from those in Fig. S1, memory (and gradients) may ultimately dissipate, but only very slowly, as actually required for temporal processes with very slow yet not infinite time constants (Fig. 1B). 3.4 TRAINING PROCEDURES For the (supervised) machine learning problems, all networks were trained by stochastic gradient descent (SGD) to minimize the squared-error loss between estimated and actual outputs for the addition and multiplication problems, and the cross entropy loss for sequential MNIST (see Suppl. 6.1.7). Adam (Kingma & Ba, 2014) from PyTorch package (Paszke et al., 2017) was used as the optimizer, with a learning rate of 0.001, gradient clip parameter of 10, and batch size of 500. SGD was stopped after 100 epochs and the fit with the lowest loss across all epochs was taken, except for LSTM which was allowed to run for up to 200 epochs as it took longer to converge (Fig. S10). For comparability, the PLRNN latent state dynamics eq. 1 was assumed to be deterministic in this setting (i.e., Σ = 0), g(zt) = zt and Γ = IN in eq. 2. For the regularized PLRNN (rPLRNN), penalty eq. 3 was added to the loss function. For the (unsupervised) DS reconstruction problems, the fully probabilistic, generative RNN eq. 1 was considered. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear state space model (Durbin & Koopman, 2012) with observation and process noise, and an Expectation-Maximization (EM) algorithm that efficiently exploits the model’s piecewise linear structure (Durstewitz, 2017; Koppe et al., 2019) was used to solve for the parameters by maximum likelihood. Details are given in Suppl. 6.1.5. All code used here will be made openly available at https://github.com/DurstewitzLab/reg-PLRNN. 3.5 PERFORMANCE MEASURES For the machine learning benchmarks we employed the same criteria as used for optimization (MSE or cross-entropy, Suppl. 6.1.7) as performance metrics, evaluated across left-out test sets. In addition, we report the relative frequency Pcorrect of correctly predicted trials across the test set (see Suppl. 6.1.7 for details). 
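To make the supervised setup of sect. 3.4 concrete, below is a compact, self-contained PyTorch sketch of a deterministic PLRNN trained with Adam (learning rate 0.001, gradient clipping at 10); random toy data stand in for the actual addition-problem trials described in Suppl. 6.1.7, and the reference implementation is the one in the repository linked above.

```python
import torch
from torch import nn

class PLRNN(nn.Module):
    """Deterministic PLRNN (eq. 1 with Sigma = 0), linear read-out of the last state (eq. 2 with g(z) = z)."""
    def __init__(self, M=40, K=2, N=1):
        super().__init__()
        self.A_diag = nn.Parameter(torch.rand(M))            # diagonal of A
        self.W = nn.Parameter(0.1 * torch.randn(M, M))       # coupling (diagonal masked out in forward)
        self.C = nn.Parameter(0.1 * torch.randn(M, K))
        self.h = nn.Parameter(torch.zeros(M))
        self.B = nn.Parameter(0.1 * torch.randn(N, M))

    def forward(self, s):                                    # s: (batch, T, K)
        z = torch.zeros(s.shape[0], self.A_diag.shape[0], device=s.device)
        W = self.W - torch.diag(torch.diag(self.W))          # keep W off-diagonal
        for t in range(s.shape[1]):
            z = self.A_diag * z + torch.relu(z) @ W.T + s[:, t] @ self.C.T + self.h
        return z @ self.B.T                                  # read-out at the last time step

model = PLRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # Adam with learning rate 0.001 (sect. 3.4)

# random stand-in for one batch of 500 addition-problem trials (the real task is in Suppl. 6.1.7)
S = torch.rand(500, 50, 2)
X = S[:, :, :1].sum(dim=1)                                   # dummy targets, only to make the loop runnable

for epoch in range(3):                                       # the paper trains for 100 epochs
    pred = model(S)
    loss = torch.mean((pred - X) ** 2)                       # squared-error loss
    # + manifold_attractor_penalty(...) from the sketch after eq. 3 would be added here for the rPLRNN
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)   # gradient clip parameter of 10
    optimizer.step()
```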
For DS reconstruction problems, it is not sufficient or even sensible to judge a method’s ability to infer the underlying DS purely based on some form of (ahead-)prediction error like the MSE defined on the time series itself (Ch.12 in Kantz & Schreiber (2004)). Rather, we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties. This is not automatically guaranteed for a model that yields agreeable ahead predictions on a time series (Fig. S2A; cf. Koppe et al. (2019); Wood (2010)). We therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between true and reproduced probability distributions across states in state space to quantify how well an inferred PLRNN captured the underlying dynamics, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. (1991)) (see Suppl. 6.1.6 for more details). 4 NUMERICAL EXPERIMENTS 4.1 MACHINE LEARNING BENCHMARKS Although not our prime interest here, we first examined how the rPLRNN would fare on supervised machine learning benchmarks where inputs (S) are to be mapped onto target outputs (X) across long time spans (i.e., requiring long short-term maintenance of information), namely the addition and multiplication problems (Talathi & Vartak, 2016; Hochreiter & Schmidhuber, 1997), and sequential MNIST (LeCun et al., 2010). Details of these experimental setups are in Suppl. 6.1.7. Performance of the rPLRNN (eq. 1, eq. 3) on all 3 benchmarks was compared to several other models summarized in Suppl. Table 1. To achieve a meaningful comparison, all models have the same number M = 40 (based on Fig. S3) of hidden states (which gives LSTMs overall about 4 times as many trainable parameters). On all three problems the rPLRNN outperforms all other tested methods, including LSTM, iRNN (RNN initialized by the identity matrix as in Le et al. (2015)), and a version of the orthogonal RNN (oRNN; Vorontsov et al. (2017)) (similar results were obtained for other settings of M and batch size). LSTM performs even worse than iRNN and iPLRNN (PLRNN initialized with the identity as the iRNN), although it had 4 times as many parameters and was given twice as many epochs (and thus opportunities) for training, as it also took longer to converge (Fig. S10). In addition, the iPLRNN tends to perform slightly better than the iRNN on all three problems, suggesting that the specific structure eq. 1 of the PLRNN that allows for a manifold attractor across the variables’ full range may be advantageous to begin with, while the regularization further improves performance. 4.2 NUMERICAL EXPERIMENTS ON DYNAMICAL SYSTEMS WITH DIFFERENT TIME SCALES While it is encouraging that the rPLRNN may perform even better than several previous approaches to the vanishing vs. exploding gradients problem, our major goal here was to examine whether our regularization scheme would help with the (unsupervised) identification of DS that harbor widely different time scales. To test this, we used a biophysical, bursting cortical neuron model with one voltage (V ) and two conductance recovery variables (see Durstewitz (2009)), one slow (h) and one fast (n; Suppl. 6.1.8). Reproduction of this DS is challenging since it produces very fast spikes on top of a slow nonlinear oscillation (Fig. 3D). Only short time series (as in scientific data) of length T = 1500 from this model were provided for training. rPLRNNs with M = {8 . . . 
18} states were trained, with the regularization factor varied within τ ∈ {0, 10^1, 10^2, 10^3, 10^4, 10^5}/T. Note that for τ = 0 (no regularization), the approach reduces to the standard PLRNN (Koppe et al., 2019). Fig. 3A confirms our intuition that stronger regularization leads to better DS reconstruction as assessed by the KL divergence between true and generated state distributions (similar results were obtained with ahead-prediction errors as a metric, Fig. S4A), accompanied by a corresponding decrease in the MSE between the power spectra of true (Suppl. eq. 55) and generated (rPLRNN) voltage traces (Fig. 3B). Fig. 3D gives an example of voltage traces (V) and the slower of the two gating variables (h; see Fig. S5A for variable n) freely simulated (i.e., sampled) from the autonomously running rPLRNN. This illustrates that our model is in principle capable of capturing both the stiff spike dynamics and the slower oscillations in the second gating variable at the same time. Fig. 3C provides more insight into how the regularization worked: While the high-frequency components (> 50 Hz) related to the repetitive spiking activity hardly benefited from increasing τ, there was a strong reduction in the MSE computed on the power spectrum for the lower frequency range (≤ 50 Hz), suggesting that increased regularization helps to map slowly evolving components of the dynamics. This result holds more generally, as shown in Fig. S6 for another DS example. In contrast, an orthogonality (Vorontsov et al., 2017) or plain L2 constraint on weight matrices did not help at all on this problem (Fig. S4B). Further insight into the dynamical mechanisms by which the rPLRNN solves the problem can be obtained by examining the latent dynamics: As shown in Fig. 3E (see also Fig. S5), regularized states indeed help to map the slow components of the dynamics, while non-regularized states focus on the fast spikes. These observations further corroborate the findings in Fig. 3C and Fig. S6C.

4.3 REGULARIZATION PROPERTIES AND MANIFOLD ATTRACTORS

In Figs. 2 and 3 we demonstrated that the rPLRNN is able to solve problems and reconstruct dynamics that involve long-range dependencies. Figs. 3A,B furthermore directly confirm that solutions improve with stronger regularization, while Figs. 3C,E give insight into the mechanism by which the regularization works. To further verify empirically that our specific form of regularization, eq. 3, is important, Fig. 2 also shows results for a PLRNN with a standard L2 norm on a fraction of Mreg/M = 0.5 states (L2pPLRNN). Fig. S7 provides additional results for PLRNNs with an L2 norm on all weights and for vanilla L2-regularized RNNs. All these systems fell far behind the performance of the rPLRNN on all tasks tested. Moreover, Fig. 4 reveals that the specific regularization proposed indeed encourages manifold attractors, and that this is not achieved by a standard L2 regularization: In contrast to L2PLRNN, as the regularization factor τ is increased, more and more of the maximum absolute eigenvalues around the system’s fixed points (computed according to eq. 8, sect. 6.1.2) cluster on or near 1, indicating directions of marginal stability in state space. Also, the deviations from 1 become smaller for strongly regularized PLRNNs (Fig. 4B,D), indicating a higher precision in attractor tuning. Fig. S9 in addition confirms that rPLRNN parameters are increasingly driven toward values that would support manifold attractors with stronger regularization.
Fig. 3E furthermore suggests that both regularized and non-regularized states are utilized to map the full dynamics. But how should the ratio Mreg/M be chosen in practice? While for the problems here this meta-parameter was determined through ‘classical’ grid-search and cross-validation, Figs. S3 C–E suggest that the precise setting of Mreg/M is actually not overly important: Nearly optimal performance is achieved for a broader range Mreg/M ∈ [0.3, 0.6] on all problems tested. Hence, in practice, setting Mreg/M = 0.5 should mostly work fine.

5 CONCLUSIONS

In this work we introduced a simple solution to the long short-term memory problem in RNNs that retains the simplicity and tractability of PLRNNs, yet does not curtail their universal computational capabilities (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D’Eleuterio, 2016). We achieved this by adding regularization terms to the loss function that encourage the system to form a ‘memory subspace’ (Seung, 1996; Durstewitz, 2003) which, if unperturbed, would store arbitrary values for arbitrarily long periods. At the same time we did not rigorously enforce this constraint, which allowed the system to capture slow time scales by slightly departing from a perfect manifold attractor. In neuroscience, this has been discussed as a dynamical mechanism for regulating the speed of flow in DS and learning of arbitrary time constants not naturally included qua RNN design (Durstewitz, 2003; 2004) (Fig. 1B). While other RNN architectures, including vanilla RNNs, can, in principle, also develop line attractors to solve specific tasks (Maheswaranathan et al., 2019), they are generally much harder to train to achieve this and may exhibit less precise attractor tuning (cf. Fig. 4), which is needed to bridge long time scales (Durstewitz, 2003). Moreover, part of the PLRNN’s latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics (see also Fig. S11 for a chaotic example). We showed that the rPLRNN is on par with or outperforms initialization-based approaches, orthogonal RNNs, and LSTMs on a number of classical benchmarks. More importantly, however, the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction. Regularization schemes similar to the one proposed here (eq. 3) may, in principle, also be designed for other architectures, but the convenient mathematical form of the PLRNN makes their implementation particularly powerful and straightforward.

ACKNOWLEDGEMENTS This work was funded by grants from the German Research Foundation (DFG) to DD (Du 354/10-1, Du 354/8-2 within SPP 1665) and to GK (TRR265: A06 & B08), and under Germany’s Excellence Strategy – EXC-2181 – 390900948 (’Structures’).

6 APPENDIX

6.1 SUPPLEMENTARY TEXT

6.1.1 Simple exact PLRNN solution for addition problem

The exact PLRNN parameter settings (cf. eq. 1, eq. 2) for solving the addition problem with 2 units (cf. Fig. 1C) are as follows:

A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad W = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad h = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 \end{pmatrix} \qquad (7)

6.1.2 Computation of fixed points and cycles in PLRNN

Consider the PLRNN in the form of eq. 4.
For clarity, let us define dΩ(t) := (d1, d2, · · · , dM ) as an indicator vector with dm(zm,t) := dm = 1 for all states zm,t > 0 and zeros otherwise, and DΩ(t) := diag(dΩ(t)) as the diagonal matrix formed from this vector. Note that there are at most 2M distinct matricesWΩ(t) as defined in eq. 4, depending on the sign of the components of zt. If h = 0 and WΩ(t) is the identity matrix, then the map F becomes the identity map and so every point z will be a fixed point of F . Otherwise, the fixed points of F can be found solving the equation F (z∗1) = z∗1 as z∗1 = (I −WΩ(t∗1))−1 h = H∗1 h, (8) where z∗1 = zt∗1 = zt∗1−1, if det(I − WΩ(t∗1)) = PWΩ(t∗1)(1) 6= 0, i.e. WΩ(t∗1) has no eigenvalue equal to 1. Stability and type of fixed points (node, saddle, spiral) can then be determined from the eigenvalues of the JacobianA+WDΩ(t∗1) = WΩ(t∗1) (Strogatz (2015)). For k > 1, solving F k(z∗k) = z∗k, one can obtain a k-cycle of the map F with the periodic points {z∗k, F (z∗k), F 2(z∗k), · · · , F k−1(z∗k)}. For this, we first compute F k as follows: zt = F (zt−1) = WΩ(t−1) zt−1 + h, zt+1 = F 2(zt−1) = F (zt) = WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t) + I ) h, zt+2 = F 3(zt−1) = F (zt+1) = WΩ(t+1)WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t+1)WΩ(t) +WΩ(t+1) + I ) h, ... zt+(k−1) = F k(zt−1) = k+1∏ i=2 WΩ(t+(k−i)) zt−1 + [ k∑ j=2 k−j+2∏ i=2 WΩ(t+(k−i)) + I ] h, (9) in which ∏k+1 i=2 WΩ(t+(k−i)) = WΩ(t+(k−2))WΩ(t+(k−3)) · · · WΩ(t−1). Assuming t+(k−1) := t∗k, then the k-cycle is given by the fixed point of the k-times iterated map F k as z∗k = ( I − k∏ i=1 WΩ(t∗k−i) )−1 [ k∑ j=2 k−j+1∏ i=1 WΩ(t∗k−i) + I ] h = H∗k h, (10) where z∗k = zt∗k = zt∗k−k, provided that I − ∏k i=1WΩ(t∗k−i) is invertible. That is det ( I − ∏k i=1WΩ(t∗k−i) ) = P∏k i=1WΩ(t∗k−i) (1) 6= 0 and ∏k i=1WΩ(t∗k−i) := WΩ∗k has no eigenvalue equal to 1. As for the fixed points, we can determine stability of the k-cycle from the eigenvalues of the Jacobians ∏k i=1WΩ(t∗k−i). It may also be helpful to spell out the recursions in eq. 5 and eq. 6 in section 3.3 in a bit more detail. Analogously to the derivations above, for t = 1, 2, . . . , T we can recursively compute z2, z3, . . . ,zT (T ∈ N) as z2 = F (z1) = WΩ(1) z1 + h, z3 = F 2(z1) = F (z2) = WΩ(2)WΩ(1) z1 + ( WΩ(2) + I ) h, ... zT = F T−1(z1) = F (zT−1) = WΩ(T−1)WΩ(T−2) · · ·WΩ(1) z1 + ( WΩ(T−1)WΩ(T−2) · · ·WΩ(2) +WΩ(T−1)WΩ(T−2) · · ·WΩ(3) + · · ·+WΩ(T−1) + I ) h = T−1∏ i=1 WΩ(T−i) z1 + [ T−2∑ j=1 T−j−1∏ i=1 WΩ(T−i) + I ] h = T−1∏ i=1 WΩ(T−i) z1 + [ T−1∑ j=2 j−1∏ i=1 WΩ(T−i) + I ] h. (11) Likewise, we can write out the derivatives eq. 6 more explicitly as ∂zt ∂wmk = ∂F (zt−1) ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) )∂zt−1 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2) zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )∂zt−2 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2)zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) ) 1(m,k)DΩ(t−3)zt−3 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )( A+WDΩ(t−3) )∂zt−3 ∂wmk = · · · = 1(m,k)DΩ(t−1) zt−1 + t−2∑ j=2 ( j−1∏ i=1 WΩ(t−i) ) 1(m,k)DΩ(t−j) zt−j + t−2∏ i=1 WΩ(t−i) ∂z2 ∂wmk (12) where ∂z2∂wmk = ( ∂z1,2 ∂wmk · · · ∂zM,2∂wmk ) with ∂zl,2 ∂wmk = 0∀ l 6= m and ∂zm,2∂wmk = dkzk,1. The derivatives w.r.t. the elements ofA and h can be expanded in a similar way, only that the termsDΩ(t) zt on the last line of eq. 12 need to be replaced by just zt for ∂zt∂amm , and by just a vector of 1’s for ∂zt ∂hm (also, in these cases, the indicator matrix will be the diagonal matrix 1(m,m)). 
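A brute-force numerical counterpart to eq. 8 can be sketched as follows; it is only feasible for small M since all 2^M ReLU configurations are enumerated, and the example parameters are arbitrary placeholders rather than trained values.

```python
import itertools
import numpy as np

def plrnn_fixed_points(A, W, h):
    """Brute-force fixed-point search via eq. 8: for every ReLU configuration d in {0,1}^M,
    solve z* = (I - W_Omega)^{-1} h with W_Omega = A + W diag(d) and keep self-consistent solutions."""
    M = len(h)
    I = np.eye(M)
    found = []
    for d in itertools.product([0, 1], repeat=M):
        W_omega = A + W @ np.diag(d)
        if abs(np.linalg.det(I - W_omega)) < 1e-12:
            continue                                         # W_Omega has an eigenvalue (numerically) equal to 1
        z = np.linalg.solve(I - W_omega, h)
        if np.array_equal(z > 0, np.array(d, dtype=bool)):   # solution lies in the assumed linear region
            stable = np.max(np.abs(np.linalg.eigvals(W_omega))) < 1
            found.append((z, stable))
    return found

# tiny example with arbitrary placeholder parameters
rng = np.random.default_rng(1)
M = 4
A = np.diag(rng.uniform(0.2, 0.8, M))
W = 0.3 * rng.normal(size=(M, M)); np.fill_diagonal(W, 0.0)
h = rng.normal(size=M)
for z_star, stable in plrnn_fixed_points(A, W, h):
    print(np.round(z_star, 3), "stable" if stable else "unstable")
```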
6.1.3 Proof of Theorem 1 To state the proof, let us rewrite the derivatives of the loss function L(W ,A,h) = ∑T t=1 Lt in the following tensor form: ∂L ∂W = T∑ t=1 ∂Lt ∂W , where ∂Lt ∂W = ∂Lt ∂zt ∂zt ∂W , (13) for which the 3D tensor ∂zt ∂W = ∂z1,t ∂W ∂z2,t ∂W ... ∂zM,t ∂W (14) of dimension M ×M ×M , consists of all the gradient matrices ∂zi,t ∂W = ∂zi,t ∂w11 ∂zi,t ∂w12 · · · ∂zi,t∂w1M ∂zi,t ∂w21 ∂zi,t ∂w22 · · · ∂zi,t∂w2M ... ∂zi,t ∂wM1 ∂zi,t ∂wM2 · · · ∂zi,t∂wMM := ∂zi,t ∂w1∗ ∂zi,t ∂w2∗ ... ∂zi,t ∂wM∗ , i = 1, 2, · · · ,M, (15) where wi∗ ∈ RM is a row-vector. Now, suppose that {z1, z2, z3, . . .} is an orbit of the system which converges to a stable fixed point, i.e. lim T→∞ zT = z ∗k. Then lim T→∞ zT = lim T→∞ ( WΩ(T−1) zT−1 + h ) = z∗1 = WΩ(t∗1) z ∗1 + h, (16) and so lim T→∞ ( WΩ(T−1) ) z∗1 = WΩ(t∗1) z ∗1. (17) Assume that lim T→∞ ( WΩ(T−1) ) = L. Since eq. 17 holds for every z∗1, then substituting z∗1 = eT1 = (1, 0, · · · , 0)T in eq. 17, we can prove that the first column of L equals the first column of WΩ(t∗1). Performing the same procedure for z∗1 = eTi , i = 2, 3, · · · ,M , yields lim T→∞ WΩ(T−1) = WΩ(t∗1). (18) Also, for every i ∈ N (1 < i <∞) lim T→∞ WΩ(T−i) = WΩ(t∗1), (19) i.e. ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ ≤ . (20) Thus, ∥∥WΩ(T−i)∥∥− ∥∥WΩ(t∗1)∥∥ ≤ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ gives ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ . (21) Since T − 1 > T − 2 > · · · > T − i ≥ N , so ∀ > 0 ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ , i = 1, 2, · · · , T −N. (22) Hence ∀ > 0 ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ T−N∏ i=1 ∥∥WΩ(T−i)∥∥ ≤ (∥∥WΩ(t∗1)∥∥+ )T−N . (23) If ∥∥WΩ(t∗1)∥∥ < 1, then for any < 1, considering ̄ ≤ +‖WΩ(t∗1)‖2 < 1, it is concluded that∥∥∥∥∥ limT→∞ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ = limT→∞ ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ limT→∞(∥∥WΩ(t∗1)∥∥+ ̄)T−N = 0. (24) Therefore lim T→∞ T−1∏ i=1 WΩ(T−i) = 0. (25) If the orbit {z1, z2, z3, . . .} tends to a stable k-cycle (k > 1) with the periodic points {F k(z∗k), F k−1(z∗k), F k−2(z∗k), · · · , F (z∗k)} = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1)}, then, denoting the stable k-cycle by Γk = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1), zt∗k , zt∗k−1, · · · , zt∗k−(k−1), · · · }, (26) we have lim T→∞ d(zT ,Γk) = 0. (27) Hence, there exists a neighborhood U of Γk and k sub-sequences {zTkn}∞n=1, {zTkn+1}∞n=1, · · · , {zTkn+(k−1)}∞n=1 of the sequence {zT }∞T=1 such that these sub-sequences belong to U and (i) zTkn+s = F k(zTk(n−1)+s), s = 0, 1, 2, · · · , k − 1, (ii) lim T→∞ zTkn+s = zt∗k−s, s = 0, 1, 2, · · · , k − 1, (iii) for every zT ∈ U there is some s ∈ {0, 1, 2, · · · , k − 1} such that zT ∈ {zTkn+s}∞n=1. In this case, for every zT ∈ U with zT ∈ {zTkn+s}∞n=1 we have lim T→∞ zT = zt∗k−s for some s = 0, 1, 2, · · · , k − 1. Therefore, continuity of F implies that lim T→∞ F (zT ) = F (zt∗k−s) and so lim T→∞ ( WΩ(T ) zT + h ) = WΩ(t∗k−s) zt∗k−s + h. (28) Thus, similarly, we can prove that ∃ s ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T ) = WΩ(t∗k−s). (29) Analogously, for every i ∈ N (1 < i <∞) ∃ si ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T−i) = WΩ(t∗k−si), (30) On the other hand, ∥∥WΩ(t∗k−si)∥∥ < 1 for all si ∈ {0, 1, 2, · · · , k − 1}. So, without loss of generality, assuming max 0≤si≤k−1 {∥∥WΩ(t∗k−si)∥∥} = ∥∥WΩ(t∗k)∥∥ < 1, (31) we can again obtain some relations similar to eq. 23-eq. 25 for t∗k, k ≥ 1. Since {zT−1}∞T=1 is a convergent sequence, so it is bounded, i.e. there exists a real number q > 0 such that ||zT−1|| ≤ q for all T ∈ N. Furthermore, ∥∥DΩ(T−1)∥∥ ≤ 1 for all T . Therefore, by eq. 12 and eq. 
23 (for t∗k, k ≥ 1)∥∥∥∥ ∂zT∂wmk ∥∥∥∥ = ∣∣∣∣∣ ∣∣∣∣∣1(m,k)DΩ(T−1) zT−1 + T−1∑ j=2 ( j−1∏ i=1 WΩ(T−i) ) 1(m,k)DΩ(T−j) zT−j + T−1∏ i=1 WΩ(T−i) DΩ(1) z1 ∣∣∣∣∣ ∣∣∣∣∣ (32) ≤ ‖zT−1‖+ [ T−1∑ j=2 ∥∥∥∥∥ j−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖zT−j‖ ] + ∥∥∥∥∥ T−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖z1‖ ≤ q ( 1 + T−1∑ j=2 (∥∥WΩ(t∗k)∥∥+ ̄)j−1 )+ (∥∥WΩ(t∗k)∥∥+ ̄)T−1 ‖z1‖ . (33) Thus, by ∥∥WΩ(t∗k)∥∥+ ̄ < 1, we have lim T→∞ ∥∥∥∥ ∂zT∂wmk ∥∥∥∥ ≤ q(1 + ∥∥WΩ(t∗k)∥∥+ ̄ 1− ∥∥WΩ(t∗k)∥∥− ̄ ) =M <∞, (34) i.e., by eq. 14 and eq. 15, the 2-norm of total gradient matrices and hence ∥∥ ∂zt ∂W ∥∥ 2 will not diverge (explode) under the assumptions of Theorem 1. Analogously, we can prove that ∥∥∂zT ∂A ∥∥ 2 and ∥∥∂zT ∂h ∥∥ 2 will not diverge either. Since, similar as in the derivations above, it can be shown that relation eq. 34 is true for ∥∥∥ ∂zT∂amm ∥∥∥ with q = q̄, where q̄ is the upper bound of ‖zT ‖, as {zT }∞T=1 is convergent. Furthermore, relation eq. 34 also holds for∥∥∥ ∂zT∂hm ∥∥∥ with q = 1. Remark 2.1. By eq. 24 the Jacobian parts ∥∥∥∂zT∂zt ∥∥∥2 connecting any two states zT and zt, T > t, will not diverge either. Corollary 2.1. The results of Theorem 1 are also true ifWΩ(t∗k) is a normal matrix with no eigenvalue equal to one. Proof. If WΩ(t∗k) is normal, then ∥∥WΩ(t∗k)∥∥ = ρ(WΩ(t∗k)) < 1 which satisfies the conditions of Theorem 1. 6.1.4 Proof of Theorem 2 LetA,W andDΩ(k), t < k ≤ T , be partitioned as follows A = ( Ireg O T O Anreg ) , W = ( Oreg O T S Wnreg ) , DΩ(k) = ( Dkreg O T O Dknreg ) , (35) where IMreg×Mreg := Ireg ∈ RMreg×Mreg ,OMreg×Mreg := Oreg ∈ RMreg×Mreg , O,S ∈ R(M−Mreg)×Mreg , A{Mreg+1:M,Mreg+1:M} := Anreg ∈ R(M−Mreg)×(M−Mreg) is a diagonal submatrix,W{Mreg+1:M,Mreg+1:M} := Wnreg ∈ R(M−Mreg)×(M−Mreg) is an off-diagonal sub-matrix (cf. Fig. S1). Moreover, DkMreg×Mreg := D k reg ∈ RMreg×Mreg and Dk{Mreg+1:M,Mreg+1:M} := Dknreg ∈ R(M−Mreg)×(M−Mreg) are diagonal sub-matrices. Then, we have ∏ t<k≤T WΩ(k) = ∏ t<k≤T ( Ireg O T SDkreg Anreg +WnregD k nreg ) := ∏ t<k≤T ( Ireg O T SDkreg W k nreg ) = ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg. ) (36) Therefore, considering the 2-norm, we obtain∥∥∥∥∂zT∂zt ∥∥∥∥ = ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∥ ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg )∥∥∥∥∥ <∞. (37) Moreover 1 ≤ max{1, ρ(WT−t)} = ρ ( ∏ t<k≤T WΩ(k) ) ≤ ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∂zT∂zt ∥∥∥∥ (38) where WT−t := ∏ t<k≤T W k nreg . Therefore, eq. 37 and eq. 38 yield 1 ≤ ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ ≤ ρup <∞. Furthermore, we assumed that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1. Hence, similar to the proof of Theorem 1, it is concluded that lim T→∞ T∏ k=t W knreg = Onreg. (39) On the other hand, by definition ofDΩ(k), for every t < k ≤ T , we have ∥∥Dkreg∥∥ ≤ 1 and so∥∥SDkreg∥∥ ≤ ‖S‖ ∥∥Dkreg∥∥ ≤ ‖S‖ , (40) which, in accordance with the the assumptions of Theorem 1, by convergence of∑T j=2 ∏t+j−1 k=t+1 ∥∥W knreg∥∥ implies lim T→∞ ∥∥∥∥∥∥SDt+1reg + T∑ j=2 ( t+j−1∏ k=t+1 W knreg ) SDt+jreg ∥∥∥∥∥∥ ≤ ‖S‖ ( 1 + lim T→∞ T∑ j=2 t+j−1∏ k=t+1 ∥∥W knreg∥∥) ≤ ‖S‖Mnreg. (41) Thus, denoting Q := SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg SD t+j reg ) , from eq. 41 we deduce that λmax ( lim T→∞ (QTQ) ) = lim T→∞ ρ(QTQ) ≤ lim T→∞ ∥∥QTQ∥∥ = lim T→∞ ‖Q‖2 ≤ ( ‖S‖Mnreg )2 . (42) Now, if T − t tends to∞, then eq. 37, eq. 39 and eq. 42 result in 1 = ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ = σmax( ( Ireg O T Q Onreg )) = √ λmax(Ireg + lim T→∞ (QTQ)) = ρup < ∞. (43) Remark 2.2. If ‖S‖ = 0, then ∥∥∥∂zT∂zt ∥∥∥→ 1 as T − t→∞. 
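As an informal numerical illustration of Theorem 2 (not a substitute for the proof), one can instantiate A and W with the block structure of eq. 35 / Fig. S1, draw the non-regularized block such that its Jacobians are contracting, and track the spectral norm of the accumulated Jacobian products over long horizons; with the placeholder values below, and random ReLU configurations standing in for those induced by an actual trajectory, the norm stays of order one, i.e. it neither vanishes nor explodes.

```python
import numpy as np

rng = np.random.default_rng(0)
M, M_reg = 10, 4

# A and W with the block structure of eq. 35: the first M_reg rows of A + W equal those of the identity
A = np.zeros((M, M))
A[:M_reg, :M_reg] = np.eye(M_reg)                                   # memory block exactly at A_ii = 1
A[M_reg:, M_reg:] = np.diag(rng.uniform(0.1, 0.5, M - M_reg))       # A_nreg (diagonal)

W = np.zeros((M, M))
W[M_reg:, :M_reg] = 0.2 * rng.normal(size=(M - M_reg, M_reg))       # S block (memory -> computational)
W_nreg = rng.normal(size=(M - M_reg, M - M_reg))
np.fill_diagonal(W_nreg, 0.0)
W_nreg *= 0.4 / np.linalg.norm(W_nreg, 2)                           # keep the isolated subsystem contracting
W[M_reg:, M_reg:] = W_nreg

# accumulate prod_{t<k<=T} W_Omega(k) with W_Omega(k) = A + W D_Omega(k)
J = np.eye(M)
for k in range(1, 201):
    D = np.diag((rng.random(M) > 0.5).astype(float))                # a random ReLU configuration
    J = (A + W @ D) @ J
    if k in (10, 50, 200):
        # spectral norm stays of order one: >= 1 from the identity memory block, bounded above (Theorem 2)
        print(k, np.linalg.norm(J, 2))
```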
6.1.5 Details on EM algorithm and DS reconstruction For DS reconstruction we request that the latent RNN approximates the true generating system of equations, which is a taller order than learning the mapping S → X or predicting future values in a time series (cf. sect. 3.5).2 This point has important implications for the design of models, inference algorithms and performance metrics if the primary goal is DS reconstruction rather than ‘mere’ time series forecasting.3 In this context we consider the fully probabilistic, generative RNN eq. 1. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear 2By reconstructing the governing equations we mean their approximation in the sense of the universal approximation theorems for DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998), i.e. such that the behavior of the reconstructed system becomes dynamically equivalent to that of the true underlying system. 3In this context we also remark that models which include longer histories of hidden activations (Yu et al., 2019), as in many statistical time series models (Fan & Yao, 2003), are not formally valid DS models anymore since they violate the uniqueness of flow in state space (Strogatz, 2015). state space model (Durbin & Koopman, 2012) with observation and process noise. We solve for the parameters θ = {A,W ,C,h,µ0,Σ,B,Γ} by maximum likelihood, for which an efficient Expectation-Maximization (EM) algorithm has recently been suggested (Durstewitz, 2017; Koppe et al., 2019), which we will summarize here. Since the involved integrals are not tractable, we start off from the evidence-lower bound (ELBO) to the log-likelihood which can be rewritten in various useful ways: log p(X|θ) ≥ EZ∼q[log pθ(X,Z)] +H (q(Z|X)) = log p(X|θ)−DKL (q(Z|X)‖pθ(Z|X)) =: L (θ, q) (44) In the E-step, given a current estimate θ∗ for the parameters, we seek to determine the posterior pθ (Z|X) which we approximate by a global Gaussian q(Z|X) instantiated by the maximizer (mode) Z∗ of pθ(Z|X) as an estimator of the mean, and the negative inverse Hessian around this maximizer as an estimator of the state covariance, i.e. E[Z|X] ≈ Z∗ = arg max Z log pθ(Z|X) = arg max Z [log pθ(X|Z) + log pθ(Z)− log pθ(X)] = arg max Z [log pθ(X|Z) + log pθ(Z)] , (45) since Z integrates out in pθ(X) (equivalently, this result can be derived from a Laplace approximation to the log-likelihood, log p(X|θ) ≈ log pθ(X|Z∗)+log pθ(Z∗)− 12 log |−L ∗|+const, where L∗ is the Hessian evaluated at the maximizer). We solve this optimization problem by a fixed-point iteration scheme that efficiently exploits the model’s piecewise linear structure, as detailed below. Using this approximate posterior for pθ(Z|X), based on the model’s piecewise-linear structure most of the expectation values Ez∼q [φ(z)], Ez∼q [ φ(z)zT ] , and Ez∼q [ φ(z)φ(z)T ] , could be solved for (semi-)analytically (where z is the concatenated vector form of Z, see below). In the M-step, we seek θ∗ := arg maxθ L(θ, q∗), assuming proposal density q∗ to be given from the E-step, which for a Gaussian observation model amounts to a simple linear regression problem (see Suppl. eq. 49). To force the PLRNN to really capture the underlying DS in its governing equations, we use a previously suggested (Koppe et al., 2019) stepwise annealing protocol that gradually shifts the burden of fitting the observationsX from the observation model eq. 2 to the latent RNN model eq. 
1 during training, the idea of which is to establish a mapping from latent states Z to observations X first, fixing this, and then enforcing the temporal consistency constraints implied by eq. 1 while accounting for the actual observations. Now we briefly outline the fixed-point-iteration algorithm for solving the maximization problem in eq. 45 (for more details see Durstewitz (2017); Koppe et al. (2019)). Given a Gaussian latent PLRNN and a Gaussian observation model, the joint density p(X,Z) will be piecewise Gaussian, hence eq. 45 piecewise quadratic in Z. Let us concatenate all state variables across m and t into one long column vector z = (z1,1, . . . , zM,1, . . . , z1,T , . . . , zM,T ) T, arrange matrices A, W into large MT ×MT block tri-diagonal matrices, define dΩ := ( 1z1,1>0,1z2,1>0, . . . ,1zM,T>0 )T as an indicator vector with a 1 for all states zm,t > 0 and zeros otherwise, and DΩ := diag(dΩ) as the diagonal matrix formed from this vector. Collecting all terms quadratic, linear, or constant in z, we can then write down the optimization criterion in the form Q∗Ω(z) = − 1 2 [zT ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ ) z − zT (v0 +DΩv1)− (v0 +DΩv1)T z] + const. (46) In essence, the algorithm now iterates between the two steps: 1. Given fixedDΩ, solve z∗ = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 · (v0 +DΩv1) (47) 2. Given fixed z∗, recomputeDΩ until either convergence or one of several stopping criteria (partly likelihood-based, partly to avoid loops) is reached. The solution may afterwards be refined by one quadratic programming step. Numerical experiments showed this algorithm to be very fast and efficient (Durstewitz, 2017; Koppe et al., 2019). At z∗, an estimate of the state covariance is then obtained as the inverse negative Hessian, V = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 . (48) In the M-step, using the proposal density q∗ from the E-step, the solution to the maximization problem θ∗ := arg max θ L(θ, q∗), can generally be expressed in the form θ∗ = (∑ t E [ αtβ T t ])(∑ t E [ βtβ T t ])−1 , (49) where, for the latent model, eq. 1, αt = zt and βt := [ zTt−1, φ(zt−1) T, sTt , 1 ]T ∈ R2M+K+1, and for the observation model, eq. 2, αt = xt and βt = g (zt). 6.1.6 More details on DS performance measure As argued before (Koppe et al., 2019; Wood, 2010), in DS reconstruction we require that the RNN captures the underlying attractor geometries and state space properties. This does not necessarily entail that the reconstructed system could predict future time series observations more than a few time steps ahead, and vice versa. For instance, if the underlying attractor is chaotic, even if we had the exact true system available, with a tiny bit of noise trajectories starting from the same initial condition will quickly diverge and ahead-prediction errors become essentially meaningless as a DS performance metric (Fig. S2B). To quantify how well an inferred PLRNN captured the underlying dynamics we therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between the true and reproduced probability distributions across states in state space, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. 
(1991)) rather than in precise matching of time series, DKL (ptrue(x)‖pgen(x|z)) ≈ K∑ k=1 p̂ (k) true(x) log ( p̂ (k) true(x) p̂ (k) gen(x|z) ) , (50) where ptrue(x) is the true distribution of observations across state space (not time!), pgen(x|z) is the distribution of observations generated by running the inferred PLRNN, and the sum indicates a spatial discretization (binning) of the observed state space. We emphasize that p̂(k)gen(x|z) is obtained from freely simulated trajectories, i.e. drawn from the prior p̂(z) specified by eq. 1, not from the inferred posteriors p̂(z|xtrain). In addition, to assess reproduction of time scales by the inferred PLRNN, the average MSE between the power spectra of the true and generated time series was computed, as displayed in Fig. 3B–C. The measure DKL introduced above only works for situations where the ground truth ptrue(X) is known. Following Koppe et al. (2019), we next briefly indicate how a proxy for DKL may be obtained in empirical situations where no ground truth is available. Reasoning that for a well reconstructed DS the inferred posterior pinf(z|x) given the observations should be a good representative of the prior generative dynamics pgen(z), one may use the Kullback-Leibler divergence between the distribution over latent states, obtained by sampling from the prior density pgen(z), and the (dataconstrained) posterior distribution pinf(z|x) (where z ∈ RM×1 and x ∈ RN×1), taken across the system’s state space: DKL (pinf(z|x)‖pgen(z)) = ∫ z∈RM×1 pinf(z|x) log pinf(z|x) pgen(z) dz (51) As evaluating this integral is difficult, one could further approximate pinf(z|x) and pgen(z) by Gaussian mixtures across trajectories, i.e. pinf(z|x) ≈ 1T ∑T t=1 p(zt|x1:T ) and pgen(z) ≈ 1 L ∑L l=1 p(zl|zl−1), where the mean and covariance of p(zt|x1:T ) and p(zl|zl−1) are obtained by marginalizing over the multivariate distributions p(Z|X) and pgen(Z), respectively, yielding E[zt|x1:T ], E[zl|zl−1], and covariance matrices Var(zt|x1:T ) and Var(zl|zl−1). Supplementary eq. 51 may then be numerically approximated through Monte Carlo sampling (Hershey & Olsen, 2007) by DKL (pinf(z|x)‖pgen(z)) ≈ 1 n n∑ i=1 log pinf(z (i)|x) pgen(z(i)) , z(i) ∼ pinf(z|x) (52) Alternatively, there is also a variational approximation of eq. 51 available (Hershey & Olsen, 2007): DvariationalKL (pinf(z|x)‖pgen(z)) ≈ 1 T T∑ t=1 log ∑T j=1 e −DKL(p(zt|x1:T )‖p(zj |x1:T ))∑T k=1 e −DKL(p(zt|x1:T )‖p(zk|zk−1)) , (53) where the KL divergences in the exponentials are among Gaussians for which we have an analytical expression. 6.1.7 More details on benchmark tasks and model comparisons We compared the performance of our rPLRNN to the other models summarized in Suppl. Table 1 on the following three benchmarks requiring long short-term maintenance of information (Talathi & Vartak (2016); Hochreiter & Schmidhuber (1997)): 1) The addition problem of time length T consists of 100 000 training and 10 000 test samples of 2× T input series S = {s1, . . . , sT }, where entries s1,: ∈ [0, 1] are drawn from a uniform random distribution and s2,: ∈ {0, 1} contains zeros except for two indicator bits placed randomly at times t1 < 10 and t2 < T/2. Constraints on t1 and t2 are chosen such that every trial requires a long memory of at least T/2 time steps. At the last time step T , the target output of the network is the sum of the two inputs in s1,: indicated by the 1-entries in s2,:, x target T = s1,t1 + s1,t2 . 
2) The multiplication problem is the same as the addition problem, only that the product instead of the sum has to be produced by the RNN as an output at time T , xtargetT = s1,t1 · s1,t2 . 3) The MNIST dataset (LeCun et al., 2010) consists of 60 000 training and 10 000 28 × 28 test images of hand written digits. To make this a time series problem, in sequential MNIST the images are presented sequentially, pixel-by-pixel, scanning lines from upper left to bottom-right, resulting in time series of fixed length T = 784. For training on the addition and multiplication problems, the mean squared-error loss across R samples, L = 1R ∑R n=1 ( x̂ (n) T − x (n) T )2 , between estimated and actual outputs was used, while the cross-entropy loss L = ∑R n=1 ( − ∑10 i=1 x (n) i,T log(p̂ (n) i,T ) ) was employed for sequential MNIST, where p̂i,t := p̂t (xi,t = 1|zt) = ( eBi,:zt ) N∑ j=1 eBj,:zt −1 , (54) with xi,t ∈ {0, 1}, ∑ i xi,t = 1. We remark that as long as the observation model takes the form of a generalized linear model (Fahrmeir & Tutz, 2001), as assumed here, meaning may be assigned to the latent states zm by virtue of their association with specific sets of observations xn through the factor loading matrix B. This adds another layer of model interpretability (besides its accessibility in DS terms). The large error bars in Fig. 2 at the transition from good to bad performance result from the fact that the networks mostly learn these tasks in an all-or-none fashion. While the rPLRNN in general outperformed the pure initialization-based models (iRNN, npRNN, iPLRNN), confirming that a manifold attractor subspace present at initialization may be lost throughout training, we conjecture that this difference in performance will become even more pronounced as noise levels or task complexity increase. 6.1.8 More details on single neuron model The neuron model used in section 4.2 is described by −CmV̇ = gL(V − EL) + gNam∞(V )(V − ENa) + gKn(V − EK) + gMh(V − EK) + gNMDAσ(V )(V − ENMDA) (55) ḣ = h∞(V )− h
Review
The paper proposes a novel regularization term for PLRNNs. The PLRNN has nice numerical properties given its simple mathematical structure, but is able to capture complicated dynamics. It is also easy to establish a theoretical connection between PLRNN dynamics and the behavior of its gradients, which is nice. Given such a dynamical model, the authors design novel L2 terms pushing a subset of the parameters toward an identity/zero configuration, which leads to a line/plane attractor that allows slow time constants for long short-term memory. I think the idea is quite novel and interesting.

A few concerns:
- It might be a bit jumpy for readers outside neuroscience to establish the connection between working memory and short-term memory. Making this point more explicit in the intro would be helpful.
- The function G is not defined at the beginning of 3.1. It would also be helpful to point out that eq. 1 is the PLRNN (with the full name spelled out).
- Section 4.1 is a bit confusing to me. It is unclear why the higher the MSE/cross entropy, the better the model is supposed to be, and likewise why the lower the Pcorrect value, the better the model. Moreover, I think it is still necessary to give the definition of all model names included in the main text; some are missing, e.g. iPLRNN and oRNN, maybe just very briefly. One should not expect all readers to read the appendix.
- The paper seems to care a lot about being "interpretable", which is not clearly reflected in the paper. Figure 3 only shows the reconstruction, but it would be more interesting to visualize the latents of this neuron model's dynamics. It is mentioned that M = {8, ..., 18} states were trained. What do they look like? Are some closer to line attractors while others relate more to the fast spiking dynamics? Which state number is finally picked?
- Figure 3 only shows how influential \tau is, but there should be other insightful ablation studies to understand the regularization term as well, e.g. how many latent states to use, how to split the two types, etc.
- It would also be helpful to visualize the reconstruction and the latents of other RNN/LSTM models on the neuron model. That would show the advantage of the proposed model more clearly.

In sum, I think the idea is quite interesting and practically useful, and I appreciate the theoretical analysis. But the experiment section is confusing because some explanations are missing, and the presentation of the neuron model is not sufficient to show that the rPLRNN finds interesting and interpretable dynamics. That is why I think it is a bit below a good paper.
Title Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies Abstract A main theoretical interest in biology and physics is to identify the nonlinear dynamical system (DS) that generated observed time series. Recurrent Neural Networks (RNNs) are, in principle, powerful enough to approximate any underlying DS, but in their vanilla form suffer from the exploding vs. vanishing gradients problem. Previous attempts to alleviate this problem resulted either in more complicated, mathematically less tractable RNN architectures, or strongly limited the dynamical expressiveness of the RNN. Here we address this issue by suggesting a simple regularization scheme for vanilla RNNs with ReLU activation which enables them to solve long-range dependency problems and express slow time scales, while retaining a simple mathematical structure which makes their DS properties partly analytically accessible. We prove two theorems that establish a tight connection between the regularized RNN dynamics and its gradients, illustrate on DS benchmarks that our regularization approach strongly eases the reconstruction of DS which harbor widely differing time scales, and show that our method is also en par with other long-range architectures like LSTMs on several tasks. 1 INTRODUCTION Theories in the natural sciences are often formulated in terms of sets of stochastic differential or difference equations, i.e. as stochastic dynamical systems (DS). Such systems exhibit a range of common phenomena, like (limit) cycles, chaotic attractors, or specific bifurcations, which are the subject of nonlinear dynamical systems theory (DST; Strogatz (2015); Ott (2002)). A long-standing desire is to retrieve the generating dynamical equations directly from observed time series data (Kantz & Schreiber, 2004), and thus to ‘automatize’ the laborious process of scientific theory building to some degree. A variety of machine and deep learning methodologies toward this goal have been introduced in recent years (Chen et al., 2017; Champion et al., 2019; Ayed et al., 2019; Koppe et al., 2019; Hamilton et al., 2017; Razaghi & Paninski, 2019; Hernandez et al., 2020). Often these are based on sufficiently expressive series expansions for approximating the unknown system of generative equations, such as polynomial basis expansions (Brunton et al., 2016; Champion et al., 2019) or recurrent neural networks (RNNs) (Vlachas et al., 2018; Hernandez et al., 2020; Durstewitz, 2017; Koppe et al., 2019). Formally, RNNs are (usually discrete-time) nonlinear DS that are dynamically universal in the sense that they can approximate to arbitrary precision the flow field of any other DS on compact sets of the real space (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Hanson & Raginsky, 2020). Hence, RNNs seem like a good choice for reconstructing – in this sense of dynamically equivalent behavior – the set of governing equations underlying real time series data. However, RNNs in their vanilla form suffer from the ‘vanishing or exploding gradients’ problem (Hochreiter & Schmidhuber, 1997; Bengio et al., 1994): During training, error gradients tend to either exponentially explode or decay away across successive time steps, and hence vanilla RNNs face severe problems in capturing long time scales or long-range dependencies in the data. Specially designed RNN architectures equipped with gating mechanisms and linear memory cells have been proposed for mitigating this issue (Hochreiter & Schmidhuber, 1997; Cho et al., 2014). 
However, from a DST perspective, simpler models that can be more easily analyzed and interpreted in DS terms (Monfared & Durstewitz, 2020a;b), and for which more efficient inference algorithms exist that emphasize approximation of the true underlying DS (Koppe et al., 2019; Hernandez et al., 2020; Zhao & Park, 2020), would be preferable. More recent solutions to the vanishing vs. exploding gradient problem attempt to retain the simplicity of vanilla RNNs by initializing or constraining the recurrent weight matrix to be the identity (Le et al., 2015), orthogonal (Henaff et al., 2016; Helfrich et al., 2018) or unitary (Arjovsky et al., 2016). Merely initialization-based solutions, however, may be unstable and quickly dissolve during training, while orthogonal or unitary constraints are too restrictive for reconstructing DS, and more generally from a computational perspective as well (Kerg et al., 2019): For instance, neither chaotic behavior (which requires diverging directions) nor multi-stability, that is, the coexistence of several distinct attractors, are possible. Here we therefore suggest a different solution to the problem which takes inspiration from computational neuroscience: Supported by experimental evidence (Daie et al., 2015; Brody et al., 2003), line or plane attractors have been suggested as a dynamical mechanism for maintaining arbitrary information in working memory (Seung, 1996; Machens et al., 2005), a goal-related active form of short-term memory. A line or plane attractor is a continuous set of marginally stable fixed points to which the system’s state converges from some neighborhood, while along the line itself there is neither convergence nor divergence (Fig. 1A). Hence, a line attractor will perform a perfect integration of inputs and retain updated states indefinitely, while a slightly detuned line attractor will equip the system with arbitrarily slow time constants (Fig. 1B). This latter configuration has been suggested as a dynamical basis for neural interval timing (Durstewitz, 2003; 2004). The present idea is to exploit this dynamical setup for long short-term memory and arbitrarily slow time scales by forcing part of the RNN’s subspace toward a plane (line) attractor configuration through specifically designed regularization terms. Specifically, our goal here is not so much to beat the state of the art on long short-term memory tasks, but rather to address the exploding vs. vanishing gradient problem within a simple, dynamically tractable RNN, optimized for DS reconstruction and interpretation. For this we build on piecewise-linear RNNs (PLRNNs) (Koppe et al., 2019; Monfared & Durstewitz, 2020b) which employ ReLU activation functions. PLRNNs have a simple mathematical structure (see eq. 1) which makes them dynamically interpretable in the sense that many geometric properties of the system’s state space can in principle be computed analytically, including fixed points, cycles, and their stability (Suppl. 6.1.2; Koppe et al. (2019); Monfared & Durstewitz (2020a)), i.e. without requiring numerical techniques (Sussillo & Barak, 2013).
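To give a concrete feel for the line attractor mechanism of Fig. 1A,B, the following minimal NumPy sketch (our illustration, not code from the paper; all parameter values are arbitrary) iterates a 2-unit linear map z_{t+1} = A z_t. With the self-connection a11 of the first unit equal to 1 that unit retains its value indefinitely (perfect integration), while a11 slightly below 1 yields a detuned configuration whose effective time constant is roughly 1/(1 − a11) steps.

```python
import numpy as np

def simulate(a11, z0=np.array([1.0, 1.0]), T=500):
    """Iterate z_{t+1} = A z_t for a 2-unit linear system.

    a11 controls the self-connection of the first ('memory') unit:
    a11 = 1 gives a line attractor along that unit (perfect retention),
    a11 slightly below 1 gives a detuned line attractor with a slow
    effective time constant of about 1 / (1 - a11) steps.
    """
    A = np.diag([a11, 0.5])
    zs = [z0]
    for _ in range(T):
        zs.append(A @ zs[-1])
    return np.array(zs)

for a11 in [1.0, 0.99, 0.9]:
    traj = simulate(a11)
    # Fraction of the initial value still retained in the memory unit after 200 steps.
    retained = traj[200, 0] / traj[0, 0]
    print(f"a11 = {a11:4.2f}: memory unit retains {retained:10.3e} of its value after 200 steps")
```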
Moreover, PLRNNs constitute a type of piecewise linear (PWL) map for which many important bifurcations have been comparatively well characterized (Monfared & Durstewitz, 2020a; Avrutin et al., 2019). PLRNNs can furthermore be translated into equivalent continuous time ordinary differential equation (ODE) systems (Monfared & Durstewitz, 2020b) which comes with further advantages for analysis, e.g. continuous flow fields (Fig. 1A,B). We retain the PLRNN’s structural simplicity and analytical tractability while mitigating the exploding vs. vanishing gradient problem by adding special regularization terms for a subset of PLRNN units to the loss function. These terms are designed to push the system toward line attractor configurations, without strictly enforcing them, along some – but not all – directions in state space. We further establish a tight mathematical relationship between the PLRNN dynamics and the behavior of its gradients during training. Finally, we demonstrate that our approach outperforms LSTM and other, initialization-based, methods on a number of ‘classical’ machine learning benchmarks (Hochreiter & Schmidhuber, 1997). Much more importantly in the present DST context, we demonstrate that our new regularization-supported inference efficiently captures all relevant time scales when reconstructing challenging nonlinear DS with multiple short- and long-range phenomena. 2 RELATED WORK Dynamical systems reconstruction. From a natural science perspective, the goal of reconstructing or identifying the underlying DS is substantially more ambitious than (and different from) building a system that ‘merely’ yields good ahead predictions: In DS identification we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties (see section 3.5, Fig. S2; Kantz & Schreiber (2004)). Earlier work using RNNs for DS reconstruction (Roweis & Ghahramani, 2002; Yu et al., 2005) mainly focused on inferring the posterior over latent trajectories Z = {z1, . . . ,zT } given time series data X = {x1, . . . ,xT }, p(Z|X), and on ahead predictions (Lu et al., 2017), as does much of the recent work on variational inference of DS (Duncker et al., 2019; Zhao & Park, 2020; Hernandez et al., 2020). Although this enables insight into the dynamics along the empirically observed trajectories, both – posterior inference and good ahead predictions – do not per se guarantee that the inferred models can generate the underlying attractor geometries on their own (see Fig. S2, Koppe et al. (2019)). In contrast, if fully generative reconstruction of the underlying DS in this latter sense were achieved, formal analysis or simulation of the resulting RNN equations could provide a much deeper understanding of the dynamical mechanisms underlying empirical observations (Fig. 1 C). Some approaches geared toward this latter goal of full DS reconstruction make specific structural assumptions about the form of the DS equations (‘white box approach’; Meeds et al. (2019); Raissi (2018); Gorbach et al. (2017)), e.g. based on physical or biological domain knowledge, and focus on estimating the system’s latent states and parameters, rather than approximating an unknown DS based on the observed time series information alone (‘black box approach’). Others (Trischler & D’Eleuterio, 2016; Brunton et al., 2016; Champion et al., 2019) attempt to approximate the flow field, obtained e.g. 
by numerical differentiation, directly through basis expansions or neural networks. However, numerical derivatives are problematic for their high variance and other numerical issues (Raissi, 2018; Baydin et al., 2018; Chen et al., 2017). Another factor to consider is that in many biological systems like the brain the intrinsic dynamics are highly stochastic with many noise sources, like probabilistic synaptic release (Stevens, 2003). Models that do not explicitly account for dynamical process noise (Ayed et al., 2019; Champion et al., 2019; Rudy et al., 2019) are therefore less suited and more vulnerable to model misspecification. Finally, some fully probabilistic models for DS reconstruction based on GRU (Fraccaro et al., 2016), LSTM (Zheng et al., 2017; Vlachas et al., 2018), or radial basis function (Zhao & Park, 2020) networks, are not easily interpretable and amenable to DS analysis in the sense defined in sect. 3.3. Most importantly, none of these previous approaches consider the long-range dependency problem within more easily tractable RNNs for DS. Long-range dependency problems in RNNs. Error gradients in vanilla RNNs tend to either explode or vanish due to the large product of derivative terms that results from recursive application of the chain rule over time steps (Hochreiter, 1991; Bengio et al., 1994; Hochreiter & Schmidhuber, 1997). To address this issue, RNNs with gated memory cells (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) have been specifically designed, but their more complicated mathematical structure makes them less amenable to a systematic DS analysis. Even simple objects like fixed points of these systems have to be found by numerical techniques (Sussillo & Barak, 2013; Jordan et al., 2019). Thus, approaches which retain the simplicity of vanilla RNNs while solving the exploding vs. vanishing gradients problem would be desirable. Recently, Le et al. (2015) observed that initialization of the recurrent weight matrixW to the identity in ReLU-based RNNs may yield performance en par with LSTMs on standard machine learning benchmarks. Talathi & Vartak (2016) expanded on this idea by initializing the recurrence matrix such that its largest absolute eigenvalue is 1. Later work en- forced orthogonal (Henaff et al., 2016; Helfrich et al., 2018; Jing et al., 2019) or unitary (Arjovsky et al., 2016) constraints on the recurrent weight matrix during training. While this appears to yield long-term memory performance sometimes superior to that of LSTMs (but see (Henaff et al., 2016)), these networks are limited in their computational power (Kerg et al., 2019). This may be a consequence of the fact that RNNs with orthogonal recurrence matrix are quite restricted in the range of dynamical phenomena they can produce, e.g. chaotic attractors are not possible since (locally) diverging eigen-directions are disabled. Our approach therefore is to establish line/plane attractors only along some but not all directions in state space, and to only push the RNN toward these configurations but not strictly enforce them, such that convergence or (local) divergence of RNN dynamics is still possible. We furthermore implement these concepts through regularization terms in the loss functions, rather than through mere initialization. This way plane attractors are encouraged throughout training without fading away. 
3 MODEL FORMULATION AND THEORETICAL ANALYSIS 3.1 BASIC MODEL FORMULATION Assume we are given two multivariate time series S = {s_t} and X = {x_t}, one of which we will denote as ‘inputs’ (S) and the other as ‘outputs’ (X). In the ‘classical’ (supervised) machine learning setting, we usually wish to map S onto X through a RNN with latent state equation z_t = F_θ(z_{t−1}, s_t) and outputs x_t ∼ p_λ(x_t|z_t), as for instance in the ‘addition problem’ (Hochreiter & Schmidhuber, 1997). In DS reconstruction, in contrast, we usually have a dense time series X from which we wish to infer (unsupervised) the underlying DS, where S may provide an additional forcing function or sparse experimental inputs or perturbations. While our focus in this paper is on this latter task, DS reconstruction, we will demonstrate that our approach brings benefits in both these settings. Here we consider for the latent model a PLRNN (Koppe et al., 2019) which takes the form z_t = A z_{t−1} + W φ(z_{t−1}) + C s_t + h + ε_t, ε_t ∼ N(0, Σ), (1) where z_t ∈ R^{M×1} is the hidden state (column) vector of dimension M, A ∈ R^{M×M} a diagonal and W ∈ R^{M×M} an off-diagonal matrix, s_t ∈ R^{K×1} the external input of dimension K, C ∈ R^{M×K} the input mapping, h ∈ R^{M×1} a bias, and ε_t a Gaussian noise term with diagonal covariance matrix diag(Σ) ∈ R^M_+. The nonlinearity φ(z) is a ReLU, φ(z)_i = max(0, z_i), i ∈ {1, . . . , M}. This specific formulation represents a discrete-time version of firing rate (population) models as used in computational neuroscience (Song et al., 2016; Durstewitz, 2017; Engelken et al., 2020). We will assume that the latent RNN states z_t are coupled to the actual observations x_t through a simple observation model of the form x_t = B g(z_t) + η_t, η_t ∼ N(0, Γ) (2) in the case of observations x_t ∈ R^{N×1}, where B ∈ R^{N×M} is a factor loading matrix, g some (usually monotonic) nonlinear transfer function (e.g., ReLU), and diag(Γ) ∈ R^N_+ the diagonal covariance matrix of the Gaussian observation noise, or through a softmax function in case of categorical observations x_{i,t} ∈ {0, 1} (see Suppl. 6.1.7 for details). 3.2 REGULARIZATION APPROACH First note that by letting A = I, W = 0, and h = 0 in eq. 1, every point in z space will be a marginally stable fixed point of the system, leading it to perform a perfect integration of external inputs as in parametric working memory (Machens et al., 2005; Brody et al., 2003). (Note that this very property of marginal stability required for input integration also makes the system sensitive to noise perturbations directly on the manifold attractor; interestingly, this property has indeed been observed experimentally for real neural integrator systems (Major et al., 2004; Mizumori & Williams, 1993).) This is similar in spirit to Le et al. (2015) who initialized RNN parameters such that they perform an identity mapping for z_{i,t} ≥ 0. However, here 1) we use a neuroscientifically motivated network architecture (eq. 1) that enables the identity mapping across the variables’ entire support, z_{i,t} ∈ [−∞, +∞], which we conjecture will be of advantage for establishing long short-term memory properties, 2) we encourage this mapping only for a subset M_reg ≤ M of units (Fig. S1), leaving others free to perform arbitrary computations, and 3) we stabilize this configuration throughout training by introducing a specific L2 regularization for parameters A, W, and h in eq. 1. When embedded into a larger, (locally) convergent system, we will call this configuration more generally a manifold attractor.
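As an illustration of the latent model eq. 1 and the observation model eq. 2, the following minimal NumPy sketch simulates a PLRNN forward in time. Dimensions, parameter values, and the absence of external input are arbitrary choices made here for illustration only and do not correspond to any model reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 6, 3, 2   # latent, observation, and input dimensions (arbitrary here)

# PLRNN parameters (eq. 1): A diagonal, W off-diagonal (zero diagonal).
A = np.diag(rng.uniform(0.3, 0.9, size=M))
W = rng.normal(0, 0.2, size=(M, M)); np.fill_diagonal(W, 0.0)
C = rng.normal(0, 0.5, size=(M, K))
h = rng.normal(0, 0.1, size=M)
Sigma = 0.01 * np.ones(M)            # diagonal process-noise covariance
B = rng.normal(0, 0.5, size=(N, M))  # observation (factor loading) matrix, eq. 2
Gamma = 0.01 * np.ones(N)            # diagonal observation-noise covariance

relu = lambda z: np.maximum(0.0, z)

def plrnn_step(z, s):
    """One step of eq. 1: z_t = A z_{t-1} + W phi(z_{t-1}) + C s_t + h + eps_t."""
    eps = rng.normal(0, np.sqrt(Sigma))
    return A @ z + W @ relu(z) + C @ s + h + eps

def observe(z):
    """Eq. 2 with g = ReLU: x_t = B g(z_t) + eta_t."""
    eta = rng.normal(0, np.sqrt(Gamma))
    return B @ relu(z) + eta

z = rng.normal(size=M)
for t in range(100):
    s_t = np.zeros(K)          # no external input in this toy run
    z = plrnn_step(z, s_t)
    x = observe(z)
print("final latent state:", np.round(z, 3))
print("final observation:", np.round(x, 3))
```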
That way, we divide the units into two types, where the regularized units serve as a memory that tends to decay very slowly (depending on the size of the regularization term), while the remaining units maintain the flexibility to approximate any underlying DS, yet retaining the simplicity of the original PLRNN (eq. 1). Specifically, the following penalty is added to the loss function (Fig. S1): L_reg = τ_A ∑_{i=1}^{M_reg} (A_{i,i} − 1)^2 + τ_W ∑_{i=1}^{M_reg} ∑_{j=1, j≠i}^{M} W_{i,j}^2 + τ_h ∑_{i=1}^{M_reg} h_i^2 (3) (Recall from sect. 3.1 that A is a diagonal and W is an off-diagonal matrix.) While this formulation allows us to trade off, for instance, the tendency toward a manifold attractor (A → I, h → 0) vs. the sensitivity to other units’ inputs (W → 0), for all experiments performed here a common value, τ_A = τ_W = τ_h = τ, was assumed for the three regularization factors. We will refer to (z_1 . . . z_{M_reg}) as the regularized (‘memory’) subsystem, and to (z_{M_reg+1} . . . z_M) as the non-regularized (‘computational’) subsystem. Note that in the limit τ → ∞ exact manifold attractors would be enforced. 3.3 THEORETICAL ANALYSIS We will now establish a tight connection between the PLRNN dynamics and its error gradients. Similar ideas appeared in Chang et al. (2019), but these authors focused only on fixed point dynamics, while here we will consider the more general case including cycles of any order. First, note that by interpretability of model eq. 1 we mean that it is easily amenable to a rigorous DS analysis: As shown in Suppl. 6.1.2, we can explicitly determine all the system’s fixed points and cycles and their stability. Moreover, as shown in Monfared & Durstewitz (2020b), we can – under certain conditions – transform the PLRNN into an equivalent continuous-time (ODE) piecewise-linear system, which brings further advantages for DS analysis. Let us rewrite eq. 1 in the form z_t = F(z_{t−1}) = (A + W D_{Ω(t−1)}) z_{t−1} + h := W_{Ω(t−1)} z_{t−1} + h, (4) where D_{Ω(t−1)} is the diagonal matrix of outer derivatives of the ReLU function evaluated at z_{t−1} (see Suppl. 6.1.2), and we ignore external inputs and noise terms for now. Starting from some initial condition z_1, we can recursively develop z_T as (see Suppl. 6.1.2 for more details): z_T = F^{T−1}(z_1) = ∏_{i=1}^{T−1} W_{Ω(T−i)} z_1 + [ ∑_{j=2}^{T−1} ∏_{i=1}^{j−1} W_{Ω(T−i)} + I ] h. (5) Likewise, for some common loss function L(A, W, h) = ∑_{t=2}^{T} L_t, we can recursively develop the derivatives w.r.t. weights w_{mk} (and similarly for components of A and h) as ∂L/∂w_{mk} = ∑_{t=2}^{T} (∂L_t/∂z_t) (∂z_t/∂w_{mk}), with ∂z_t/∂w_{mk} = 1_{(m,k)} D_{Ω(t−1)} z_{t−1} + ∑_{j=2}^{t−2} ( ∏_{i=1}^{j−1} W_{Ω(t−i)} ) 1_{(m,k)} D_{Ω(t−j)} z_{t−j} + ∏_{i=1}^{t−2} W_{Ω(t−i)} ∂z_2/∂w_{mk}, (6) where 1_{(m,k)} is an M × M indicator matrix with a 1 for the (m, k)’th entry and 0 everywhere else. Observing that eqs. 5 and 6 contain similar product terms which determine the system’s long-term behavior, our first theorem links the PLRNN dynamics to its total error gradients: Theorem 1. Consider a PLRNN given by eq. 4, and assume that it converges to a stable fixed point, say z_{t*_1} := z^{*1}, or a k-cycle (k > 1) with the periodic points {z_{t*_k}, z_{t*_k−1}, · · · , z_{t*_k−(k−1)}}, for T → ∞. Suppose that, for k ≥ 1 and i ∈ {0, 1, · · · , k − 1}, σ_max(W_{Ω(t*_k−i)}) = ‖W_{Ω(t*_k−i)}‖ < 1, where W_{Ω(t*_k−i)} denotes the Jacobian of the system at z_{t*_k−i} and σ_max indicates the largest singular value of a matrix. Then, the 2-norms of the tensors collecting all derivatives, ‖∂z_T/∂W‖_2, ‖∂z_T/∂A‖_2, ‖∂z_T/∂h‖_2, will be bounded from above, i.e. will not diverge for T → ∞. Proof. See Suppl. sect. 6.1 (subsection 6.1.3).
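The penalty of eq. 3 is straightforward to implement. Below is a small PyTorch sketch (function and variable names such as manifold_attractor_penalty are ours, not from the paper's code base) showing how it could be added to a task loss with a common factor τ, as used in the experiments.

```python
import torch

def manifold_attractor_penalty(A_diag, W, h, M_reg, tau):
    """Penalty of eq. 3 with a shared factor tau = tau_A = tau_W = tau_h.

    A_diag: (M,) diagonal entries of A; W: (M, M) recurrent weights; h: (M,) bias.
    Only the first M_reg ('memory') units are regularized, pushing A_ii -> 1,
    W_ij -> 0 (j != i), and h_i -> 0, i.e. toward a manifold attractor.
    """
    pen_A = ((A_diag[:M_reg] - 1.0) ** 2).sum()
    W_off = W - torch.diag(torch.diagonal(W))       # sum over j != i only
    pen_W = (W_off[:M_reg, :] ** 2).sum()
    pen_h = (h[:M_reg] ** 2).sum()
    return tau * (pen_A + pen_W + pen_h)

# Toy usage: add the penalty to whatever task loss is being minimized.
M, M_reg = 40, 20
A_diag = torch.nn.Parameter(torch.rand(M))
W = torch.nn.Parameter(0.1 * torch.randn(M, M))
h = torch.nn.Parameter(torch.zeros(M))
task_loss = torch.tensor(0.0)                       # stand-in for MSE / cross-entropy
loss = task_loss + manifold_attractor_penalty(A_diag, W, h, M_reg, tau=1.0)
loss.backward()
```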
While Theorem 1 is a general statement about PLRNN dynamics and total gradients, our next theorem more specifically provides conditions under which Jacobians linking temporally distant states zT and zt, T t, will neither vanish nor explode in the regularized PLRNN: Theorem 2. Assume a PLRNN with matrix A + W partitioned as in Fig. S1, i.e. with the first Mreg rows corresponding to those of an M ×M identity matrix. Suppose that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1, i.e. converges to a kcycle with k ≥ 1. Then, for the full system (z1 . . . zM ), the 2-norm of the Jacobians connecting temporally distal states zT and zt will be bounded from above and below for all T > t, i.e. ∞ > ρup ≥ ∥∥∥∂zT∂zt ∥∥∥2 = ∥∥∥∏t<k≤T WΩ(k)∥∥∥2 ≥ ρlow > 0. In particular, for state variables ziT and zjt such that i ∈ {Mreg + 1, · · · ,M} and j ∈ {1, · · · ,Mreg}, i.e. that connect states from the ‘memory’ to those of the ‘computational’ subsystem, one also has∞ > λup ≥ ∣∣∣∂ziT∂zjt ∣∣∣ ≥ λlow > 0 as T − t→∞, i.e. these derivatives will never vanish nor explode. Proof. See Suppl. sect. 6.1 (subsection 6.1.4). The bounds ρup, ρlow, λup, λlow, are given in Suppl. sect. 6.1.4. We remark that when the regularization conditions are not exactly met, i.e. when parametersA andW slightly deviate from those in Fig. S1, memory (and gradients) may ultimately dissipate, but only very slowly, as actually required for temporal processes with very slow yet not infinite time constants (Fig. 1B). 3.4 TRAINING PROCEDURES For the (supervised) machine learning problems, all networks were trained by stochastic gradient descent (SGD) to minimize the squared-error loss between estimated and actual outputs for the addition and multiplication problems, and the cross entropy loss for sequential MNIST (see Suppl. 6.1.7). Adam (Kingma & Ba, 2014) from PyTorch package (Paszke et al., 2017) was used as the optimizer, with a learning rate of 0.001, gradient clip parameter of 10, and batch size of 500. SGD was stopped after 100 epochs and the fit with the lowest loss across all epochs was taken, except for LSTM which was allowed to run for up to 200 epochs as it took longer to converge (Fig. S10). For comparability, the PLRNN latent state dynamics eq. 1 was assumed to be deterministic in this setting (i.e., Σ = 0), g(zt) = zt and Γ = IN in eq. 2. For the regularized PLRNN (rPLRNN), penalty eq. 3 was added to the loss function. For the (unsupervised) DS reconstruction problems, the fully probabilistic, generative RNN eq. 1 was considered. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear state space model (Durbin & Koopman, 2012) with observation and process noise, and an Expectation-Maximization (EM) algorithm that efficiently exploits the model’s piecewise linear structure (Durstewitz, 2017; Koppe et al., 2019) was used to solve for the parameters by maximum likelihood. Details are given in Suppl. 6.1.5. All code used here will be made openly available at https://github.com/DurstewitzLab/reg-PLRNN. 3.5 PERFORMANCE MEASURES For the machine learning benchmarks we employed the same criteria as used for optimization (MSE or cross-entropy, Suppl. 6.1.7) as performance metrics, evaluated across left-out test sets. In addition, we report the relative frequency Pcorrect of correctly predicted trials across the test set (see Suppl. 6.1.7 for details). 
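The following sketch illustrates the supervised training setup described in sect. 3.4 (Adam, learning rate 0.001, gradient clipping at 10, batch size 500) with a toy deterministic PLRNN cell and an artificial regression target. It is a schematic stand-in, not the authors' implementation; for the rPLRNN, the penalty of eq. 3 (sketched above) would simply be added to the loss before the backward pass.

```python
import torch
from torch import nn

class TinyPLRNNCell(nn.Module):
    """Minimal deterministic PLRNN cell (eq. 1 with Sigma = 0), for illustration only."""
    def __init__(self, M, K):
        super().__init__()
        self.A_diag = nn.Parameter(torch.rand(M))
        self.W = nn.Parameter(0.1 * torch.randn(M, M))
        self.C = nn.Parameter(0.1 * torch.randn(M, K))
        self.h = nn.Parameter(torch.zeros(M))

    def forward(self, z, s):
        W_off = self.W - torch.diag(torch.diagonal(self.W))   # keep W off-diagonal
        return self.A_diag * z + torch.relu(z) @ W_off.T + s @ self.C.T + self.h

M, K, T, batch = 40, 2, 50, 500
cell, readout = TinyPLRNNCell(M, K), nn.Linear(M, 1)
params = list(cell.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

S = torch.rand(batch, T, K)                 # stand-in inputs
y = S[:, :, 0].sum(dim=1, keepdim=True)     # stand-in regression target

for epoch in range(3):                      # the paper trains for up to 100 epochs
    z = torch.zeros(batch, M)
    for t in range(T):
        z = cell(z, S[:, t])
    loss = nn.functional.mse_loss(readout(z), y)   # + eq. 3 penalty for the rPLRNN
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, 10.0)
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```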
For DS reconstruction problems, it is not sufficient or even sensible to judge a method’s ability to infer the underlying DS purely based on some form of (ahead-)prediction error like the MSE defined on the time series itself (Ch.12 in Kantz & Schreiber (2004)). Rather, we require that the inferred model can freely reproduce (when no longer guided by the data) the underlying attractor geometries and state space properties. This is not automatically guaranteed for a model that yields agreeable ahead predictions on a time series (Fig. S2A; cf. Koppe et al. (2019); Wood (2010)). We therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between true and reproduced probability distributions across states in state space to quantify how well an inferred PLRNN captured the underlying dynamics, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. (1991)) (see Suppl. 6.1.6 for more details). 4 NUMERICAL EXPERIMENTS 4.1 MACHINE LEARNING BENCHMARKS Although not our prime interest here, we first examined how the rPLRNN would fare on supervised machine learning benchmarks where inputs (S) are to be mapped onto target outputs (X) across long time spans (i.e., requiring long short-term maintenance of information), namely the addition and multiplication problems (Talathi & Vartak, 2016; Hochreiter & Schmidhuber, 1997), and sequential MNIST (LeCun et al., 2010). Details of these experimental setups are in Suppl. 6.1.7. Performance of the rPLRNN (eq. 1, eq. 3) on all 3 benchmarks was compared to several other models summarized in Suppl. Table 1. To achieve a meaningful comparison, all models have the same number M = 40 (based on Fig. S3) of hidden states (which gives LSTMs overall about 4 times as many trainable parameters). On all three problems the rPLRNN outperforms all other tested methods, including LSTM, iRNN (RNN initialized by the identity matrix as in Le et al. (2015)), and a version of the orthogonal RNN (oRNN; Vorontsov et al. (2017)) (similar results were obtained for other settings of M and batch size). LSTM performs even worse than iRNN and iPLRNN (PLRNN initialized with the identity as the iRNN), although it had 4 times as many parameters and was given twice as many epochs (and thus opportunities) for training, as it also took longer to converge (Fig. S10). In addition, the iPLRNN tends to perform slightly better than the iRNN on all three problems, suggesting that the specific structure eq. 1 of the PLRNN that allows for a manifold attractor across the variables’ full range may be advantageous to begin with, while the regularization further improves performance. 4.2 NUMERICAL EXPERIMENTS ON DYNAMICAL SYSTEMS WITH DIFFERENT TIME SCALES While it is encouraging that the rPLRNN may perform even better than several previous approaches to the vanishing vs. exploding gradients problem, our major goal here was to examine whether our regularization scheme would help with the (unsupervised) identification of DS that harbor widely different time scales. To test this, we used a biophysical, bursting cortical neuron model with one voltage (V ) and two conductance recovery variables (see Durstewitz (2009)), one slow (h) and one fast (n; Suppl. 6.1.8). Reproduction of this DS is challenging since it produces very fast spikes on top of a slow nonlinear oscillation (Fig. 3D). Only short time series (as in scientific data) of length T = 1500 from this model were provided for training. rPLRNNs with M = {8 . . . 
18} states were trained, with the regularization factor varied within τ ∈ {0, 101, 102, 103, 104, 105}/T . Note that for τ = 0 (no regularization), the approach reduces to the standard PLRNN (Koppe et al., 2019). Fig. 3A confirms our intuition that stronger regularization leads to better DS reconstruction as assessed by the KL divergence between true and generated state distributions (similar results were obtained with ahead-prediction errors as a metric, Fig. S4A), accompanied by a likewise decrease in the MSE between the power spectra of true (suppl. eq. 55) and generated (rPLRNN) voltage traces (Fig. 3B). Fig. 3D gives an example of voltage traces (V ) and the slower of the two gating variables (h; see Fig. S5A for variable n) freely simulated (i.e., sampled) from the autonomously running rPLRNN. This illustrates that our model is in principle capable of capturing both the stiff spike dynamics and the slower oscillations in the second gating variable at the same time. Fig. 3C provides more insight into how the regularization worked: While the high frequency components (> 50 Hz) related to the repetitive spiking activity hardly benefited from increasing τ , there was a strong reduction in the MSE computed on the power spectrum for the lower frequency range (≤ 50 Hz), suggesting that increased regularization helps to map slowly evolving components of the dynamics. This result is more general as shown in Fig. S6 for another DS example. In contrast, an orthogonality (Vorontsov et al., 2017) or plain L2 constraint on weight matrices did not help at all on this problem (Fig. S4B). Further insight into the dynamical mechanisms by which the rPLRNN solves the problem can be obtained by examining the latent dynamics: As shown in Fig. 3E (see also Fig. S5), regularized states indeed help to map the slow components of the dynamics, while non-regularized states focus on the fast spikes. These observations further corroborate the findings in Fig. 3C and Fig. S6C. 4.3 REGULARIZATION PROPERTIES AND MANIFOLD ATTRACTORS In Figs. 2 and 3 we demonstrated that the rPLRNN is able to solve problems and reconstruct dynamics that involve long-range dependencies. Figs. 3A,B furthermore directly confirm that solutions improve with stronger regularization, while Figs. 3C,E give insight into the mechanism by which the regularization works. To further verify empirically that our specific form of regularization, eq. 3, is important, Fig. 2 also shows results for a PLRNN with standard L2 norm on a fraction of Mreg/M = 0.5 states (L2pPLRNN). Fig. S7 provides additional results for PLRNNs with L2 norm on all weights and for vanilla L2-regularized RNNs. All these systems fell far behind the performance of the rPLRNN on all tasks tested. Moreover, Fig. 4 reveals that the specific regularization proposed indeed encourages manifold attractors, and that this is not achieved by a standard L2 regularization: In contrast to L2PLRNN, as the regularization factor τ is increased, more and more of the maximum absolute eigenvalues around the system’s fixed points (computed according to eq. 8, sect. 6.1.2) cluster on or near 1, indicating directions of marginal stability in state space. Also, the deviations from 1 become smaller for strongly regularized PLRNNs (Fig. 4B,D), indicating a higher precision in attractor tuning. Fig. S9 in addition confirms that rPLRNN parameters are increasingly driven toward values that would support manifold attractors with stronger regularization. Fig. 
3E furthermore suggests that both regularized and non-regularized states are utilized to map the full dynamics. But how should the ratio Mreg/M be chosen in practice? While for the problems here this meta-parameter was determined through ‘classical’ grid-search and cross-validation, Figs. S3 C – E suggest that the precise setting of Mreg/M is actually not overly important: Nearly optimal performance is achieved for a broader range Mreg/M ∈ [0.3, 0.6] on all problems tested. Hence, in practice, setting Mreg/M = 0.5 should mostly work fine. 5 CONCLUSIONS In this work we introduced a simple solution to the long short-term memory problem in RNNs that retains the simplicity and tractability of PLRNNs, yet does not curtail their universal computational capabilities (Koiran et al., 1994; Siegelmann & Sontag, 1995) and their ability to approximate arbitrary DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998; Trischler & D’Eleuterio, 2016). We achieved this by adding regularization terms to the loss function that encourage the system to form a ‘memory subspace’ (Seung, 1996; Durstewitz, 2003) which would store arbitrary values for, if unperturbed, arbitrarily long periods. At the same time we did not rigorously enforce this constraint, which allowed the system to capture slow time scales by slightly departing from a perfect manifold attractor. In neuroscience, this has been discussed as a dynamical mechanism for regulating the speed of flow in DS and learning of arbitrary time constants not naturally included qua RNN design (Durstewitz, 2003; 2004) (Fig. 1B). While other RNN architectures, including vanilla RNNs, can, in principle, also develop line attractors to solve specific tasks (Maheswaranathan et al., 2019), they are generally much harder to train to achieve this and may exhibit less precise attractor tuning (cf. Fig. 4), which is needed to bridge long time scales (Durstewitz, 2003). Moreover, part of the PLRNN’s latent space was not regularized at all, leaving the system enough degrees of freedom for realizing arbitrary computations or dynamics (see also Fig. S11 for a chaotic example). We showed that the rPLRNN is en par with or outperforms initialization-based approaches, orthogonal RNNs, and LSTMs on a number of classical benchmarks. More importantly, however, the regularization strongly facilitates the identification of challenging DS with widely different time scales in PLRNN-based algorithms for DS reconstruction. Similar regularization schemes as proposed here (eq. 3) may, in principle, also be designed for other architectures, but the convenient mathematical form of the PLRNN makes their implementation particularly powerful and straightforward. ACKNOWLEDGEMENTS This work was funded by grants from the German Research Foundation (DFG) to DD (Du 354/10-1, Du 354/8-2 within SPP 1665) and to GK (TRR265: A06 & B08), and under Germany’s Excellence Strategy – EXC-2181 – 390900948 (’Structures’). 6 APPENDIX 6.1 SUPPLEMENTARY TEXT 6.1.1 Simple exact PLRNN solution for addition problem The exact PLRNN parameter settings (cf. eq. 1, eq. 2) for solving the addition problem with 2 units (cf. Fig. 1C) are as follows: A = ( 1 0 0 0 ) ,W = ( 0 1 0 0 ) ,h = ( 0 −1 ) ,C = ( 0 0 1 1 ) ,B = (1 0) (7) 6.1.2 Computation of fixed points and cycles in PLRNN Consider the PLRNN in the form of eq. 4. 
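As a quick numerical check of the exact addition-problem solution in eq. 7 above, the sketch below (our own illustration; the trial construction follows sect. 6.1.7) runs the 2-unit PLRNN on a randomly generated addition trial and compares its output at time T with the target s1,t1 + s1,t2.

```python
import numpy as np

# Exact parameters from eq. 7 (2-unit PLRNN solving the addition problem).
A = np.array([[1., 0.], [0., 0.]])
W = np.array([[0., 1.], [0., 0.]])
h = np.array([0., -1.])
C = np.array([[0., 0.], [1., 1.]])
B = np.array([[1., 0.]])

relu = lambda z: np.maximum(0., z)

def run_addition_trial(T=100, seed=0):
    # Build one addition-problem trial (sect. 6.1.7): s1 uniform, two indicator bits in s2.
    rng = np.random.default_rng(seed)
    s1 = rng.uniform(0., 1., size=T)
    s2 = np.zeros(T)
    t1, t2 = rng.integers(0, 10), rng.integers(10, T // 2)
    s2[t1], s2[t2] = 1., 1.
    target = s1[t1] + s1[t2]

    z = np.zeros(2)
    for t in range(T):
        s_t = np.array([s1[t], s2[t]])
        z = A @ z + W @ relu(z) + C @ s_t + h      # eq. 1 without noise
    output = (B @ z)[0]                            # eq. 2 with g(z) = z
    return output, target

out, tgt = run_addition_trial()
print(f"network output {out:.6f} vs. target {tgt:.6f}")
```

We now return to the fixed-point computation for the general PLRNN of eq. 4.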
For clarity, let us define dΩ(t) := (d1, d2, · · · , dM ) as an indicator vector with dm(zm,t) := dm = 1 for all states zm,t > 0 and zeros otherwise, and DΩ(t) := diag(dΩ(t)) as the diagonal matrix formed from this vector. Note that there are at most 2M distinct matricesWΩ(t) as defined in eq. 4, depending on the sign of the components of zt. If h = 0 and WΩ(t) is the identity matrix, then the map F becomes the identity map and so every point z will be a fixed point of F . Otherwise, the fixed points of F can be found solving the equation F (z∗1) = z∗1 as z∗1 = (I −WΩ(t∗1))−1 h = H∗1 h, (8) where z∗1 = zt∗1 = zt∗1−1, if det(I − WΩ(t∗1)) = PWΩ(t∗1)(1) 6= 0, i.e. WΩ(t∗1) has no eigenvalue equal to 1. Stability and type of fixed points (node, saddle, spiral) can then be determined from the eigenvalues of the JacobianA+WDΩ(t∗1) = WΩ(t∗1) (Strogatz (2015)). For k > 1, solving F k(z∗k) = z∗k, one can obtain a k-cycle of the map F with the periodic points {z∗k, F (z∗k), F 2(z∗k), · · · , F k−1(z∗k)}. For this, we first compute F k as follows: zt = F (zt−1) = WΩ(t−1) zt−1 + h, zt+1 = F 2(zt−1) = F (zt) = WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t) + I ) h, zt+2 = F 3(zt−1) = F (zt+1) = WΩ(t+1)WΩ(t)WΩ(t−1) zt−1 + ( WΩ(t+1)WΩ(t) +WΩ(t+1) + I ) h, ... zt+(k−1) = F k(zt−1) = k+1∏ i=2 WΩ(t+(k−i)) zt−1 + [ k∑ j=2 k−j+2∏ i=2 WΩ(t+(k−i)) + I ] h, (9) in which ∏k+1 i=2 WΩ(t+(k−i)) = WΩ(t+(k−2))WΩ(t+(k−3)) · · · WΩ(t−1). Assuming t+(k−1) := t∗k, then the k-cycle is given by the fixed point of the k-times iterated map F k as z∗k = ( I − k∏ i=1 WΩ(t∗k−i) )−1 [ k∑ j=2 k−j+1∏ i=1 WΩ(t∗k−i) + I ] h = H∗k h, (10) where z∗k = zt∗k = zt∗k−k, provided that I − ∏k i=1WΩ(t∗k−i) is invertible. That is det ( I − ∏k i=1WΩ(t∗k−i) ) = P∏k i=1WΩ(t∗k−i) (1) 6= 0 and ∏k i=1WΩ(t∗k−i) := WΩ∗k has no eigenvalue equal to 1. As for the fixed points, we can determine stability of the k-cycle from the eigenvalues of the Jacobians ∏k i=1WΩ(t∗k−i). It may also be helpful to spell out the recursions in eq. 5 and eq. 6 in section 3.3 in a bit more detail. Analogously to the derivations above, for t = 1, 2, . . . , T we can recursively compute z2, z3, . . . ,zT (T ∈ N) as z2 = F (z1) = WΩ(1) z1 + h, z3 = F 2(z1) = F (z2) = WΩ(2)WΩ(1) z1 + ( WΩ(2) + I ) h, ... zT = F T−1(z1) = F (zT−1) = WΩ(T−1)WΩ(T−2) · · ·WΩ(1) z1 + ( WΩ(T−1)WΩ(T−2) · · ·WΩ(2) +WΩ(T−1)WΩ(T−2) · · ·WΩ(3) + · · ·+WΩ(T−1) + I ) h = T−1∏ i=1 WΩ(T−i) z1 + [ T−2∑ j=1 T−j−1∏ i=1 WΩ(T−i) + I ] h = T−1∏ i=1 WΩ(T−i) z1 + [ T−1∑ j=2 j−1∏ i=1 WΩ(T−i) + I ] h. (11) Likewise, we can write out the derivatives eq. 6 more explicitly as ∂zt ∂wmk = ∂F (zt−1) ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) )∂zt−1 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2) zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )∂zt−2 ∂wmk = 1(m,k)DΩ(t−1) zt−1 + ( A+WDΩ(t−1) ) 1(m,k)DΩ(t−2)zt−2 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) ) 1(m,k)DΩ(t−3)zt−3 + ( A+WDΩ(t−1) )( A+WDΩ(t−2) )( A+WDΩ(t−3) )∂zt−3 ∂wmk = · · · = 1(m,k)DΩ(t−1) zt−1 + t−2∑ j=2 ( j−1∏ i=1 WΩ(t−i) ) 1(m,k)DΩ(t−j) zt−j + t−2∏ i=1 WΩ(t−i) ∂z2 ∂wmk (12) where ∂z2∂wmk = ( ∂z1,2 ∂wmk · · · ∂zM,2∂wmk ) with ∂zl,2 ∂wmk = 0∀ l 6= m and ∂zm,2∂wmk = dkzk,1. The derivatives w.r.t. the elements ofA and h can be expanded in a similar way, only that the termsDΩ(t) zt on the last line of eq. 12 need to be replaced by just zt for ∂zt∂amm , and by just a vector of 1’s for ∂zt ∂hm (also, in these cases, the indicator matrix will be the diagonal matrix 1(m,m)). 
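The fixed-point computation of eq. 8 can be carried out by enumerating the (at most 2^M) ReLU activation patterns, solving the corresponding linear system, and keeping only candidates whose sign pattern is consistent with the assumed pattern. A minimal NumPy sketch with arbitrary small random parameters follows (illustrative only; for larger M one would not enumerate all patterns exhaustively).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
M = 3
A = np.diag(rng.uniform(-0.8, 0.8, size=M))
W = rng.normal(0., 0.6, size=(M, M)); np.fill_diagonal(W, 0.)
h = rng.normal(0., 1., size=M)

fixed_points = []
for d in product([0., 1.], repeat=M):          # all 2^M ReLU activation patterns
    D = np.diag(d)
    W_omega = A + W @ D                        # Jacobian in this linear region, eq. 4
    if np.isclose(np.linalg.det(np.eye(M) - W_omega), 0.):
        continue                               # eigenvalue 1: eq. 8 not solvable
    z_star = np.linalg.solve(np.eye(M) - W_omega, h)   # candidate fixed point, eq. 8
    # The candidate is a true fixed point only if its sign pattern matches d.
    if np.all((z_star > 0) == np.array(d, dtype=bool)):
        stable = np.max(np.abs(np.linalg.eigvals(W_omega))) < 1.
        fixed_points.append((z_star, stable))

for z_star, stable in fixed_points:
    print(np.round(z_star, 3), "stable" if stable else "unstable")
```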
6.1.3 Proof of Theorem 1 To state the proof, let us rewrite the derivatives of the loss function L(W ,A,h) = ∑T t=1 Lt in the following tensor form: ∂L ∂W = T∑ t=1 ∂Lt ∂W , where ∂Lt ∂W = ∂Lt ∂zt ∂zt ∂W , (13) for which the 3D tensor ∂zt ∂W = ∂z1,t ∂W ∂z2,t ∂W ... ∂zM,t ∂W (14) of dimension M ×M ×M , consists of all the gradient matrices ∂zi,t ∂W = ∂zi,t ∂w11 ∂zi,t ∂w12 · · · ∂zi,t∂w1M ∂zi,t ∂w21 ∂zi,t ∂w22 · · · ∂zi,t∂w2M ... ∂zi,t ∂wM1 ∂zi,t ∂wM2 · · · ∂zi,t∂wMM := ∂zi,t ∂w1∗ ∂zi,t ∂w2∗ ... ∂zi,t ∂wM∗ , i = 1, 2, · · · ,M, (15) where wi∗ ∈ RM is a row-vector. Now, suppose that {z1, z2, z3, . . .} is an orbit of the system which converges to a stable fixed point, i.e. lim T→∞ zT = z ∗k. Then lim T→∞ zT = lim T→∞ ( WΩ(T−1) zT−1 + h ) = z∗1 = WΩ(t∗1) z ∗1 + h, (16) and so lim T→∞ ( WΩ(T−1) ) z∗1 = WΩ(t∗1) z ∗1. (17) Assume that lim T→∞ ( WΩ(T−1) ) = L. Since eq. 17 holds for every z∗1, then substituting z∗1 = eT1 = (1, 0, · · · , 0)T in eq. 17, we can prove that the first column of L equals the first column of WΩ(t∗1). Performing the same procedure for z∗1 = eTi , i = 2, 3, · · · ,M , yields lim T→∞ WΩ(T−1) = WΩ(t∗1). (18) Also, for every i ∈ N (1 < i <∞) lim T→∞ WΩ(T−i) = WΩ(t∗1), (19) i.e. ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ ≤ . (20) Thus, ∥∥WΩ(T−i)∥∥− ∥∥WΩ(t∗1)∥∥ ≤ ∥∥WΩ(T−i) −WΩ(t∗1)∥∥ gives ∀ > 0 ∃N ∈ N s.t. T − i ≥ N =⇒ ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ . (21) Since T − 1 > T − 2 > · · · > T − i ≥ N , so ∀ > 0 ∥∥WΩ(T−i)∥∥ ≤ ∥∥WΩ(t∗1)∥∥+ , i = 1, 2, · · · , T −N. (22) Hence ∀ > 0 ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ T−N∏ i=1 ∥∥WΩ(T−i)∥∥ ≤ (∥∥WΩ(t∗1)∥∥+ )T−N . (23) If ∥∥WΩ(t∗1)∥∥ < 1, then for any < 1, considering ̄ ≤ +‖WΩ(t∗1)‖2 < 1, it is concluded that∥∥∥∥∥ limT→∞ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ = limT→∞ ∥∥∥∥∥ T−N∏ i=1 WΩ(T−i) ∥∥∥∥∥ ≤ limT→∞(∥∥WΩ(t∗1)∥∥+ ̄)T−N = 0. (24) Therefore lim T→∞ T−1∏ i=1 WΩ(T−i) = 0. (25) If the orbit {z1, z2, z3, . . .} tends to a stable k-cycle (k > 1) with the periodic points {F k(z∗k), F k−1(z∗k), F k−2(z∗k), · · · , F (z∗k)} = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1)}, then, denoting the stable k-cycle by Γk = {zt∗k , zt∗k−1, · · · , zt∗k−(k−1), zt∗k , zt∗k−1, · · · , zt∗k−(k−1), · · · }, (26) we have lim T→∞ d(zT ,Γk) = 0. (27) Hence, there exists a neighborhood U of Γk and k sub-sequences {zTkn}∞n=1, {zTkn+1}∞n=1, · · · , {zTkn+(k−1)}∞n=1 of the sequence {zT }∞T=1 such that these sub-sequences belong to U and (i) zTkn+s = F k(zTk(n−1)+s), s = 0, 1, 2, · · · , k − 1, (ii) lim T→∞ zTkn+s = zt∗k−s, s = 0, 1, 2, · · · , k − 1, (iii) for every zT ∈ U there is some s ∈ {0, 1, 2, · · · , k − 1} such that zT ∈ {zTkn+s}∞n=1. In this case, for every zT ∈ U with zT ∈ {zTkn+s}∞n=1 we have lim T→∞ zT = zt∗k−s for some s = 0, 1, 2, · · · , k − 1. Therefore, continuity of F implies that lim T→∞ F (zT ) = F (zt∗k−s) and so lim T→∞ ( WΩ(T ) zT + h ) = WΩ(t∗k−s) zt∗k−s + h. (28) Thus, similarly, we can prove that ∃ s ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T ) = WΩ(t∗k−s). (29) Analogously, for every i ∈ N (1 < i <∞) ∃ si ∈ {0, 1, 2, · · · , k − 1} s.t. lim T→∞ WΩ(T−i) = WΩ(t∗k−si), (30) On the other hand, ∥∥WΩ(t∗k−si)∥∥ < 1 for all si ∈ {0, 1, 2, · · · , k − 1}. So, without loss of generality, assuming max 0≤si≤k−1 {∥∥WΩ(t∗k−si)∥∥} = ∥∥WΩ(t∗k)∥∥ < 1, (31) we can again obtain some relations similar to eq. 23-eq. 25 for t∗k, k ≥ 1. Since {zT−1}∞T=1 is a convergent sequence, so it is bounded, i.e. there exists a real number q > 0 such that ||zT−1|| ≤ q for all T ∈ N. Furthermore, ∥∥DΩ(T−1)∥∥ ≤ 1 for all T . Therefore, by eq. 12 and eq. 
23 (for t∗k, k ≥ 1)∥∥∥∥ ∂zT∂wmk ∥∥∥∥ = ∣∣∣∣∣ ∣∣∣∣∣1(m,k)DΩ(T−1) zT−1 + T−1∑ j=2 ( j−1∏ i=1 WΩ(T−i) ) 1(m,k)DΩ(T−j) zT−j + T−1∏ i=1 WΩ(T−i) DΩ(1) z1 ∣∣∣∣∣ ∣∣∣∣∣ (32) ≤ ‖zT−1‖+ [ T−1∑ j=2 ∥∥∥∥∥ j−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖zT−j‖ ] + ∥∥∥∥∥ T−1∏ i=1 WΩ(T−i) ∥∥∥∥∥ ‖z1‖ ≤ q ( 1 + T−1∑ j=2 (∥∥WΩ(t∗k)∥∥+ ̄)j−1 )+ (∥∥WΩ(t∗k)∥∥+ ̄)T−1 ‖z1‖ . (33) Thus, by ∥∥WΩ(t∗k)∥∥+ ̄ < 1, we have lim T→∞ ∥∥∥∥ ∂zT∂wmk ∥∥∥∥ ≤ q(1 + ∥∥WΩ(t∗k)∥∥+ ̄ 1− ∥∥WΩ(t∗k)∥∥− ̄ ) =M <∞, (34) i.e., by eq. 14 and eq. 15, the 2-norm of total gradient matrices and hence ∥∥ ∂zt ∂W ∥∥ 2 will not diverge (explode) under the assumptions of Theorem 1. Analogously, we can prove that ∥∥∂zT ∂A ∥∥ 2 and ∥∥∂zT ∂h ∥∥ 2 will not diverge either. Since, similar as in the derivations above, it can be shown that relation eq. 34 is true for ∥∥∥ ∂zT∂amm ∥∥∥ with q = q̄, where q̄ is the upper bound of ‖zT ‖, as {zT }∞T=1 is convergent. Furthermore, relation eq. 34 also holds for∥∥∥ ∂zT∂hm ∥∥∥ with q = 1. Remark 2.1. By eq. 24 the Jacobian parts ∥∥∥∂zT∂zt ∥∥∥2 connecting any two states zT and zt, T > t, will not diverge either. Corollary 2.1. The results of Theorem 1 are also true ifWΩ(t∗k) is a normal matrix with no eigenvalue equal to one. Proof. If WΩ(t∗k) is normal, then ∥∥WΩ(t∗k)∥∥ = ρ(WΩ(t∗k)) < 1 which satisfies the conditions of Theorem 1. 6.1.4 Proof of Theorem 2 LetA,W andDΩ(k), t < k ≤ T , be partitioned as follows A = ( Ireg O T O Anreg ) , W = ( Oreg O T S Wnreg ) , DΩ(k) = ( Dkreg O T O Dknreg ) , (35) where IMreg×Mreg := Ireg ∈ RMreg×Mreg ,OMreg×Mreg := Oreg ∈ RMreg×Mreg , O,S ∈ R(M−Mreg)×Mreg , A{Mreg+1:M,Mreg+1:M} := Anreg ∈ R(M−Mreg)×(M−Mreg) is a diagonal submatrix,W{Mreg+1:M,Mreg+1:M} := Wnreg ∈ R(M−Mreg)×(M−Mreg) is an off-diagonal sub-matrix (cf. Fig. S1). Moreover, DkMreg×Mreg := D k reg ∈ RMreg×Mreg and Dk{Mreg+1:M,Mreg+1:M} := Dknreg ∈ R(M−Mreg)×(M−Mreg) are diagonal sub-matrices. Then, we have ∏ t<k≤T WΩ(k) = ∏ t<k≤T ( Ireg O T SDkreg Anreg +WnregD k nreg ) := ∏ t<k≤T ( Ireg O T SDkreg W k nreg ) = ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg. ) (36) Therefore, considering the 2-norm, we obtain∥∥∥∥∂zT∂zt ∥∥∥∥ = ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∥ ( Ireg O T SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg ) SDt+jreg ∏ t<k≤T W k nreg )∥∥∥∥∥ <∞. (37) Moreover 1 ≤ max{1, ρ(WT−t)} = ρ ( ∏ t<k≤T WΩ(k) ) ≤ ∥∥∥∥∥∥ ∏ t<k≤T WΩ(k) ∥∥∥∥∥∥ = ∥∥∥∥∂zT∂zt ∥∥∥∥ (38) where WT−t := ∏ t<k≤T W k nreg . Therefore, eq. 37 and eq. 38 yield 1 ≤ ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ ≤ ρup <∞. Furthermore, we assumed that the non-regularized subsystem (zMreg+1 . . . zM ), if considered in isolation, satisfies Theorem 1. Hence, similar to the proof of Theorem 1, it is concluded that lim T→∞ T∏ k=t W knreg = Onreg. (39) On the other hand, by definition ofDΩ(k), for every t < k ≤ T , we have ∥∥Dkreg∥∥ ≤ 1 and so∥∥SDkreg∥∥ ≤ ‖S‖ ∥∥Dkreg∥∥ ≤ ‖S‖ , (40) which, in accordance with the the assumptions of Theorem 1, by convergence of∑T j=2 ∏t+j−1 k=t+1 ∥∥W knreg∥∥ implies lim T→∞ ∥∥∥∥∥∥SDt+1reg + T∑ j=2 ( t+j−1∏ k=t+1 W knreg ) SDt+jreg ∥∥∥∥∥∥ ≤ ‖S‖ ( 1 + lim T→∞ T∑ j=2 t+j−1∏ k=t+1 ∥∥W knreg∥∥) ≤ ‖S‖Mnreg. (41) Thus, denoting Q := SDt+1reg + ∑T j=2 (∏ t<k≤t+j−1W k nreg SD t+j reg ) , from eq. 41 we deduce that λmax ( lim T→∞ (QTQ) ) = lim T→∞ ρ(QTQ) ≤ lim T→∞ ∥∥QTQ∥∥ = lim T→∞ ‖Q‖2 ≤ ( ‖S‖Mnreg )2 . (42) Now, if T − t tends to∞, then eq. 37, eq. 39 and eq. 42 result in 1 = ρlow ≤ ∥∥∥∥∂zT∂zt ∥∥∥∥ = σmax( ( Ireg O T Q Onreg )) = √ λmax(Ireg + lim T→∞ (QTQ)) = ρup < ∞. (43) Remark 2.2. If ‖S‖ = 0, then ∥∥∥∂zT∂zt ∥∥∥→ 1 as T − t→∞. 
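The block structure used in this proof can also be checked numerically. The sketch below (an illustration with made-up parameter values, not a trained model) builds a PLRNN whose first M_reg rows of A + W equal identity rows as in Fig. S1, accumulates the Jacobian product ∏ W_Ω(k) along a simulated trajectory, and compares its spectral norm with that of an unregularized PLRNN, for which the product typically decays toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)
M, M_reg, T = 10, 5, 200
relu_deriv = lambda z: (z > 0).astype(float)

def jacobian_product(A, W, h, T):
    """Simulate eq. 1 (noise-free) and accumulate J = prod_k W_Omega(k) = prod_k (A + W D_k)."""
    z = rng.normal(size=M)
    J = np.eye(M)
    for _ in range(T):
        D = np.diag(relu_deriv(z))
        J = (A + W @ D) @ J
        z = A @ z + W @ np.maximum(0., z) + h
    return J

# Regularized configuration of Fig. S1: memory rows of A are 1, of W and h are 0.
A = np.diag(np.concatenate([np.ones(M_reg), rng.uniform(0.2, 0.7, M - M_reg)]))
W = 0.3 * rng.normal(size=(M, M)); np.fill_diagonal(W, 0.)
W[:M_reg, :] = 0.
h = np.concatenate([np.zeros(M_reg), 0.1 * rng.normal(size=M - M_reg)])
J_reg = jacobian_product(A, W, h, T)

# An unregularized PLRNN with small weights for comparison.
A2 = np.diag(rng.uniform(0.2, 0.7, M))
W2 = 0.3 * rng.normal(size=(M, M)); np.fill_diagonal(W2, 0.)
h2 = 0.1 * rng.normal(size=M)
J_plain = jacobian_product(A2, W2, h2, T)

print("||dz_T/dz_t|| regularized  :", np.linalg.norm(J_reg, 2))    # bounded below by 1
print("||dz_T/dz_t|| unregularized:", np.linalg.norm(J_plain, 2))  # typically near 0
```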
6.1.5 Details on EM algorithm and DS reconstruction For DS reconstruction we request that the latent RNN approximates the true generating system of equations, which is a taller order than learning the mapping S → X or predicting future values in a time series (cf. sect. 3.5).2 This point has important implications for the design of models, inference algorithms and performance metrics if the primary goal is DS reconstruction rather than ‘mere’ time series forecasting.3 In this context we consider the fully probabilistic, generative RNN eq. 1. Together with eq. 2 (where we take g(zt) = φ(zt)) this gives the typical form of a nonlinear 2By reconstructing the governing equations we mean their approximation in the sense of the universal approximation theorems for DS (Funahashi & Nakamura, 1993; Kimura & Nakano, 1998), i.e. such that the behavior of the reconstructed system becomes dynamically equivalent to that of the true underlying system. 3In this context we also remark that models which include longer histories of hidden activations (Yu et al., 2019), as in many statistical time series models (Fan & Yao, 2003), are not formally valid DS models anymore since they violate the uniqueness of flow in state space (Strogatz, 2015). state space model (Durbin & Koopman, 2012) with observation and process noise. We solve for the parameters θ = {A,W ,C,h,µ0,Σ,B,Γ} by maximum likelihood, for which an efficient Expectation-Maximization (EM) algorithm has recently been suggested (Durstewitz, 2017; Koppe et al., 2019), which we will summarize here. Since the involved integrals are not tractable, we start off from the evidence-lower bound (ELBO) to the log-likelihood which can be rewritten in various useful ways: log p(X|θ) ≥ EZ∼q[log pθ(X,Z)] +H (q(Z|X)) = log p(X|θ)−DKL (q(Z|X)‖pθ(Z|X)) =: L (θ, q) (44) In the E-step, given a current estimate θ∗ for the parameters, we seek to determine the posterior pθ (Z|X) which we approximate by a global Gaussian q(Z|X) instantiated by the maximizer (mode) Z∗ of pθ(Z|X) as an estimator of the mean, and the negative inverse Hessian around this maximizer as an estimator of the state covariance, i.e. E[Z|X] ≈ Z∗ = arg max Z log pθ(Z|X) = arg max Z [log pθ(X|Z) + log pθ(Z)− log pθ(X)] = arg max Z [log pθ(X|Z) + log pθ(Z)] , (45) since Z integrates out in pθ(X) (equivalently, this result can be derived from a Laplace approximation to the log-likelihood, log p(X|θ) ≈ log pθ(X|Z∗)+log pθ(Z∗)− 12 log |−L ∗|+const, where L∗ is the Hessian evaluated at the maximizer). We solve this optimization problem by a fixed-point iteration scheme that efficiently exploits the model’s piecewise linear structure, as detailed below. Using this approximate posterior for pθ(Z|X), based on the model’s piecewise-linear structure most of the expectation values Ez∼q [φ(z)], Ez∼q [ φ(z)zT ] , and Ez∼q [ φ(z)φ(z)T ] , could be solved for (semi-)analytically (where z is the concatenated vector form of Z, see below). In the M-step, we seek θ∗ := arg maxθ L(θ, q∗), assuming proposal density q∗ to be given from the E-step, which for a Gaussian observation model amounts to a simple linear regression problem (see Suppl. eq. 49). To force the PLRNN to really capture the underlying DS in its governing equations, we use a previously suggested (Koppe et al., 2019) stepwise annealing protocol that gradually shifts the burden of fitting the observationsX from the observation model eq. 2 to the latent RNN model eq. 
1 during training, the idea of which is to establish a mapping from latent states Z to observations X first, fixing this, and then enforcing the temporal consistency constraints implied by eq. 1 while accounting for the actual observations. Now we briefly outline the fixed-point-iteration algorithm for solving the maximization problem in eq. 45 (for more details see Durstewitz (2017); Koppe et al. (2019)). Given a Gaussian latent PLRNN and a Gaussian observation model, the joint density p(X,Z) will be piecewise Gaussian, hence eq. 45 piecewise quadratic in Z. Let us concatenate all state variables across m and t into one long column vector z = (z1,1, . . . , zM,1, . . . , z1,T , . . . , zM,T ) T, arrange matrices A, W into large MT ×MT block tri-diagonal matrices, define dΩ := ( 1z1,1>0,1z2,1>0, . . . ,1zM,T>0 )T as an indicator vector with a 1 for all states zm,t > 0 and zeros otherwise, and DΩ := diag(dΩ) as the diagonal matrix formed from this vector. Collecting all terms quadratic, linear, or constant in z, we can then write down the optimization criterion in the form Q∗Ω(z) = − 1 2 [zT ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ ) z − zT (v0 +DΩv1)− (v0 +DΩv1)T z] + const. (46) In essence, the algorithm now iterates between the two steps: 1. Given fixedDΩ, solve z∗ = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 · (v0 +DΩv1) (47) 2. Given fixed z∗, recomputeDΩ until either convergence or one of several stopping criteria (partly likelihood-based, partly to avoid loops) is reached. The solution may afterwards be refined by one quadratic programming step. Numerical experiments showed this algorithm to be very fast and efficient (Durstewitz, 2017; Koppe et al., 2019). At z∗, an estimate of the state covariance is then obtained as the inverse negative Hessian, V = ( U0 +DΩU1 +U T 1DΩ +DΩU2DΩ )−1 . (48) In the M-step, using the proposal density q∗ from the E-step, the solution to the maximization problem θ∗ := arg max θ L(θ, q∗), can generally be expressed in the form θ∗ = (∑ t E [ αtβ T t ])(∑ t E [ βtβ T t ])−1 , (49) where, for the latent model, eq. 1, αt = zt and βt := [ zTt−1, φ(zt−1) T, sTt , 1 ]T ∈ R2M+K+1, and for the observation model, eq. 2, αt = xt and βt = g (zt). 6.1.6 More details on DS performance measure As argued before (Koppe et al., 2019; Wood, 2010), in DS reconstruction we require that the RNN captures the underlying attractor geometries and state space properties. This does not necessarily entail that the reconstructed system could predict future time series observations more than a few time steps ahead, and vice versa. For instance, if the underlying attractor is chaotic, even if we had the exact true system available, with a tiny bit of noise trajectories starting from the same initial condition will quickly diverge and ahead-prediction errors become essentially meaningless as a DS performance metric (Fig. S2B). To quantify how well an inferred PLRNN captured the underlying dynamics we therefore followed Koppe et al. (2019) and used the Kullback-Leibler divergence between the true and reproduced probability distributions across states in state space, thus assessing the agreement in attractor geometries (cf. Takens (1981); Sauer et al. 
(1991)) rather than in precise matching of time series, DKL (ptrue(x)‖pgen(x|z)) ≈ K∑ k=1 p̂ (k) true(x) log ( p̂ (k) true(x) p̂ (k) gen(x|z) ) , (50) where ptrue(x) is the true distribution of observations across state space (not time!), pgen(x|z) is the distribution of observations generated by running the inferred PLRNN, and the sum indicates a spatial discretization (binning) of the observed state space. We emphasize that p̂(k)gen(x|z) is obtained from freely simulated trajectories, i.e. drawn from the prior p̂(z) specified by eq. 1, not from the inferred posteriors p̂(z|xtrain). In addition, to assess reproduction of time scales by the inferred PLRNN, the average MSE between the power spectra of the true and generated time series was computed, as displayed in Fig. 3B–C. The measure DKL introduced above only works for situations where the ground truth ptrue(X) is known. Following Koppe et al. (2019), we next briefly indicate how a proxy for DKL may be obtained in empirical situations where no ground truth is available. Reasoning that for a well reconstructed DS the inferred posterior pinf(z|x) given the observations should be a good representative of the prior generative dynamics pgen(z), one may use the Kullback-Leibler divergence between the distribution over latent states, obtained by sampling from the prior density pgen(z), and the (dataconstrained) posterior distribution pinf(z|x) (where z ∈ RM×1 and x ∈ RN×1), taken across the system’s state space: DKL (pinf(z|x)‖pgen(z)) = ∫ z∈RM×1 pinf(z|x) log pinf(z|x) pgen(z) dz (51) As evaluating this integral is difficult, one could further approximate pinf(z|x) and pgen(z) by Gaussian mixtures across trajectories, i.e. pinf(z|x) ≈ 1T ∑T t=1 p(zt|x1:T ) and pgen(z) ≈ 1 L ∑L l=1 p(zl|zl−1), where the mean and covariance of p(zt|x1:T ) and p(zl|zl−1) are obtained by marginalizing over the multivariate distributions p(Z|X) and pgen(Z), respectively, yielding E[zt|x1:T ], E[zl|zl−1], and covariance matrices Var(zt|x1:T ) and Var(zl|zl−1). Supplementary eq. 51 may then be numerically approximated through Monte Carlo sampling (Hershey & Olsen, 2007) by DKL (pinf(z|x)‖pgen(z)) ≈ 1 n n∑ i=1 log pinf(z (i)|x) pgen(z(i)) , z(i) ∼ pinf(z|x) (52) Alternatively, there is also a variational approximation of eq. 51 available (Hershey & Olsen, 2007): DvariationalKL (pinf(z|x)‖pgen(z)) ≈ 1 T T∑ t=1 log ∑T j=1 e −DKL(p(zt|x1:T )‖p(zj |x1:T ))∑T k=1 e −DKL(p(zt|x1:T )‖p(zk|zk−1)) , (53) where the KL divergences in the exponentials are among Gaussians for which we have an analytical expression. 6.1.7 More details on benchmark tasks and model comparisons We compared the performance of our rPLRNN to the other models summarized in Suppl. Table 1 on the following three benchmarks requiring long short-term maintenance of information (Talathi & Vartak (2016); Hochreiter & Schmidhuber (1997)): 1) The addition problem of time length T consists of 100 000 training and 10 000 test samples of 2× T input series S = {s1, . . . , sT }, where entries s1,: ∈ [0, 1] are drawn from a uniform random distribution and s2,: ∈ {0, 1} contains zeros except for two indicator bits placed randomly at times t1 < 10 and t2 < T/2. Constraints on t1 and t2 are chosen such that every trial requires a long memory of at least T/2 time steps. At the last time step T , the target output of the network is the sum of the two inputs in s1,: indicated by the 1-entries in s2,:, x target T = s1,t1 + s1,t2 . 
2) The multiplication problem is the same as the addition problem, only that the product instead of the sum has to be produced by the RNN as an output at time T , xtargetT = s1,t1 · s1,t2 . 3) The MNIST dataset (LeCun et al., 2010) consists of 60 000 training and 10 000 28 × 28 test images of hand written digits. To make this a time series problem, in sequential MNIST the images are presented sequentially, pixel-by-pixel, scanning lines from upper left to bottom-right, resulting in time series of fixed length T = 784. For training on the addition and multiplication problems, the mean squared-error loss across R samples, L = 1R ∑R n=1 ( x̂ (n) T − x (n) T )2 , between estimated and actual outputs was used, while the cross-entropy loss L = ∑R n=1 ( − ∑10 i=1 x (n) i,T log(p̂ (n) i,T ) ) was employed for sequential MNIST, where p̂i,t := p̂t (xi,t = 1|zt) = ( eBi,:zt ) N∑ j=1 eBj,:zt −1 , (54) with xi,t ∈ {0, 1}, ∑ i xi,t = 1. We remark that as long as the observation model takes the form of a generalized linear model (Fahrmeir & Tutz, 2001), as assumed here, meaning may be assigned to the latent states zm by virtue of their association with specific sets of observations xn through the factor loading matrix B. This adds another layer of model interpretability (besides its accessibility in DS terms). The large error bars in Fig. 2 at the transition from good to bad performance result from the fact that the networks mostly learn these tasks in an all-or-none fashion. While the rPLRNN in general outperformed the pure initialization-based models (iRNN, npRNN, iPLRNN), confirming that a manifold attractor subspace present at initialization may be lost throughout training, we conjecture that this difference in performance will become even more pronounced as noise levels or task complexity increase. 6.1.8 More details on single neuron model The neuron model used in section 4.2 is described by −CmV̇ = gL(V − EL) + gNam∞(V )(V − ENa) + gKn(V − EK) + gMh(V − EK) + gNMDAσ(V )(V − ENMDA) (55) ḣ = h∞(V )− h
1. What is the focus of the paper regarding regularization schemes for training vanilla Relu RNN? 2. What are the strengths of the proposed approach, particularly in connecting RNN dynamics and gradient theoretically? 3. What are the weaknesses of the paper, especially regarding the setting of RNN and working memory? 4. Do you have any questions regarding the regularization scheme and its effectiveness in tackling exploding and vanishing gradients? 5. What are some minor concerns regarding the paper's content?
Review
Review This paper proposes a regularization scheme for training vanilla ReLU RNNs to tackle the exploding and vanishing gradients issue. The work eases the analysis of RNNs from a dynamical systems point of view and theoretically connects the RNN dynamics with its gradients. The experiments show competitive performance compared to LSTMs. Vanilla RNNs simplify the analysis as dynamical systems since they lack the gates of LSTMs and GRUs, but they suffer from exploding or vanishing gradients. The idea of tackling this issue while keeping the simplicity sounds very interesting and useful. I am leaning toward accepting this manuscript if the authors could address my concerns. Sec 3.2. The authors mentioned the connection between the particular setting of the RNN and working memory. This setting leads to a system without any autonomous dynamics, which is not a typical model for working memory (e.g. attracting fixed points, line attractors, etc.). I disagree that the space is neutrally stable, since all the states are sensitive to perturbation and cannot maintain a stable memory. Sec 3.2. Following the so-called "neutrally stable" setting, why does the term on A in Eq. 3 only regularize the diagonal rather than penalize the deviation from the identity matrix? This regularization does not lead to the tendency of A -> I mentioned in the text. Furthermore, the regularization does not guarantee either A -> I or W -> 0, so the resulting Mreg subspace does not have the "memory" property. Eq. 3. How is Mreg determined? Minor: iPLRNN is used before it is defined.
ICLR
Title CaPC Learning: Confidential and Private Collaborative Learning Abstract Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other’s data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multiparty computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.1 1Code is available at: https://github.com/cleverhans-lab/capc-iclr.
1 INTRODUCTION The predictions of machine learning (ML) systems often reveal private information contained in their training data (Shokri et al., 2017; Carlini et al., 2019) or test inputs. Because of these limitations, legislation increasingly regulates the use of personal data (Mantelero, 2013). The relevant ethical concerns prompted researchers to invent ML algorithms that protect the privacy of training data and confidentiality of test inputs (Abadi et al., 2016; Konečný et al., 2016; Juvekar et al., 2018). [Footnotes: ∗,‡ Equal contributions, authors ordered alphabetically. † Work done while the author was at Vector Institute. 1 Code is available at: https://github.com/cleverhans-lab/capc-iclr.] [Figure (protocol overview). Step 1a: each answering party evaluates Enc(q) on its model Mi and outputs encrypted logits Enc(ri). Step 1b: each answering party, Pi, generates a random vector r̂i and sends Enc(ri − r̂i) to the querying party, Pi∗, who decrypts to get ri − r̂i. Step 1c: each answering party Pi runs Yao's garbled circuit protocol (Yi) with querying party Pi∗ to get si for Pi∗ and ŝi for Pi such that si + ŝi is the one-hot encoding of the argmax of the logits. Step 2: each answering party sends ŝi to the privacy guardian (PG); the PG sums the ŝi from each Pi and adds Laplacian or Gaussian noise for DP, while the querying party sums the si from each Yi computation. Step 3: the PG and the querying party run Yao's garbled circuit Ys to obtain the argmax of the querying party's and the PG's noisy shares; the label is output to the querying party.] Yet, these algorithms require a large dataset stored either in a single location or distributed amongst billions of participants. This is the case for example with federated learning (McMahan et al., 2017). Prior algorithms also assume that all parties are collectively training a single model with a fixed architecture. These requirements are often too restrictive in practice. For instance, a hospital may want to improve a medical diagnosis for a patient using data and models from other hospitals. In this case, the data is stored in multiple locations, and there are only a few parties collaborating. Further, each party may also want to train models with different architectures that best serve their own priorities. We propose a new strategy that lets fewer heterogeneous parties learn from each other collaboratively, enabling each party to improve their own local models while protecting the confidentiality and privacy of their data. We call this Confidential and Private Collaborative (CaPC) learning. Our strategy improves on confidential inference (Boemer, 2020) and PATE, the private aggregation of teacher ensembles (Papernot et al., 2017). Through structured applications of these two techniques, we design a strategy for inference that enables participants to operate an ensemble of heterogeneous models, i.e. the teachers, without having to explicitly join each party’s data or teacher model at a single location. This also gives each party control at inference, because inference requires the agreement and participation of each party. In addition, our strategy provides measurable confidentiality and privacy guarantees, which we formally prove.
We use the running example of a network of hospitals to illustrate our approach. The hospitals participating in CaPC protocol need guarantees on both confidentiality (i.e., data from a hospital can only be read by said hospital) and privacy (i.e., no hospital can infer private information about other hospitals’ data by observing their predictions). First, one hospital queries all the other parties over homomorphic encryption (HE), asking them to label an encrypted input using their own teacher models. This can prevent the other hospitals from reading the input (Boemer et al., 2019), an improvement over PATE, and allows the answering hospitals to provide a prediction to the querying hospital without sharing their teacher models. The answering hospitals use multi-party computation (MPC) to compute an aggregated label, and add noise during the aggregation to obtain differential privacy guarantees (Dwork et al., 2014). This is achieved by a privacy guardian (PG), which then relays the aggregated label to the querying hospital. The PG only needs to be semi-trusted: we operate under the honest-but-curious assumption. The use of MPC ensures that the PG cannot decipher each teacher model’s individual prediction, and the noise added via noisy argmax mechanism gives differential privacy even when there are few participants. This is a significant advantage over prior decentralized approaches like federated learning, which require billions of participants to achieve differential privacy, because the sensitivity of the histogram used in our aggregation is lower than that of the gradients aggregated in federated learning. Unlike our approach, prior efforts involving few participants thus had to prioritize model utility over privacy and only guarantee confidentiality (Sheller et al., 2020). Finally, the querying hospital can learn from this confidential and private label to improve their local model. Since the shared information is a label rather than a gradient, as used by federated learning, CaPC participants do not need to share a common model architecture; in fact, their architectures can vary throughout the participation in the protocol. This favors model development to a degree which is not possible in prior efforts such as federated learning. We show how participants can instantiate various forms of active and online learning with the labels returned by our protocol: each party participating in the CaPC protocol may (a) identify deficiencies of its model throughout its deployment and (b) finetune the model with labels obtained by interacting with other parties. Intuitively, we achieve the analog of a doctor querying colleagues for a second opinion on a difficult diagnostic, without having to reveal the patient’s medical condition. This protocol leads to improvements in both the accuracy and fairness (when there is a skew in the data distribution of each participating hospital) of model predictions for each of the CaPC participants. To summarize, our contributions are the following: • We introduce CaPC learning: a confidential and private collaborative learning platform that provides both confidentiality and privacy while remaining agnostic to ML techniques. • Through a structured application of homomorphic encryption, secure MPC, and private aggregation, we design a protocol for CaPC. We use two-party deep learning inference and design an implementation of the noisy argmax mechanism with garbled circuits. 
• Our experiments on SVHN and CIFAR10 demonstrate that CaPC enables participants to collaborate and improve the utility of their models, even in the heterogeneous setting where the architectures of their local models differ, and when there are only a few participants. • Further, when the distribution of data drifts across participating parties, we show that CaPC significantly improves fairness metrics because querying parties benefit from knowledge learned by other parties on different data distributions, which is distilled in their predictions. • We release the source code for reproducing all our experiments. 2 BACKGROUND Before introducing CaPC, we first go over elements of cryptography and differential privacy that are required to understand it. Detailed treatment of these topics can be found in Appendices A and B. 2.1 CRYPTOGRAPHIC PRELIMINARIES FOR CONFIDENTIALITY The main cryptographic tool used in CaPC is secure multi-party computation (MPC) (Yao, 1986). MPC allows a set of distrusting parties to jointly evaluate a function on their input without revealing anything beyond the output. In general, most practical MPC protocols can be classified into two categories: 1) generic MPC protocols that can compute any function with the above security goal (Malkhi et al., 2004); and 2) specialized MPC protocols that can be used to compute only selected functions (e.g., private set intersection (Pinkas et al., 2020), secure machine learning (Mohassel & Zhang, 2017)). Although specialized MPC protocols are less general, they are often more efficient in execution time. Protocols in both categories use similar cryptographic building blocks, including (fully) homomorphic encryption (Gentry, 2009), secret sharing (Shamir, 1979), oblivious transfer (Rabin, 2005), garbled circuits (Yao, 1986). To understand our protocol, it is not necessary to know all details about these cryptographic building blocks and thus we describe them in Appendix A.1. Our work uses these cryptographic preliminaries for secure computation at prediction time, unlike recent approaches, which explore new methods to achieving confidentiality at training time (Huang et al., 2020a;b). The cryptographic protocol designed in this paper uses a specialized MPC protocol for securely evaluating a private ML model on private data, and a generic two-party computation protocol to compute an argmax in different forms. For the generic two-party computation, we use a classical Yao’s garbled-circuit protocol that can compute any function in Boolean circuit. For secure classification of neural networks, our protocol design is flexible to work with most existing protocols (Boemer et al., 2020; 2019; Gilad-Bachrach et al., 2016; Mishra et al., 2020). Most existing protocols are different in how they handle linear layers (e.g. convolution) and non-linear layers (e.g. ReLU). For instance, one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation, or using classical MPC techniques with more communication but less computation. Other works (Juvekar et al., 2018) use a hybrid of both and thus enjoy further improvement in performance (Mishra et al., 2020). We discuss it in more details in Appendix A.2. 2.2 DIFFERENTIAL PRIVACY Differential privacy is the established framework for measuring the privacy leakage of a randomized algorithm (Dwork et al., 2006). 
In the context of machine learning, it requires the training algorithm to produce statistically indistinguishable outputs on any pair of datasets that only differ by one data point. This implies that an adversary observing the outputs of the training algorithm (e.g., the model’s parameters, or its predictions) can improve its guess at most by a bounded probability when inferring properties of the training data points. Formally, we have the following definition. Definition 1 (Differential Privacy). A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any subset S ⊆ R and any adjacent datasets d, d′ ∈ D, i.e. ‖d− d′‖1 ≤ 1, the following inequality holds: Pr [M(d) ∈ S] ≤ eεPr [M(d′) ∈ S] + δ (1) In our work, we obtain differential privacy by post-processing the outputs of an ensemble of models with the noisy argmax mechanism of Dwork et al. (2014) (for more details on differential privacy, please refer to Appendix B), à la PATE (Papernot et al., 2017). We apply the improved analysis of PATE (Papernot et al., 2018) to compute the privacy guarantees obtained (i.e., a bound on ε). Our technique differs from PATE in that each of the teacher models is trained by different parties whereas PATE assumes a centralized learning setting where all of the training and inference is performed by a single party. Note that our technique is used at inference time, which differs from recent works in differential privacy that compare neuron pruning during training with mechanisms satisfying differential privacy (Huang et al., 2020c). We use cryptography to securely decentralize computations. 3 THE CAPC PROTOCOL We now introduce our protocol for achieving both confidentiality and privacy in collaborative (CaPC) learning. To do so, we formalize and generalize our example of collaborating hospitals from Section 1. 3.1 PROBLEM DESCRIPTION A small number of parties {Pi}i∈[1,K], each holding a private dataset Di = {(xj , yj or∅)j∈[1,Ni]} and capable of fitting a predictive modelMi to it, wish to improve the utility of their individual models via collaboration. Due to the private nature of the datasets in question, they cannot directly share data or by-products of data (e.g., model weights) with each other. Instead, they will collaborate by querying each other for labels of the inputs about which they are uncertain. In the active learning paradigm, one party Pi∗ poses queries in the form of data samples x and all the other parties {Pi}i 6=i∗ together provide answers in the form of predicted labels ŷ. Each model {Mi}i∈[1,K] can be exploited in both the querying phase and the answering phase, with the querying party alternating between different participants {Pi}i∈[1,K] in the protocol. Threat Model. To obtain the strong confidentiality and privacy guarantees that we described, we require a semi-trusted third party called the privacy guardian (PG). We assume that the PG does not collude with any party and that the adversary can corrupt any subset of C parties {Pi}i∈[1,C]. When more than one party gets corrupted, this has no impact on the confidentiality guarantee, but the privacy budget obtained will degrade by a factor proportional to C because the sensitivity of the aggregation mechanism increases (see Section 3.3). We work in the honest-but-curious setting, a commonly adopted assumption in cryptography which requires the adversary to follow the protocol description correctly but will try to infer information from the protocol transcript. 
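To make the noisy-argmax mechanism referenced in Section 2.2 concrete, the following is a minimal plaintext sketch of that aggregation over the answering parties' votes. It ignores the cryptographic layers of the protocol entirely; the function names, the use of NumPy, and the example numbers are our own illustration and not the paper's implementation.

```python
import numpy as np

def noisy_argmax(teacher_labels, num_classes, scale, noise="gaussian", rng=None):
    """PATE-style noisy argmax over one predicted label per answering party.

    teacher_labels: list of predicted class indices, one per answering party.
    scale: noise scale (sigma for Gaussian, b for Laplacian); a larger scale
           gives stronger differential privacy for the parties' training data.
    """
    rng = rng or np.random.default_rng()
    # Histogram of votes. Changing one training point of one party can move
    # at most one vote, so the sensitivity of this histogram is 1.
    hist = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    if noise == "gaussian":
        hist += rng.normal(0.0, scale, size=num_classes)
    else:
        hist += rng.laplace(0.0, scale, size=num_classes)
    # Only the argmax of the noisy histogram is released.
    return int(np.argmax(hist))

# Example: 10 answering parties voting over 3 classes.
votes = [2, 2, 1, 2, 0, 2, 2, 1, 2, 2]
label = noisy_argmax(votes, num_classes=3, scale=1.0)
```

When the consensus among parties is strong, the plurality label survives the added noise with high probability, which is what allows the data-dependent PATE analysis to charge a small privacy cost per query.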
3.2 CAPC PROTOCOL DESCRIPTION Our protocol introduces a novel formulation of the private aggregation of teachers, which implements two-party confidential inference and secret sharing to improve upon the work of Papernot et al. (2017) and guarantee confidentiality. Recall that the querying party Pi∗ initiates the protocol by sending an encrypted input x to all answering parties Pi, i ≠ i∗. We use sk and pk to denote the secret and public keys owned by party Pi∗. The proposed protocol consists of the following steps: 1. For each i ≠ i∗, Pi (with model parameters Mi as its input) and Pi∗ (with x, sk, pk as its input) run a secure two-party protocol. As the outcome, Pi obtains ŝi and Pi∗ obtains si such that si + ŝi = OneHot(arg max(ri)), where ri are the predicted logits. This step could be achieved by the following: a) Pi∗ and Pi run a secure two-party ML classification protocol such that Pi∗ learns nothing while Pi learns Encpk(ri), where ri are the predicted logits. b) Pi generates a random vector r̂i, performs the following computation on the encrypted data Encpk(ri) − Encpk(r̂i) = Encpk(ri − r̂i), and sends the encrypted difference to Pi∗, who decrypts and obtains (ri − r̂i). c) Pi (with r̂i as input) and Pi∗ (with ri − r̂i as input) engage in Yao's two-party garbled-circuit protocol to obtain vector si for Pi∗ and vector ŝi for Pi, such that si + ŝi = OneHot(arg max(ri)). 2. Pi sends ŝi to the PG. The PG computes ŝ = ∑i≠i∗ ŝi + DPNoise(), where DPNoise() is element-wise Laplacian or Gaussian noise whose variance is calibrated to obtain a desired differential privacy guarantee ε; whereas Pi∗ computes s = ∑i≠i∗ si. 3. The PG and Pi∗ engage in Yao's two-party garbled-circuit protocol for computing the argmax: Pi∗ gets arg max(ŝ + s) and the PG gets nothing. Next, we elaborate on the confidentiality and privacy guarantees achieved by CaPC. 3.3 CONFIDENTIALITY AND DIFFERENTIAL PRIVACY GUARANTEES Confidentiality Analysis. We prove in Appendix E that the above protocol reveals nothing to Pi or the PG and only reveals the final noisy results to Pi∗. The protocol is secure against a semi-honest adversary corrupting any subset of parties. Intuitively, the proof can be easily derived based on the security of the underlying components, including the two-party classification protocol, secret sharing, and Yao's garbled circuit protocol. As discussed in Section 4.1 and Appendix A.1, for secret sharing of unbounded integers, we need to make sure the random padding is picked from a domain much larger than the maximum possible value being shared. Given the above, a corrupted Pi∗ cannot learn anything about Mi of the honest party due to the confidentiality guarantee of the secure classification protocol; similarly, the confidentiality of x against a corrupted Pi is also protected. Intermediate values are all secretly shared (and only recovered within garbled circuits) so they are not visible to any party. Differential Privacy Analysis. Here, any potential privacy leakage in terms of differential privacy is incurred by the answering parties {Pi}i≠i∗ for their datasets {Di}i≠i∗, because these parties share the predictions of their models. Before sharing these predictions to Pi∗, we follow the PATE protocol: we compute the histogram of label counts ŷ, then add Laplacian or Gaussian noise using a sensitivity of 1, and finally return the argmax of ŷσ to Pi∗.
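In plaintext arithmetic, steps 1–3 amount to additively splitting each party's one-hot argmax vote into two shares and recombining the shares only inside the final noisy argmax, so that neither the PG nor the querying party ever sees an individual party's vote. The sketch below checks this bookkeeping with integer vectors; it deliberately omits the HE and garbled-circuit machinery, and all names and numeric choices are illustrative rather than taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, num_answering = 10, 5

def one_hot_argmax(logits):
    vote = np.zeros(num_classes, dtype=np.int64)
    vote[int(np.argmax(logits))] = 1
    return vote

query_shares, guardian_shares = [], []
for _ in range(num_answering):
    logits = rng.normal(size=num_classes)                 # party's private logits r_i
    vote = one_hot_argmax(logits)                         # OneHot(argmax(r_i))
    s_hat = rng.integers(-2**31, 2**31, num_classes)      # random share kept by P_i, later sent to the PG
    s_i = vote - s_hat                                    # share learned by the querying party
    guardian_shares.append(s_hat)
    query_shares.append(s_i)

# Step 2: the PG sums its shares and adds DP noise (sigma chosen arbitrarily here);
# the querying party independently sums its own shares.
noisy_pg_share = np.sum(guardian_shares, axis=0) + rng.normal(0.0, 7.0, num_classes)
query_share = np.sum(query_shares, axis=0)

# Step 3: only the argmax of the recombined, noisy histogram is revealed.
label = int(np.argmax(noisy_pg_share + query_share))
```

Because each share is uniformly random on its own, the partial sums held by the PG and by the querying party reveal nothing about any single party's prediction; the noisy plurality label is the only value that leaves the final garbled-circuit computation.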
Since Pi∗ only sees this noisily aggregated label, both the data-dependent and data-independent differential privacy analyses of PATE apply to Pi∗ (Papernot et al., 2017; 2018). Thus, when there are enough parties with high consensus, we can obtain a tighter bound on the privacy budget as the true plurality will more likely be returned (refer to Appendix B for more details on how this is achieved in PATE). This setup assumes that only one answering party can be corrupted. If instead C parties are corrupted, the sensitivity of the noisy aggregation mechanism will be scaled by C and the privacy guarantee will deteriorate. There is no privacy leakage to the PG; it does not receive any part of the predictions from {Pi}i≠i∗. 4 EXPERIMENTS CaPC aims to improve the model utility of collaborating parties by providing them with new labelled data for training their respective local models. Since we designed the CaPC protocol with techniques for confidentiality (i.e., confidential inference and secret sharing) and differential privacy (i.e., private aggregation), our experiments consider the following three major dimensions: 1. How well does collaboration improve the model utility of all participating parties? 2. What requirements are there to achieve privacy and how can these be relaxed under different circumstances? What is the trade-off between the privacy and utility provided by CaPC? 3. What is the resulting computational cost for ensuring confidentiality? 4.1 IMPLEMENTATION We use the HE-transformer library with MPC (MP2ML) by Boemer (2020) in step 1a of our protocol for confidential two-party deep learning inference. To make our protocol flexible to any private inference library, not just those that return the label predicted by the model (HE-transformer only returns logits), we incorporate steps 1b and 1c of the protocol outside of the private inference library. The EMP toolkit (Wang et al., 2016) for generic two-party computation is used to compute the operations including argmax and sum via the garbled circuits. To secret share the encrypted values, we first convert them into integers over a prime field according to the CKKS parameters, and then perform secret sharing on that domain to obtain perfect secret sharing. We use the single largest logit value for each Mi obtained on its training set Di in plain text to calculate the necessary noise. 4.2 EVALUATION SETUP Collaboration. We use the following for experiments unless otherwise noted. We uniformly sample from the training set in use (for the SVHN dataset, we combine its original training set and extra set to get a larger training set), without replacement, to create disjoint partitions, Di, of equal size and identical data distribution for each party. We select K = 50 and K = 250 as the number of parties for CIFAR10 and SVHN, respectively (the number is larger for SVHN because we have more data). We select Q = 3 querying parties, Pi∗, and similarly divide part of the test set into Q separate private pools for each Pi∗ to select queries, until their privacy budget of ε is reached (using Gaussian noise with σ = 40 on SVHN and 7 on CIFAR10). We are left with 1,000 and 16,032 evaluation data points from the test sets of CIFAR10 and SVHN, respectively. We fix ε = 2 and 20 for SVHN and CIFAR10, respectively (which leads to ≈ 550 queries per party), and report accuracy on the evaluation set. Querying models are retrained on their Di plus the newly labelled data; the difference in accuracies is their accuracy improvement.
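As a rough illustration of the evaluation setup just described, the snippet below partitions a training set into K disjoint, approximately identically distributed shards. It is a sketch of the sampling procedure only; the dataset size, the value of K, and the helper name are placeholders, and the real pipeline additionally builds the per-querying-party test pools and performs retraining.

```python
import numpy as np

def make_disjoint_partitions(num_examples, num_parties, seed=0):
    """Uniformly shuffle example indices and split them into `num_parties`
    disjoint shards of (almost) equal size, as in the CaPC evaluation setup."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_examples)
    return np.array_split(idx, num_parties)

# Hypothetical numbers mirroring the paper's CIFAR10 setting: K = 50 parties.
partitions = make_disjoint_partitions(num_examples=50_000, num_parties=50)
assert sum(len(p) for p in partitions) == 50_000           # shards cover the data
assert len(set(partitions[0]) & set(partitions[1])) == 0   # shards are disjoint
```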
We use shallower variants of VGG, namely VGG-5 and VGG-7 for CIFAR10 and SVHN, respectively, to accommodate the small size of each party’s private dataset. We instantiate VGG-7 with 6 convolutional layers and one final fully-connected layer, thus there are 7 functional layers overall. Similarly, VGG-5 has 4 convolutional layers followed by a fully connected layer. The ResNet-10 architecture starts with a single convolutional layer, followed by 4 basic blocks with 2 convolutional layers in each block, and ends with a fully-connected layer, giving 10 functional layers in total. The ResNet-8 architecture that we use excludes the last basic block and increases the number of neurons in the last (fully-connected) layer. We present more details on architectures in Appendix F.2. We first train local models for all parties using their non-overlapping private datasets. Next, we run the CaPC protocol to generate query-answer pairs for each querying party. Finally, we retrain the local model of each querying party using the combination of their original private dataset and the newly obtained query-answer pairs. We report the mean accuracy and class-specific accuracy averaged over 5 runs for all retrained models, where each uses a different random seed. Heterogeneity and Data Skew. Where noted, our heterogeneous experiments (recall that this is a newly applicable setting that CaPC enables) use VGG-7, ResNet-8 and ResNet-10 architectures for K/3 parties each. One model of each architecture is used for each of the Q = 3 querying parties. Our data skew experiments use 80% fewer data samples for the classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and 90% less data for the classes 1 and 2 on SVHN. In turn, unfair ML algorithms perform worse on these specific classes, leading to worse balanced accuracy (see Appendix D). We adopt balanced accuracy instead of other fairness metrics because the datasets we use have no sensitive attributes, making them inapplicable. We employ margin, entropy, and greedy k-center active learning strategies (described in Appendix C) to encourage ML algorithms to sample more queries from regimes that have been underrepresented and to improve their fairness performance. 4.3 COLLABORATION ANALYSIS We first investigate the benefits of collaboration for improving each party’s model performance in several different settings, namely: homogeneous and heterogeneous model architectures across querying and answering parties, and uniform and non-uniform data sampling for training data. From these experiments, we observe: increased accuracy in both homogeneous settings and heterogeneous settings for all model architectures (Section 4.3.1) and improved balanced accuracy when there is data skew between parties, i.e., non-uniform private data (Section 4.3.2). 4.3.1 UNIFORMLY SAMPLED PRIVATE DATA The first setting we consider is a uniform distribution of data amongst the parties; there is no data drift among parties. Our setup for the uniform data distribution experiments is detailed in Section 4.2. We evaluate the per-class and overall accuracy before and after CaPC in both homogeneous and heterogeneous settings on the CIFAR10 and SVHN datasets. In Figure 2, we see there is a consistent increase in accuracy for each class and overall in terms of mean accuracy across all parties on the test sets.
We observe these improvements in both the homogeneous and heterogeneous settings for both datasets tested. As demonstrated in Figure 2, there is a greater climb in mean accuracy for the heterogeneous setting than the homogeneous setting on SVHN. Figures 5, 6, and 7 provide a breakdown of the benefits obtained by each querying party. We can see from these figures that all querying parties observe an increase in overall accuracy in heterogeneous and homogeneous settings with both datasets; additionally, the jump in accuracy is largely constant between different model architectures. In only 6.67% of all cases were any class-specific accuracies degraded, but they still showed a net increase in overall model accuracy. 4.3.2 NON-UNIFORMLY SAMPLED PRIVATE DATA In this section, we focus our analysis on two types of data skew between parties: varying size of data per class and total size of data provided; the setup is described in Section 4.2. To analyze data skew, we explore the balanced accuracy (which measures mean recall on a per-class basis, see Appendix D). We use balanced accuracy in order to investigate aggregate fairness gains offered by CaPC. Random sampling from non-uniform distributions leads to certain pitfalls: e.g., underrepresented classes are not specifically targeted in sampling. Thus, we additionally utilize active learning techniques, namely entropy, margin, and greedy-k-center (see Definitions 6-8 in Appendix C), and analyze balanced accuracy with each strategy. In Figure 3, we see that CaPC has a significant impact on the balanced accuracy when there is data skew between the private data of participating parties. Even random sampling can drastically improve balanced accuracy. Leveraging active learning techniques, we can achieve additional benefits in balanced accuracy. In particular, we observe that entropy and margin sampling achieves the greatest improvement over random sampling in per-class accuracy for the less represented classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and classes 1 and 2 on SVHN. These enhancements can be explained by the underlying mechanisms of margin and entropy sampling because the less-represented classes have a higher margin/entropy; the queries per class for each method are shown in Figure 9. Through these experiments, we show that in data skew settings, the CaPC protocol can significantly improve the fair performance of models (as measured by balanced accuracy), especially when combined with active learning techniques. Note that we see similar trends with (normal) accuracy as well. 4.4 PRIVACY VERSUS UTILITY We now study the trade-off between privacy and utility of our obtained models. Recall that we add Gaussian (or Laplacian) noise to the aggregate of predicted labels of all parties. Under the uniform setting, we choose the standard deviation σ by performing a (random) grid search and choosing the highest noise before a significant loss in accuracy is observed. In doing so, each query uses minimal ε while maximizing utility. Figure 11 in Appendix F shows a sample plot for K = 250 models. For more details on how ε is calculated, please refer to Appendix B. As we increase the number of parties, we can issue more queries for a given privacy budget (ε) which leads to a higher accuracy gain. In Figure 4, we report the accuracy gain achieved using CaPC with various numbers of parties, K. With a fixed total dataset size, increasing the number of parties decreases their training data size, leading to worse performing models. 
These models see the largest benefit from CaPC but, importantly, we always see a net improvement across all values of K.
Number of parties: 150, 200, 250, 300, 400
Accuracy gain (%): 0.62, 1.45, 2.39, 3.07, 3.87
Best ε: 3.50, 3.32, 2.60, 2.40, 1.91
4.5 COMPUTATIONAL COSTS OF CONFIDENTIALITY The incorporation of confidentiality in CaPC increases computational costs. We segment the analysis of computational overhead of CaPC into three parts corresponding to sequential steps in the protocol: (1) inference, (2) secret sharing between each querying and answering party, and (3) secret sharing between the querying party and the PG. Each of these steps is analyzed in terms of the wall-clock time (in seconds). We use the default encryption setting in HE-transformer and vary the modulus range, N, which denotes the max value of a given plain text number to increase the maximum security level possible. HE-transformer only supports inference on CPUs and is used in step (1). Step (1) with neural network inference using MPC incurs the highest CPU and network costs (see Table 1 and Figure 13 in Appendix F). Even the base level of security increases computational cost by 100X, and high security levels see increases up to 1000X, in comparison to the non-encrypted inference on CPU. Compared to step (1), the rest of the CaPC protocol incurs a negligible overhead to perform secret sharing. Overall, CaPC incurs only a low additional cost over the underlying MP2ML framework, as shown in Figure 13, which enables applicability and scalability as these tools progress. 5 DISCUSSION AND CONCLUSIONS CaPC is a secure and private protocol that protects both the confidentiality of test data and the privacy of training data, which are desired in applications like healthcare and finance. Our framework facilitates collaborative learning using heterogeneous model architectures and separate private datasets, even if the number of parties involved is small. It offers notable advantages over recent methods for learning with multiple participants, such as federated learning, which assumes training of a single fixed model architecture. CaPC does not assume a homogeneous model architecture and allows parties to separately and collaboratively train different models optimized for their own purposes. Federated learning also requires a large number of parties while CaPC provides gains in accuracy with significantly fewer participants, even in contexts where each party already has a model with high accuracy. Notably, CaPC incurs low overhead on top of underlying tools used for secure neural network inference. Through our experiments, we also demonstrate that CaPC facilitates collaborative learning even when there exists non i.i.d (highly skewed) private data among parties. Our experiments show that CaPC improves on the fair performance of participating querying models as indicated by improvements in the balanced accuracy, a common fairness metric. Further, we observe a significant increase in per-class accuracy on less-represented classes on all datasets tested. Notably, CaPC is easily configured to leverage active learning techniques to achieve additional fairness improvement gains or to learn from other heterogeneous models trained with fairness techniques, e.g., with synthetic minority oversampling (Chawla et al., 2002). In future work, we look to analyzing the fairness implications of CaPC in contexts where there is discrimination over a private dataset’s sensitive attributes, not just class labels.
In these cases, other fairness metrics like equalized odds and equal opportunity (see Appendix D) can be explored. We note some limitations of the proposed protocol. HE-transformer does not prevent leaking certain aspects of the model architecture, such as the type of non-linear activation functions and presence of MaxPooling layers. CaPC improves upon existing methods in terms of the necessary number of parties; however, it would be favorable to see this number decreased under 50 for better flexibility and applicability in practice. In the face of this last limitation, when there are few physical parties, we can generate a larger number of virtual parties for CaPC, where each physical party subdivides their private dataset into disjoint partitions and trains multiple local models. This would allow CaPC to tolerate more noise injected during aggregation and provide better privacy guarantees. Note that each physical party could select queries using a dedicated strong model instead of the weak models used for answering queries in CaPC. This setting is desirable in cases where separate models are required within a single physical party, for example, in a multi-national organization with per-country models. ACKNOWLEDGMENTS We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Microsoft, Intel, CIFAR through the Canada CIFAR AI Chair and AI catalyst programs, NFRF through an Exploration grant, and NSERC COHESA Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners. Finally, we would like to thank members of CleverHans Lab for their feedback, especially: Tejumade Afonja, Varun Chandrasekaran, Stephan Rabanser, and Jonas Guan. A MORE BACKGROUND ON CRYPTOGRAPHY A.1 CRYPTOGRAPHIC BUILDING BLOCKS Homomorphic encryption. Homomorphic encryption defines an encryption scheme such that the encryption and decryption functions are homomorphic between plaintext and ciphertext spaces. Although it is known that fully homomorphic encryption can be constructed based on lattice-based assumptions, most applications only require a weaker version with bounded number of multiplications on each ciphertext. Schemes with this constraint are much more practical, including for example, BGV (Brakerski et al., 2014), CKKS (Cheon et al., 2017), etc. Secret sharing. Secret sharing denotes a scheme in which a datum, the secret, is shared amongst a group of parties by dividing the secret into parts such that each party only has one part, or ‘share’ of the secret. The secret can only be recovered if a certain number of parties conspire to combine their shares. It is easy to construct secret sharing modulo a positive integer. If the application does not allow modular operation, one can still achieve statistically secure secret sharing by using random shares that are much larger than the secret being shared (Evans et al., 2011). Oblivious transfer. Oblivious transfer involves two parties: the sending party and the receiving party. The sending party has two pieces of information, s0 and s1, and the receiver wants to receive sb, where b ∈ {0, 1}, such that the sending party cannot learn b and the receiving party cannot learn s¬b. 
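Referring back to the secret-sharing paragraph above, the following is a minimal sketch of 2-out-of-2 additive secret sharing modulo a prime, the standard modular construction alluded to there; the modulus and function names are our own illustrative choices, not taken from the paper's implementation.

```python
import secrets

P = 2_147_483_647  # a Mersenne prime, used here only as an example modulus

def share(secret, modulus=P):
    """Split `secret` into two additive shares. Each share on its own is
    uniformly distributed over the field and reveals nothing about the secret."""
    r = secrets.randbelow(modulus)
    return r, (secret - r) % modulus

def reconstruct(share_a, share_b, modulus=P):
    return (share_a + share_b) % modulus

a, b = share(42)
assert reconstruct(a, b) == 42
```

The statistically secure variant mentioned in the text replaces the modular reduction with shares drawn from a range much larger than the secret, which is why the random padding must dominate the largest value being shared.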
In general, oblivious transfer requires public-key operations, however, it is possible to execute a large number of oblivious transfers with only a very small number of public-key operations based on oblivious transfer extension (Ishai et al., 2003). Garbled circuits. In Yao’s garbled circuit protocol for two-party computation, each of the two parties assumes a role, that of garbler or that of evaluator. The function f on which to compute each of the two parties’ inputs is described as a Boolean circuit. The garbler randomly generates aliases (termed labels) representing 0 and 1 in the Boolean circuit describing f and replaces the binary values with the generated labels for each wire in the circuit. At each gate in the circuit, which can be viewed as a truth table, the garbler uses the labels of each possible combination of inputs to encrypt the corresponding outputs, and permutes the rows of the truth table. The garbler then uses the generated labels for 0 and 1 to encode their own input data and sends these labels and the garbled Boolean circuit to the evaluator. The evaluator now converts their binary input data to the corresponding labels through a 1-2 oblivious transfer protocol with the garbler. After receiving the labels for their input, the evaluator evaluates the garbled circuit by trying to decrypt each row in the permutable truth tables at each gate using the input labels; only one row will be decryptable at each gate, which is the output label for the outgoing wire from the gate. The evaluator eventually finishes evaluating the garbled circuit and obtains the label for the output of the function f computed on the garbler’s and the evaluator’s input. The garbler then must provide the true value for the output label so that both parties can get the output. A.2 PROTECTING CONFIDENTIALITY USING MPC Neural networks present a challenge to secure multi-party computation protocols due to their unique structure and exploitative combination of linear computations and non-linear activation functions. Cryptographic inference with neural networks can be considered in two party computation case in which one party has confidential input for which they wish to obtain output from a model and the other party stores the model; in many cases the party storing the model also wishes that the model remains secure. Confidential learning and inference with neural networks typically uses homomorphic encryption (HE) or secure multi-party computation (MPC) methods. Many libraries support pure HE or MPC protocols for secure inference of neural networks; a comprehensive list can be viewed in (Boemer et al., 2020). Notably, libraries such as nGraph-HE (Boemer et al., 2019) and CryptoNets (GiladBachrach et al., 2016) provide pure homomorphic encryption solutions to secure neural network inference. nGraph-HE, an extension of graph compiler nGraph, allows secure inference of DNNs through linear computations at each layer using CKKS homomorphic encryption scheme (Cheon et al., 2017; Boemer et al., 2019). CryptoNets similarly permit confidential neural network inference using another leveled homomorphic encryption scheme, YASHE’ (Gilad-Bachrach et al., 2016). On the other hand, several libraries employing primarily MPC methods in secure NN inference frameworks rely on ABY, a tool providing support for common non-polynomial activation functions in NNs through use of both Yao’s GC and GMW. 
In DL contexts, while pure homomorphic encryption methods maintain model security, their failure to support common non-polynomial activation functions leads to leaking of pre-activation values (feature maps at hidden layers). Tools that use solely MPC protocols avoid leaking pre-activation values as they can guarantee data confidentiality on non-polynomial activation functions but may compromise the security of the model architecture by leaking activation functions or model structure. Recent works on secure NN inference propose hybrid protocols that combine homomorphic encryption schemes, and MPC methods to build frameworks that try to reduce leakages common in pure HE and MPC protocols. Among recent works that use hybrid protocols and do not rely on trusted third parties are Gazelle (Juvekar et al., 2018), Delphi (Mishra et al., 2020), and MP2ML (Boemer et al., 2020). Gazelle, Delphi and MP2ML largely support non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference primarily relating to latency and confidentiality. In particular, Gazelle framework provides homomorphic encryption libraries with low latency implementations of algorithms for single instruction multiple data (SIMD) operations, ciphertext permutation, and homomorphic matrix and convolutional operations, pertinent to convolutional neural networks. Gazelle utilizes kernel methods to evaluate homomorphic operations for linear components of networks, garbled circuits to compute non-linear activation functions confidentially and additive secret sharing to quickly switch between these cryptographic protocols. Delphi builds on Gazelle, optimizing computation of both linear and non-linear com- putations in CNNs by secret sharing model weights in the pre-processing stage to speed up linear computations later, and approximating certain activation functions such as ReLU with polynomials. MP2ML employs nGraph-HE for homomorphic encryption and ABY framework for evaluation of non-linear functions using garbled circuits. B MORE BACKGROUND ON DIFFERENTIAL PRIVACY One of the compelling properties of differential privacy is that it permits the analysis and control of cumulative privacy cost over multiple consecutive computations. For instance, strong composition theorem (Dwork et al., 2010) gives a tight estimate of the privacy cost associated with a sequence of adaptive mechanisms {Mi}i∈I . Theorem 1 (Strong Composition). For ε, δ, δ′ ≥ 0, the class of (ε, δ)-differentially private mechanisms satisfies (ε′, kδ + δ′)-differential privacy under k-fold adaptive composition for: ε′ = ε √ 2k log(1/δ′) + kε(eε − 1) (2) To facilitate the evaluation of privacy leakage resulted by a randomized mechanismM, it is helpful to explicitly define its corresponding privacy loss cM and privacy loss random variableCM. Particularly, the fact thatM is (ε, δ)-differentially private is equivalent to a certain tail bound on CM. Definition 2 (Privacy Loss). Given a pair of adjacent datasets d, d′ ∈ D and an auxiliary input aux, the privacy loss cM of a randomized mechanismM evaluated at an outcome o ∈ R is defined as: cM(o | aux, d, d′) , log Pr[M(aux, d) = o] Pr[M(aux, d′) = o] (3) For an outcome o ∈ R sampled fromM(d), CM(aux, d, d′) takes the value cM(o | aux, d, d′). Based on the definition of privacy loss, Abadi et al. 
(Abadi et al., 2016) introduced the moments accountant to track higher-order moments of privacy loss random variable and achieved even tighter privacy bounds for k-fold adaptive mechanisms. Definition 3 (Moments Accountant). Given any adjacent datasets d, d′ ∈ D and any auxiliary input aux, the moments accountant of a randomized mechanismM is defined as: αM(λ) , max aux,d,d′ αM(λ | aux, d, d′) (4) where αM(λ | aux, d, d′) , logE[exp(λCM(aux, d, d′))] is obtained by taking the logarithm of the privacy loss random variable. As a natural relaxation to the conventional (ε, δ)-differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition. Definition 4 (Rényi Divergence). For two probability distributions P and Q defined over R, the Rényi divergence of order λ > 1 between them is defined as: Dλ(P ||Q) , 1 λ− 1 logEx∼Q [ (P (x)/Q(x))λ ] = 1 λ− 1 logEx∼P [ (P (x)/Q(x))λ−1 ] (5) Definition 5 (Rényi Differential Privacy). A randomized mechanismM is said to satisfy ε-Rényi differential privacy of order λ, or (λ, ε)-RDP for short, if for any adjacent datasets d, d′ ∈ D: Dλ(M(d) ||M(d′)) = 1 λ− 1 logEx∼M(d) [( Pr[M(d) = x] Pr[M(d′) = x] )λ−1] ≤ ε (6) Theorem 2 (From RDP to DP). If a randomized mechanismM guarantees (λ, ε)-RDP, then it also satisfies (ε+ log(1/δ)λ−1 , δ)-differential privacy for any δ ∈ (0, 1). Building upon the moments accountant and RDP techniques, Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017) provides a flexible approach to training machine learning models with strong privacy guarantees. Precisely, rather than directly learning from labeled private data, the model that gets released instead learns from unlabeled public data by querying a teacher ensemble for predicted labels. Models in the ensemble are themselves trained on disjoint partitions of the private dataset, while privacy guarantees are enabled by applying the Laplace mechanism to the ensemble’s aggregated label counts. Coupled with data-dependent privacy analysis, PATE achieves a tighter estimate of the privacy loss associated with label queries, especially when the consensus among teacher models is strong. Given this motivation, the follow-up work of PATE (Papernot et al., 2018) further improves the privacy bound both by leveraging a more concentrated noise distribution to strengthen consensus and by rejecting queries that lack consensus. C MORE BACKGROUND ON ACTIVE LEARNING Active learning, sometimes referred to as query learning, exploits the intuition that machine learning algorithms will be able to learn more efficiently if they can actively select the data from which they learn. For certain supervised learning tasks, this insight is of particularly important implications, as labeled data rarely exists in abundance and data labeling can be very demanding (Settles, 2009). In order to pick queries that will most likely contribute to model learning, various pool sampling methods have been proposed to estimate the informativeness of unlabeled samples. Uncertainty-based approaches (Lewis & Gale, 1994), such as margin sampling and entropy sampling, typically achieve a satisfactory trade-off between sample utility and computational efficiency. We also explore a core-set approach to active learning using greedy-k-center sampling (Sener & Savarese, 2017). Definition 6 (Margin Sampling (Scheffer et al., 2001)). 
Given an unlabeled dataset d and a classification model with conditional label distribution Pθ(y |x), margin sampling outputs the most informative sample: x∗ = arg min x∈d Pθ(ŷ1 |x)− Pθ(ŷ2 |x) (7) where ŷ1 and ŷ2 stand for the most and second most probable labels for x, according to the model. Definition 7 (Entropy Sampling). Using the setting and notations in Definition 6, margin sampling can be generalized by using entropy (Shannon, 1948) as an uncertainty measure as follows: x∗ = arg max x∈d − ∑ i Pθ(yi |x) logPθ(yi |x) (8) where yi ranges over all possible labels. Definition 8 (Greedy-K-center Sampling). We aim to solve the k-center problem defined by Farahani & Hekmatfar (2009), which is, intuitively, the problem of picking k center points that minimize the largest distance between a data point and its nearest center. Formally, this goal is defined as min S:|S∪D|≤k max i min j∈S∪D ∆(xi,xj) (9) where D is the current training set and S is our new chosen center points. This definition can can be solved greedily as shown in (Sener & Savarese, 2017). D MORE BACKGROUND ON FAIRNESS Due to the imbalance in sample quantity and learning complexity, machine learning models may have disparate predictive performance over different classes or demographic groups, resulting in unfair treatment of certain population. To better capture this phenomenon and introduce tractable countermeasures, various fairness-related criteria have been proposed, including balanced accuracy, demographic parity, equalized odds (Hardt et al., 2016), etc. Definition 9 (Balanced Accuracy). Balanced accuracy captures model utility in terms of both accuracy and fairness. It is defined as the average of recall scores obtained on all classes. Among the criteria that aim to alleviate discrimination against certain protected attributes, equalized odds and equal opportunity Hardt et al. (2016) are of particular research interests. Definition 10 (Equalized Odds). A machine learning model is said to guarantee equalized odds with respect to protected attribute A and ground truth label Y if its prediction Ŷ and A are conditionally independent given Y . In the case of binary random variables A, Y, Ŷ , this is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = y ] = Pr [ Ŷ = 1 |A = 1, Y = y ] , y ∈ {0, 1} (10) To put it another way, equalized odds requires the model to have equal true positive rates and equal false positive rates across the two demographic groups A = 0 and A = 1. Definition 11 (Equal Opportunity). Equal opportunity is a relaxation of equalized odds that requires non-discrimination only within a specific outcome group, often referred to as the advantaged group. Using previous notations, the binary case with advantaged group Y = 1 is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = 1 ] = Pr [ Ŷ = 1 |A = 1, Y = 1 ] (11) E PROOF OF CONFIDENTIALITY Here we prove that our protocol described in the main body does not reveal anything except the final noised result to Pi∗ . In can be proven in the standard real-world ideal-world paradigm, where the ideal functionality takes inputs from all parties and sends the final results to Pi∗ . We use A to denote the set of corrupted parties. Below, we describe the simulator (namely S). The simulator strategy depends on if i∗ is corrupted. If i∗ ∈ A, our simulator works as below: 1.a) The simulator simulates what honest parties would do. 1.b) For each i /∈ A, S sends fresh encryption of a random ri to Pi∗ . 1.c) For each i /∈ A, S sends random si to Pi∗ on be half of the 2PC functionality between Pi and Pi∗ . 
2-3 S sends the output of the whole computation to Pi∗ on behalf of the 2PC functionality between PG and Pi∗ If i∗ /∈ A, our simulator works as below: 1.a) If i∗ /∈ A, for each i ∈ A, S computes a fresh encryption of zero and sends it to Pi on behalf of Pi∗ . 1.b) The simulator simulates what honest parties would do. 1.c) For each i ∈ A, S sends random ŝi to Pi on behalf of the 2PC functionality between Pi and Pi∗ . 2-3 The simulator simulates what honest parties would do. Assuming that the underlying encryption scheme is CPA secure and that 2PC protocols used in step 1, 2 and 3 are secure with respect to standard definitions (i.e., reveals nothing beyond the outputs), our simulation itself is perfect. F DETAILS ON EXPERIMENTAL SETUP F.1 MNIST AND FASHION-MNIST We use the same setup as for CIFAR10 and SVHN datasets with the following adjustments. We select K = 250 as the default number of parties. For the imbalanced classes we select classes 1 and 2 for MNIST as well as Trouser and Pullover for Fashion-MNIST. We use the Gaussian noise with σ = 40 (similarly to SVHN). We are left with 1, 000 evaluation data points from the test set (similarly to CIFAR10). We fix the default value of = 2.35 for MNIST and = 3.89 for Fashion-MNIST. We use a variant of the LeNet architecture. F.2 DETAILS ON ARCHITECTURES To train the private models on subsets of datasets, we downsize the standard architectures, such as VGG-16 or ResNet-18. Below is the detailed list of layers in each of the architectures used (generated using torchsummary). The diagram for ResNet-10 also includes skip connections and convolutional layers for adjusting the sizes of feature maps. VGG-7 for SVHN: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 ReLU-3 [-1, 64, 32, 32] 0 MaxPool2d-4 [-1, 64, 16, 16] 0 Conv2d-5 [-1, 128, 16, 16] 73,728 BatchNorm2d-6 [-1, 128, 16, 16] 256 ReLU-7 [-1, 128, 16, 16] 0 MaxPool2d-8 [-1, 128, 8, 8] 0 Conv2d-9 [-1, 256, 8, 8] 294,912 BatchNorm2d-10 [-1, 256, 8, 8] 512 ReLU-11 [-1, 256, 8, 8] 0 Conv2d-12 [-1, 256, 8, 8] 589,824 BatchNorm2d-13 [-1, 256, 8, 8] 512 ReLU-14 [-1, 256, 8, 8] 0 MaxPool2d-15 [-1, 256, 4, 4] 0 Conv2d-16 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-17 [-1, 512, 4, 4] 1,024 ReLU-18 [-1, 512, 4, 4] 0 Conv2d-19 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-20 [-1, 512, 4, 4] 1,024 ReLU-21 [-1, 512, 4, 4] 0 Linear-22 [-1, 10] 5,130 ================================================================ Total params: 4,507,722 Params size MB: 17.20 ---------------------------------------------------------------- ResNet-10: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 Conv2d-3 [-1, 64, 32, 32] 36,864 BatchNorm2d-4 [-1, 64, 32, 32] 128 Conv2d-5 [-1, 64, 32, 32] 36,864 BatchNorm2d-6 [-1, 64, 32, 32] 128 BasicBlock-7 [-1, 64, 32, 32] 0 Conv2d-8 [-1, 128, 16, 16] 73,728 BatchNorm2d-9 [-1, 128, 16, 16] 256 Conv2d-10 [-1, 128, 16, 16] 147,456 BatchNorm2d-11 [-1, 128, 16, 16] 256 Conv2d-12 [-1, 128, 16, 16] 8,192 BatchNorm2d-13 [-1, 128, 16, 16] 256 BasicBlock-14 [-1, 128, 16, 16] 0 Conv2d-15 [-1, 256, 8, 8] 294,912 BatchNorm2d-16 [-1, 256, 8, 8] 512 Conv2d-17 [-1, 256, 8, 8] 589,824 BatchNorm2d-18 [-1, 256, 8, 8] 512 Conv2d-19 [-1, 256, 8, 8] 32,768 
BatchNorm2d-20 [-1, 256, 8, 8] 512 BasicBlock-21 [-1, 256, 8, 8] 0 Conv2d-22 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-23 [-1, 512, 4, 4] 1,024 Conv2d-24 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-25 [-1, 512, 4, 4] 1,024 Conv2d-26 [-1, 512, 4, 4] 131,072 BatchNorm2d-27 [-1, 512, 4, 4] 1,024 BasicBlock-28 [-1, 512, 4, 4] 0 Linear-29 [-1, 10] 5,130 ================================================================ Total params: 4,903,242 Params size MB: 18.70 ---------------------------------------------------------------- LeNet style architecture for MNIST: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 20, 24, 24] 520 MaxPool2d-2 Conv2d-3 [-1, 50, 8, 8] 25,050 MaxPool2d-4 Linear-5 [-1, 500] 400,500 ReLU-6 Linear-7 [-1, 10] 5,010 ================================================================ Total params: 431,080 Trainable params: 431,080 Non-trainable params: 0 ---------------------------------------------------------------- Input size MB: 0.00 Forward/backward pass size MB: 0.12 Params size MB: 1.64 Estimated Total Size MB: 1.76 ---------------------------------------------------------------- G ADDITIONAL EXPERIMENTS AND FIGURES Number of parties 150 200 250 300 400 Accuracy gain (%) 4.11 3.33 4.50 4.69 8.39 Best ε 4.50 2.50 2.35 2.00 1.63
1. What are the strengths and weaknesses of the proposed federated system for classification? 2. How does the system protect the privacy of the training data and the sample to be classified? 3. What are the limitations of the statistical security used in the system? 4. Can the system be improved by using secret sharing modulo an integer? 5. Is the term "collaborative learning" appropriate for the proposed protocol? 6. Are there any minor issues with the presentation of the paper that should be addressed?
Review
Review Summary: The authors combine several cryptographic techniques to create a federated system that allows several entities to run classification against all the models held by the participants without revealing information in the process. In particular, the sample to be classified is not revealed to any other party, and differential privacy is used to protect the training data that was used to train the models. A central semi-honest coordinator is used to aggregate the results and add the differential privacy without learning any private information. Pros: The strength of this work lies in combining relevant techniques and in showing experimentally that the resulting system improves over using a local model, both when the training data is distributed evenly and when it is skewed, while taking privacy considerations into account. Cons: From a cryptographic point of view, the combination of techniques is somewhat expectable. I'm wondering about the low statistical security (2^-23). This seems to be related to the usage of (unbounded) integer secret sharing. Would it be possible to use secret sharing modulo an integer, in which case the security could be perfect? I think it would be easier to follow if steps 1-3 were combined in the description because they all take place between the same pairs of parties. The exact techniques used don't seem to matter as long as the output secret sharing is the desired result, namely the one-hot vector. I find the term collaborative learning somewhat overblown because the proposed protocol only runs classification collaboratively. Overall: Despite the points above, I'm in favor of acceptance because the paper seems to improve on previous work, and because it is written very well. Minor issues: 3.3: leakeage 4.1: odd juxtaposition in the formatting of "arg max" and "sum" Figure 3: very hard to read in black-and-white
ICLR
Title CaPC Learning: Confidential and Private Collaborative Learning Abstract Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other’s data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multiparty computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.1 N/A Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other’s data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multiparty computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. 
1 INTRODUCTION
∗Equal contributions, authors ordered alphabetically. †Work done while the author was at Vector Institute. ‡Equal contributions, authors ordered alphabetically. 1Code is available at: https://github.com/cleverhans-lab/capc-iclr.
Overview of the CaPC protocol: 1a The querying party Pi∗ sends an encrypted query Enc(q); each answering party Pi runs secure two-party inference to evaluate Enc(q) on Mi and outputs encrypted logits Enc(ri). 1b Each answering party, Pi, generates a random vector r̂i, and sends Enc(ri − r̂i) to the querying party, Pi∗, who decrypts to get ri − r̂i. 1c Each answering party Pi runs Yao’s garbled circuit protocol (Yi) with querying party Pi∗ to get si for Pi∗ and ŝi for Pi s.t. si + ŝi is the one-hot encoding of argmax of logits. 2 Each answering party sends ŝi to the privacy guardian (PG). The PG sums ŝi from each Pi and adds Laplacian or Gaussian noise for DP. The querying party sums si from each Yi computation. 3 The PG and the querying party run Yao’s garbled circuit Ys to obtain the argmax of the querying party’s and PG’s noisy shares. The label is output to the querying party.
The predictions of machine learning (ML) systems often reveal private information contained in their training data (Shokri et al., 2017; Carlini et al., 2019) or test inputs. Because of these limitations, legislation increasingly regulates the use of personal data (Mantelero, 2013). The relevant ethical concerns prompted researchers to invent ML algorithms that protect the privacy of training data and confidentiality of test inputs (Abadi et al., 2016; Konečný et al., 2016; Juvekar et al., 2018). Yet, these algorithms require a large dataset stored either in a single location or distributed amongst billions of participants. This is the case for example with federated learning (McMahan et al., 2017). Prior algorithms also assume that all parties are collectively training a single model with a fixed architecture. These requirements are often too restrictive in practice. For instance, a hospital may want to improve a medical diagnosis for a patient using data and models from other hospitals. In this case, the data is stored in multiple locations, and there are only a few parties collaborating. Further, each party may also want to train models with different architectures that best serve their own priorities. We propose a new strategy that lets fewer heterogeneous parties learn from each other collaboratively, enabling each party to improve their own local models while protecting the confidentiality and privacy of their data. We call this Confidential and Private Collaborative (CaPC) learning. Our strategy improves on confidential inference (Boemer, 2020) and PATE, the private aggregation of teacher ensembles (Papernot et al., 2017). Through structured applications of these two techniques, we design a strategy for inference that enables participants to operate an ensemble of heterogeneous models, i.e. the teachers, without having to explicitly join each party’s data or teacher model at a single location. This also gives each party control at inference, because inference requires the agreement and participation of each party. In addition, our strategy provides measurable confidentiality and privacy guarantees, which we formally prove.
We use the running example of a network of hospitals to illustrate our approach. The hospitals participating in CaPC protocol need guarantees on both confidentiality (i.e., data from a hospital can only be read by said hospital) and privacy (i.e., no hospital can infer private information about other hospitals’ data by observing their predictions). First, one hospital queries all the other parties over homomorphic encryption (HE), asking them to label an encrypted input using their own teacher models. This can prevent the other hospitals from reading the input (Boemer et al., 2019), an improvement over PATE, and allows the answering hospitals to provide a prediction to the querying hospital without sharing their teacher models. The answering hospitals use multi-party computation (MPC) to compute an aggregated label, and add noise during the aggregation to obtain differential privacy guarantees (Dwork et al., 2014). This is achieved by a privacy guardian (PG), which then relays the aggregated label to the querying hospital. The PG only needs to be semi-trusted: we operate under the honest-but-curious assumption. The use of MPC ensures that the PG cannot decipher each teacher model’s individual prediction, and the noise added via noisy argmax mechanism gives differential privacy even when there are few participants. This is a significant advantage over prior decentralized approaches like federated learning, which require billions of participants to achieve differential privacy, because the sensitivity of the histogram used in our aggregation is lower than that of the gradients aggregated in federated learning. Unlike our approach, prior efforts involving few participants thus had to prioritize model utility over privacy and only guarantee confidentiality (Sheller et al., 2020). Finally, the querying hospital can learn from this confidential and private label to improve their local model. Since the shared information is a label rather than a gradient, as used by federated learning, CaPC participants do not need to share a common model architecture; in fact, their architectures can vary throughout the participation in the protocol. This favors model development to a degree which is not possible in prior efforts such as federated learning. We show how participants can instantiate various forms of active and online learning with the labels returned by our protocol: each party participating in the CaPC protocol may (a) identify deficiencies of its model throughout its deployment and (b) finetune the model with labels obtained by interacting with other parties. Intuitively, we achieve the analog of a doctor querying colleagues for a second opinion on a difficult diagnostic, without having to reveal the patient’s medical condition. This protocol leads to improvements in both the accuracy and fairness (when there is a skew in the data distribution of each participating hospital) of model predictions for each of the CaPC participants. To summarize, our contributions are the following: • We introduce CaPC learning: a confidential and private collaborative learning platform that provides both confidentiality and privacy while remaining agnostic to ML techniques. • Through a structured application of homomorphic encryption, secure MPC, and private aggregation, we design a protocol for CaPC. We use two-party deep learning inference and design an implementation of the noisy argmax mechanism with garbled circuits. 
• Our experiments on SVHN and CIFAR10 demonstrate that CaPC enables participants to collaborate and improve the utility of their models, even in the heterogeneous setting where the architectures of their local models differ, and when there are only a few participants. • Further, when the distribution of data drifts across participating parties, we show that CaPC significantly improves fairness metrics because querying parties benefit from knowledge learned by other parties on different data distributions, which is distilled in their predictions. • We release the source code for reproducing all our experiments. 2 BACKGROUND Before introducing CaPC, we first go over elements of cryptography and differential privacy that are required to understand it. Detailed treatment of these topics can be found in Appendices A and B. 2.1 CRYPTOGRAPHIC PRELIMINARIES FOR CONFIDENTIALITY The main cryptographic tool used in CaPC is secure multi-party computation (MPC) (Yao, 1986). MPC allows a set of distrusting parties to jointly evaluate a function on their input without revealing anything beyond the output. In general, most practical MPC protocols can be classified into two categories: 1) generic MPC protocols that can compute any function with the above security goal (Malkhi et al., 2004); and 2) specialized MPC protocols that can be used to compute only selected functions (e.g., private set intersection (Pinkas et al., 2020), secure machine learning (Mohassel & Zhang, 2017)). Although specialized MPC protocols are less general, they are often more efficient in execution time. Protocols in both categories use similar cryptographic building blocks, including (fully) homomorphic encryption (Gentry, 2009), secret sharing (Shamir, 1979), oblivious transfer (Rabin, 2005), garbled circuits (Yao, 1986). To understand our protocol, it is not necessary to know all details about these cryptographic building blocks and thus we describe them in Appendix A.1. Our work uses these cryptographic preliminaries for secure computation at prediction time, unlike recent approaches, which explore new methods to achieving confidentiality at training time (Huang et al., 2020a;b). The cryptographic protocol designed in this paper uses a specialized MPC protocol for securely evaluating a private ML model on private data, and a generic two-party computation protocol to compute an argmax in different forms. For the generic two-party computation, we use a classical Yao’s garbled-circuit protocol that can compute any function in Boolean circuit. For secure classification of neural networks, our protocol design is flexible to work with most existing protocols (Boemer et al., 2020; 2019; Gilad-Bachrach et al., 2016; Mishra et al., 2020). Most existing protocols are different in how they handle linear layers (e.g. convolution) and non-linear layers (e.g. ReLU). For instance, one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation, or using classical MPC techniques with more communication but less computation. Other works (Juvekar et al., 2018) use a hybrid of both and thus enjoy further improvement in performance (Mishra et al., 2020). We discuss it in more details in Appendix A.2. 2.2 DIFFERENTIAL PRIVACY Differential privacy is the established framework for measuring the privacy leakage of a randomized algorithm (Dwork et al., 2006). 
In the context of machine learning, it requires the training algorithm to produce statistically indistinguishable outputs on any pair of datasets that only differ by one data point. This implies that an adversary observing the outputs of the training algorithm (e.g., the model’s parameters, or its predictions) can improve its guess at most by a bounded probability when inferring properties of the training data points. Formally, we have the following definition. Definition 1 (Differential Privacy). A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any subset S ⊆ R and any adjacent datasets d, d′ ∈ D, i.e. ‖d− d′‖1 ≤ 1, the following inequality holds: Pr [M(d) ∈ S] ≤ eεPr [M(d′) ∈ S] + δ (1) In our work, we obtain differential privacy by post-processing the outputs of an ensemble of models with the noisy argmax mechanism of Dwork et al. (2014) (for more details on differential privacy, please refer to Appendix B), à la PATE (Papernot et al., 2017). We apply the improved analysis of PATE (Papernot et al., 2018) to compute the privacy guarantees obtained (i.e., a bound on ε). Our technique differs from PATE in that each of the teacher models is trained by different parties whereas PATE assumes a centralized learning setting where all of the training and inference is performed by a single party. Note that our technique is used at inference time, which differs from recent works in differential privacy that compare neuron pruning during training with mechanisms satisfying differential privacy (Huang et al., 2020c). We use cryptography to securely decentralize computations. 3 THE CAPC PROTOCOL We now introduce our protocol for achieving both confidentiality and privacy in collaborative (CaPC) learning. To do so, we formalize and generalize our example of collaborating hospitals from Section 1. 3.1 PROBLEM DESCRIPTION A small number of parties {Pi}i∈[1,K], each holding a private dataset Di = {(xj , yj or∅)j∈[1,Ni]} and capable of fitting a predictive modelMi to it, wish to improve the utility of their individual models via collaboration. Due to the private nature of the datasets in question, they cannot directly share data or by-products of data (e.g., model weights) with each other. Instead, they will collaborate by querying each other for labels of the inputs about which they are uncertain. In the active learning paradigm, one party Pi∗ poses queries in the form of data samples x and all the other parties {Pi}i 6=i∗ together provide answers in the form of predicted labels ŷ. Each model {Mi}i∈[1,K] can be exploited in both the querying phase and the answering phase, with the querying party alternating between different participants {Pi}i∈[1,K] in the protocol. Threat Model. To obtain the strong confidentiality and privacy guarantees that we described, we require a semi-trusted third party called the privacy guardian (PG). We assume that the PG does not collude with any party and that the adversary can corrupt any subset of C parties {Pi}i∈[1,C]. When more than one party gets corrupted, this has no impact on the confidentiality guarantee, but the privacy budget obtained will degrade by a factor proportional to C because the sensitivity of the aggregation mechanism increases (see Section 3.3). We work in the honest-but-curious setting, a commonly adopted assumption in cryptography which requires the adversary to follow the protocol description correctly but will try to infer information from the protocol transcript. 
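As a small numerical illustration of Definition 1 before the protocol is described (this sketch is not part of CaPC itself; the counts, ε, and thresholds are arbitrary choices), the following Python snippet adds Laplace noise with scale 1/ε to two adjacent counts and compares the probabilities of threshold events, which remain within a factor of e^ε of each other:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5                      # illustrative privacy parameter
d, d_adj = 10, 11              # adjacent databases summarized by counts differing by 1

def laplace_mech(count, eps, n):
    # Laplace mechanism for a sensitivity-1 count query: noise scale = 1/eps.
    return count + rng.laplace(scale=1.0 / eps, size=n)

n = 1_000_000
out_d = laplace_mech(d, eps, n)
out_adj = laplace_mech(d_adj, eps, n)

for t in [8, 9, 10]:
    p = (out_d > t).mean()      # Monte Carlo estimate of Pr[M(d) > t]
    q = (out_adj > t).mean()    # Monte Carlo estimate of Pr[M(d') > t]
    print(f"t={t}: Pr[M(d)>t]={p:.3f}  Pr[M(d')>t]={q:.3f}  "
          f"ratio={max(p / q, q / p):.3f}  bound e^eps={np.exp(eps):.3f}")
```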
3.2 CAPC PROTOCOL DESCRIPTION Our protocol introduces a novel formulation of the private aggregation of teachers, which implements two-party confidential inference and secret sharing to improve upon the work of Papernot et al. (2017) and guarantee confidentiality. Recall that the querying party Pi∗ initiates the protocol by sending an encrypted input x to all answering parties Pi, i ≠ i∗. We use sk and pk to denote the secret and public keys owned by party Pi∗. The proposed protocol consists of the following steps:
1. For each i ≠ i∗, Pi (with model parameters Mi as its input) and Pi∗ (with x, sk, pk as its input) run a secure two-party protocol. As the outcome, Pi obtains ŝi and Pi∗ obtains si such that si + ŝi = OneHot(arg max(ri)), where ri are the predicted logits. This step can be achieved as follows:
a) Pi∗ and Pi run a secure two-party ML classification protocol such that Pi∗ learns nothing while Pi learns Encpk(ri), where ri are the predicted logits.
b) Pi generates a random vector r̂i, performs the computation Encpk(ri) − Encpk(r̂i) = Encpk(ri − r̂i) on the encrypted data, and sends the encrypted difference to Pi∗, who decrypts and obtains (ri − r̂i).
c) Pi (with r̂i as input) and Pi∗ (with ri − r̂i as input) engage in Yao’s two-party garbled-circuit protocol to obtain vector si for Pi∗ and vector ŝi for Pi, such that si + ŝi = OneHot(arg max(ri)).
2. Pi sends ŝi to the PG. The PG computes ŝ = ∑i≠i∗ ŝi + DPNoise(ε), where DPNoise(ε) is element-wise Laplacian or Gaussian noise whose variance is calibrated to obtain a desired differential privacy guarantee ε; whereas Pi∗ computes s = ∑i≠i∗ si.
3. The PG and Pi∗ engage in Yao’s two-party garbled-circuit protocol for computing the argmax: Pi∗ gets arg max(ŝ + s) and the PG gets nothing.
Next, we elaborate on the confidentiality and privacy guarantees achieved by CaPC. 3.3 CONFIDENTIALITY AND DIFFERENTIAL PRIVACY GUARANTEES Confidentiality Analysis. We prove in Appendix E that the above protocol reveals nothing to Pi or the PG and only reveals the final noisy result to Pi∗. The protocol is secure against a semi-honest adversary corrupting any subset of parties. Intuitively, the proof can be derived from the security of the underlying components, including the two-party classification protocol, secret sharing, and Yao’s garbled circuit protocol. As discussed in Section 4.1 and Appendix A.1, for secret sharing of unbounded integers, we need to make sure the random padding is picked from a domain much larger than the maximum possible value being shared. Given the above, a corrupted Pi∗ cannot learn anything about Mi of the honest party due to the confidentiality guarantee of the secure classification protocol; similarly, the confidentiality of x against a corrupted Pi is also protected. Intermediate values are all secretly shared (and only recovered within garbled circuits) so they are not visible to any party. Differential Privacy Analysis. Here, any potential privacy leakage in terms of differential privacy is incurred by the answering parties {Pi}i≠i∗ for their datasets {Di}i≠i∗, because these parties share the predictions of their models. Before sharing these predictions with Pi∗, we follow the PATE protocol: we compute the histogram of label counts ŷ, then add Laplacian or Gaussian noise using a sensitivity of 1, and finally return the argmax of the noised histogram to Pi∗.
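To make the arithmetic of steps 1-3 concrete, the following minimal sketch simulates the plaintext effect of the protocol; it omits all cryptography (homomorphic encryption, garbled circuits, and the two-party classification protocol are replaced by local computation), and the number of classes, number of answering parties, and noise scale are made-up illustrative values. Only the share algebra si + ŝi = OneHot(argmax(ri)) and the PG's noisy aggregation follow the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, num_answering = 10, 5
sigma = 2.0  # DP noise scale (illustrative; CaPC calibrates it to a target epsilon)

def one_hot(index, n):
    v = np.zeros(n, dtype=np.int64)
    v[index] = 1
    return v

# Step 1: each answering party P_i turns its logits r_i into two additive shares,
# s_i (held by the querying party) and s_hat_i (held by P_i), of OneHot(argmax(r_i)).
s_list, s_hat_list = [], []
for _ in range(num_answering):
    r_i = rng.normal(size=num_classes)                        # stand-in for the model's logits
    vote = one_hot(int(np.argmax(r_i)), num_classes)
    s_hat_i = rng.integers(-2**20, 2**20, size=num_classes)   # random share
    s_i = vote - s_hat_i                                      # the two shares sum to the one-hot vote
    s_list.append(s_i)
    s_hat_list.append(s_hat_i)

# Step 2: the PG sums the s_hat_i shares and adds Gaussian noise; the querying party sums the s_i.
s_hat = np.sum(s_hat_list, axis=0) + rng.normal(scale=sigma, size=num_classes)
s = np.sum(s_list, axis=0)

# Step 3: only the argmax of the recombined, noised histogram is revealed to the querying party.
label = int(np.argmax(s + s_hat))
print("returned label:", label)
```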
Since Pi∗ only sees this noisily aggregated label, both the data-dependent and data-independent differential privacy analyses of PATE apply to Pi∗ (Papernot et al., 2017; 2018). Thus, when there are enough parties with high consensus, we can obtain a tighter bound on the privacy budget, as the true plurality will more likely be returned (refer to Appendix B for more details on how this is achieved in PATE). This setup assumes that only one answering party can be corrupted. If instead C parties are corrupted, the sensitivity of the noisy aggregation mechanism will be scaled by C and the privacy guarantee will deteriorate. There is no privacy leakage to the PG; it does not receive any part of the predictions from {Pi}i≠i∗. 4 EXPERIMENTS CaPC aims to improve the model utility of collaborating parties by providing them with new labelled data for training their respective local models. Since we designed the CaPC protocol with techniques for confidentiality (i.e., confidential inference and secret sharing) and differential privacy (i.e., private aggregation), our experiments consider the following three major dimensions:
1. How well does collaboration improve the model utility of all participating parties?
2. What requirements are there to achieve privacy and how can these be relaxed under different circumstances? What is the trade-off between the privacy and utility provided by CaPC?
3. What is the resulting computational cost of ensuring confidentiality?
4.1 IMPLEMENTATION We use the HE-transformer library with MPC (MP2ML) by Boemer (2020) in step 1a of our protocol for confidential two-party deep learning inference. To make our protocol flexible to any private inference library, not just those that return the label predicted by the model (HE-transformer only returns logits), we incorporate steps 1b and 1c of the protocol outside of the private inference library. The EMP toolkit (Wang et al., 2016) for generic two-party computation is used to compute the operations, including argmax and sum, via garbled circuits. To secret share the encrypted values, we first convert them into integers over a prime field according to the CKKS parameters, and then perform secret sharing on that domain to obtain perfect secret sharing. We use the single largest logit value for each Mi, obtained on its training set Di in plain text, to calculate the necessary noise. 4.2 EVALUATION SETUP Collaboration. We use the following for experiments unless otherwise noted. We uniformly sample from the training set in use (for the SVHN dataset, we combine its original training set and extra set to get a larger training set), without replacement, to create disjoint partitions, Di, of equal size and identical data distribution for each party. We select K = 50 and K = 250 as the number of parties for CIFAR10 and SVHN, respectively (the number is larger for SVHN because we have more data). We select Q = 3 querying parties, Pi∗, and similarly divide part of the test set into Q separate private pools from which each Pi∗ selects queries until its privacy budget ε is reached (using Gaussian noise with σ = 40 on SVHN and 7 on CIFAR10). We are left with 1,000 and 16,032 evaluation data points from the test sets of CIFAR10 and SVHN, respectively. We fix ε = 2 and 20 for SVHN and CIFAR10, respectively (which leads to ≈ 550 queries per party), and report accuracy on the evaluation set. Querying models are retrained on their Di plus the newly labelled data; the difference in accuracies is their accuracy improvement.
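The partitioning and query-pool setup described above can be sketched in a few lines. The snippet below is illustrative only: the dataset and pool sizes are placeholder choices in the spirit of the CIFAR10 configuration (K = 50, Q = 3), not values copied from the released code.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_disjoint_partitions(num_examples, k):
    """Uniformly shuffle the training indices and split them into K disjoint,
    equally sized partitions D_i (one per party)."""
    idx = rng.permutation(num_examples)
    usable = (num_examples // k) * k      # drop the remainder so partitions are equal-sized
    return np.split(idx[:usable], k)

def make_query_pools(num_test, q, pool_size):
    """Carve Q disjoint private query pools out of the test set; the rest is kept for evaluation."""
    idx = rng.permutation(num_test)
    pools = [idx[i * pool_size:(i + 1) * pool_size] for i in range(q)]
    eval_idx = idx[q * pool_size:]
    return pools, eval_idx

parts = make_disjoint_partitions(num_examples=50_000, k=50)           # e.g. CIFAR10-like with K = 50
pools, eval_idx = make_query_pools(num_test=10_000, q=3, pool_size=3_000)
print(len(parts), parts[0].shape, len(pools), eval_idx.shape)
```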
We use shallower variants of VGG, namely VGG-5 and VGG-7 for CIFAR10 and SVHN, respectively, to accommodate the small size of each party’s private dataset. We instantiate VGG-7 with 6 convolutional layers and one final fully-connected layer, thus there are 7 functional layers overall. Similarly, VGG-5 has 4 convolutional layers followed by a fully connected layer. The ResNet-10 architecture starts with a single convolutional layer, followed by 4 basic blocks with 2 convolutional layers in each block, and ends with a fully-connected layer, giving 10 functional layers in total. The ResNet-8 architecture that we use excludes the last basic block and increases the number of neurons in the last (fully-connected) layer. We present more details on architectures in Appendix F.2. We first train local models for all parties using their non-overlapping private datasets. Next, we run the CaPC protocol to generate query-answer pairs for each querying party. Finally, we retrain the local model of each querying party using the combination of their original private dataset and the newly obtained query-answer pairs. We report the mean accuracy and class-specific accuracy averaged over 5 runs for all retrained models, where each run uses a different random seed. Heterogeneity and Data Skew. Where noted, our heterogeneous experiments (recall that this is a newly applicable setting that CaPC enables) use VGG-7, ResNet-8, and ResNet-10 architectures, with K/3 parties using each architecture. One model of each architecture is used for each of the Q = 3 querying parties. Our data skew experiments use 80% fewer data samples for the classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and 90% less data for classes 1 and 2 on SVHN. In turn, unfair ML algorithms perform worse on these specific classes, leading to worse balanced accuracy (see Appendix D). We adopt balanced accuracy instead of other fairness metrics because the datasets we use have no sensitive attributes, making those metrics inapplicable. We employ margin, entropy, and greedy k-center active learning strategies (described in Appendix C) to encourage ML algorithms to sample more queries from regimes that have been underrepresented and to improve their fairness performance. 4.3 COLLABORATION ANALYSIS We first investigate the benefits of collaboration for improving each party’s model performance in several different settings, namely: homogeneous and heterogeneous model architectures across querying and answering parties, and uniform and non-uniform data sampling for training data. From these experiments, we observe increased accuracy in both homogeneous and heterogeneous settings across all model architectures (Section 4.3.1) and improved balanced accuracy when there is data skew between parties, i.e., non-uniform private data (Section 4.3.2). 4.3.1 UNIFORMLY SAMPLED PRIVATE DATA The first setting we consider is a uniform distribution of data amongst the parties: there is no data drift among parties. Our setup for the uniform data distribution experiments is detailed in Section 4.2. We evaluate the per-class and overall accuracy before and after CaPC in both homogeneous and heterogeneous settings on the CIFAR10 and SVHN datasets. In Figure 2, we see there is a consistent increase in accuracy for each class and overall in terms of mean accuracy across all parties on the test sets.
We observe these improvements in both the homogeneous and heterogeneous settings for both datasets tested. As demonstrated in Figure 2, there is a greater climb in mean accuracy for the heterogeneous setting than the homogeneous setting on SVHN. Figures 5, 6, and 7 provide a breakdown of the benefits obtained by each querying party. We can see from these figures that all querying parties observe an increase in overall accuracy in heterogeneous and homogeneous settings with both datasets; additionally, the jump in accuracy is largely constant between different model architectures. In only 6.67% of all cases were any class-specific accuracies degraded, but they still showed a net increase in overall model accuracy. 4.3.2 NON-UNIFORMLY SAMPLED PRIVATE DATA In this section, we focus our analysis on two types of data skew between parties: varying size of data per class and total size of data provided; the setup is described in Section 4.2. To analyze data skew, we explore the balanced accuracy (which measures mean recall on a per-class basis, see Appendix D). We use balanced accuracy in order to investigate aggregate fairness gains offered by CaPC. Random sampling from non-uniform distributions leads to certain pitfalls: e.g., underrepresented classes are not specifically targeted in sampling. Thus, we additionally utilize active learning techniques, namely entropy, margin, and greedy-k-center (see Definitions 6-8 in Appendix C), and analyze balanced accuracy with each strategy. In Figure 3, we see that CaPC has a significant impact on the balanced accuracy when there is data skew between the private data of participating parties. Even random sampling can drastically improve balanced accuracy. Leveraging active learning techniques, we can achieve additional benefits in balanced accuracy. In particular, we observe that entropy and margin sampling achieves the greatest improvement over random sampling in per-class accuracy for the less represented classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and classes 1 and 2 on SVHN. These enhancements can be explained by the underlying mechanisms of margin and entropy sampling because the less-represented classes have a higher margin/entropy; the queries per class for each method are shown in Figure 9. Through these experiments, we show that in data skew settings, the CaPC protocol can significantly improve the fair performance of models (as measured by balanced accuracy), especially when combined with active learning techniques. Note that we see similar trends with (normal) accuracy as well. 4.4 PRIVACY VERSUS UTILITY We now study the trade-off between privacy and utility of our obtained models. Recall that we add Gaussian (or Laplacian) noise to the aggregate of predicted labels of all parties. Under the uniform setting, we choose the standard deviation σ by performing a (random) grid search and choosing the highest noise before a significant loss in accuracy is observed. In doing so, each query uses minimal ε while maximizing utility. Figure 11 in Appendix F shows a sample plot for K = 250 models. For more details on how ε is calculated, please refer to Appendix B. As we increase the number of parties, we can issue more queries for a given privacy budget (ε) which leads to a higher accuracy gain. In Figure 4, we report the accuracy gain achieved using CaPC with various numbers of parties, K. With a fixed total dataset size, increasing the number of parties decreases their training data size, leading to worse performing models. 
These models see the largest benefit from CaPC but, importantly, we always see a net improvement across all values of K.
Number of parties    150    200    250    300    400
Accuracy gain (%)    0.62   1.45   2.39   3.07   3.87
Best ε               3.50   3.32   2.60   2.40   1.91
4.5 COMPUTATIONAL COSTS OF CONFIDENTIALITY The incorporation of confidentiality in CaPC increases computational costs. We segment the analysis of computational overhead of CaPC into three parts corresponding to sequential steps in the protocol: (1) inference, (2) secret sharing between each querying and answering party, and (3) secret sharing between the querying party and the PG. Each of these steps is analyzed in terms of the wall-clock time (in seconds). We use the default encryption setting in HE-transformer and vary the modulus range, N, which denotes the max value of a given plain text number, to increase the maximum security level possible. HE-transformer only supports inference on CPUs and is used in step (1). Step (1) with neural network inference using MPC incurs the highest CPU and network costs (see Table 1 and Figure 13 in Appendix F). Even the base level of security increases computational cost by 100X, and high security levels see increases up to 1000X, in comparison to the non-encrypted inference on CPU. Compared to step (1), the rest of the CaPC protocol incurs a negligible overhead to perform secret sharing. Overall, CaPC incurs only a low additional cost over the underlying MP2ML framework, as shown in Figure 13, which enables applicability and scalability as these tools progress. 5 DISCUSSION AND CONCLUSIONS CaPC is a secure and private protocol that protects both the confidentiality of test data and the privacy of training data, which are desired in applications like healthcare and finance. Our framework facilitates collaborative learning using heterogeneous model architectures and separate private datasets, even if the number of parties involved is small. It offers notable advantages over recent methods for learning with multiple participants, such as federated learning, which assumes training of a single fixed model architecture. CaPC does not assume a homogeneous model architecture and allows parties to separately and collaboratively train different models optimized for their own purposes. Federated learning also requires a large number of parties while CaPC provides gains in accuracy with significantly fewer participants, even in contexts where each party already has a model with high accuracy. Notably, CaPC incurs low overhead on top of underlying tools used for secure neural network inference. Through our experiments, we also demonstrate that CaPC facilitates collaborative learning even when there exists non-IID (highly skewed) private data among parties. Our experiments show that CaPC improves on the fair performance of participating querying models as indicated by improvements in the balanced accuracy, a common fairness metric. Further, we observe a significant increase in per-class accuracy on less-represented classes on all datasets tested. Notably, CaPC is easily configured to leverage active learning techniques to achieve additional fairness improvement gains or to learn from other heterogeneous models trained with fairness techniques, e.g., with synthetic minority oversampling (Chawla et al., 2002). In future work, we look to analyzing the fairness implications of CaPC in contexts where there is discrimination over a private dataset’s sensitive attributes, not just class labels.
In these cases, other fairness metrics like equalized odds and equal opportunity (see Appendix D) can be explored. We note some limitations of the proposed protocol. HE-transformer does not prevent leaking certain aspects of the model architecture, such as the type of non-linear activation functions and presence of MaxPooling layers. CaPC improves upon existing methods in terms of the necessary number of parties; however, it would be favorable to see this number decreased under 50 for better flexibility and applicability in practice. In the face of this last limitation, when there are few physical parties, we can generate a larger number of virtual parties for CaPC, where each physical party subdivides their private dataset into disjoint partitions and trains multiple local models. This would allow CaPC to tolerate more noise injected during aggregation and provide better privacy guarantees. Note that each physical party could select queries using a dedicated strong model instead of the weak models used for answering queries in CaPC. This setting is desirable in cases where separate models are required within a single physical party, for example, in a multi-national organization with per-country models. ACKNOWLEDGMENTS We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Microsoft, Intel, CIFAR through the Canada CIFAR AI Chair and AI catalyst programs, NFRF through an Exploration grant, and NSERC COHESA Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners. Finally, we would like to thank members of CleverHans Lab for their feedback, especially: Tejumade Afonja, Varun Chandrasekaran, Stephan Rabanser, and Jonas Guan. A MORE BACKGROUND ON CRYPTOGRAPHY A.1 CRYPTOGRAPHIC BUILDING BLOCKS Homomorphic encryption. Homomorphic encryption defines an encryption scheme such that the encryption and decryption functions are homomorphic between plaintext and ciphertext spaces. Although it is known that fully homomorphic encryption can be constructed based on lattice-based assumptions, most applications only require a weaker version with bounded number of multiplications on each ciphertext. Schemes with this constraint are much more practical, including for example, BGV (Brakerski et al., 2014), CKKS (Cheon et al., 2017), etc. Secret sharing. Secret sharing denotes a scheme in which a datum, the secret, is shared amongst a group of parties by dividing the secret into parts such that each party only has one part, or ‘share’ of the secret. The secret can only be recovered if a certain number of parties conspire to combine their shares. It is easy to construct secret sharing modulo a positive integer. If the application does not allow modular operation, one can still achieve statistically secure secret sharing by using random shares that are much larger than the secret being shared (Evans et al., 2011). Oblivious transfer. Oblivious transfer involves two parties: the sending party and the receiving party. The sending party has two pieces of information, s0 and s1, and the receiver wants to receive sb, where b ∈ {0, 1}, such that the sending party cannot learn b and the receiving party cannot learn s¬b. 
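Returning to the secret-sharing construction described above: the following is a minimal sketch of additive sharing over the unbounded integers, where statistical hiding comes from drawing the random pad from a domain much larger than the secret. The 16-bit secret bound and the 40-bit gap are arbitrary illustrative choices, not the parameters used by CaPC.

```python
import secrets

SECRET_BOUND = 2**16          # assumed bound on the value being shared (e.g. a quantized logit)
PAD_BITS = 16 + 40            # pad is ~2^40 times larger than the secret (statistical hiding)

def share(x):
    """Split integer x (0 <= x < SECRET_BOUND) into two additive shares over the integers."""
    r = secrets.randbelow(2**PAD_BITS)
    return x - r, r           # share_a + share_b == x

def reconstruct(share_a, share_b):
    return share_a + share_b

a, b = share(12345)
assert reconstruct(a, b) == 12345
# Each share on its own hides x up to a statistical distance of roughly 2^-40, because
# both r and x - r are (nearly) uniform over a range that dwarfs SECRET_BOUND.
```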
In general, oblivious transfer requires public-key operations, however, it is possible to execute a large number of oblivious transfers with only a very small number of public-key operations based on oblivious transfer extension (Ishai et al., 2003). Garbled circuits. In Yao’s garbled circuit protocol for two-party computation, each of the two parties assumes a role, that of garbler or that of evaluator. The function f on which to compute each of the two parties’ inputs is described as a Boolean circuit. The garbler randomly generates aliases (termed labels) representing 0 and 1 in the Boolean circuit describing f and replaces the binary values with the generated labels for each wire in the circuit. At each gate in the circuit, which can be viewed as a truth table, the garbler uses the labels of each possible combination of inputs to encrypt the corresponding outputs, and permutes the rows of the truth table. The garbler then uses the generated labels for 0 and 1 to encode their own input data and sends these labels and the garbled Boolean circuit to the evaluator. The evaluator now converts their binary input data to the corresponding labels through a 1-2 oblivious transfer protocol with the garbler. After receiving the labels for their input, the evaluator evaluates the garbled circuit by trying to decrypt each row in the permutable truth tables at each gate using the input labels; only one row will be decryptable at each gate, which is the output label for the outgoing wire from the gate. The evaluator eventually finishes evaluating the garbled circuit and obtains the label for the output of the function f computed on the garbler’s and the evaluator’s input. The garbler then must provide the true value for the output label so that both parties can get the output. A.2 PROTECTING CONFIDENTIALITY USING MPC Neural networks present a challenge to secure multi-party computation protocols due to their unique structure and exploitative combination of linear computations and non-linear activation functions. Cryptographic inference with neural networks can be considered in two party computation case in which one party has confidential input for which they wish to obtain output from a model and the other party stores the model; in many cases the party storing the model also wishes that the model remains secure. Confidential learning and inference with neural networks typically uses homomorphic encryption (HE) or secure multi-party computation (MPC) methods. Many libraries support pure HE or MPC protocols for secure inference of neural networks; a comprehensive list can be viewed in (Boemer et al., 2020). Notably, libraries such as nGraph-HE (Boemer et al., 2019) and CryptoNets (GiladBachrach et al., 2016) provide pure homomorphic encryption solutions to secure neural network inference. nGraph-HE, an extension of graph compiler nGraph, allows secure inference of DNNs through linear computations at each layer using CKKS homomorphic encryption scheme (Cheon et al., 2017; Boemer et al., 2019). CryptoNets similarly permit confidential neural network inference using another leveled homomorphic encryption scheme, YASHE’ (Gilad-Bachrach et al., 2016). On the other hand, several libraries employing primarily MPC methods in secure NN inference frameworks rely on ABY, a tool providing support for common non-polynomial activation functions in NNs through use of both Yao’s GC and GMW. 
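To make Yao's construction described above more concrete, here is a toy garbling of a single AND gate. It is purely illustrative: it uses a SHA-256 hash as the row "encryption" and an all-zero tag so the evaluator can recognize the correct row; real implementations, such as the EMP toolkit used in Section 4.1, rely on optimized and provably secure constructions instead.

```python
import hashlib
import secrets

def row_key(label_a: bytes, label_b: bytes) -> bytes:
    # Hash-based key for one row of the garbled truth table (toy construction).
    return hashlib.sha256(label_a + label_b).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and_gate():
    # The garbler picks random 16-byte labels for 0 and 1 on each wire.
    wa = [secrets.token_bytes(16), secrets.token_bytes(16)]
    wb = [secrets.token_bytes(16), secrets.token_bytes(16)]
    wc = [secrets.token_bytes(16), secrets.token_bytes(16)]
    rows = []
    for i in (0, 1):
        for j in (0, 1):
            plaintext = wc[i & j] + b"\x00" * 16          # output label plus an all-zero tag
            rows.append(xor(row_key(wa[i], wb[j]), plaintext))
    secrets.SystemRandom().shuffle(rows)                  # hide which row is which
    return wa, wb, wc, rows

def evaluate(rows, label_a, label_b):
    # The evaluator holds one label per input wire and finds the single row that decrypts
    # to a valid (zero) tag; it learns only the output label, not the underlying bits.
    key = row_key(label_a, label_b)
    for row in rows:
        pt = xor(key, row)
        if pt[16:] == b"\x00" * 16:
            return pt[:16]
    raise ValueError("no row decrypted")

wa, wb, wc, rows = garble_and_gate()
out = evaluate(rows, wa[1], wb[0])      # evaluator's inputs: a = 1, b = 0
print(out == wc[1 & 0])                 # True: the evaluator obtained the label for AND(1, 0) = 0
```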
In DL contexts, while pure homomorphic encryption methods maintain model security, their failure to support common non-polynomial activation functions leads to leaking of pre-activation values (feature maps at hidden layers). Tools that use solely MPC protocols avoid leaking pre-activation values as they can guarantee data confidentiality on non-polynomial activation functions but may compromise the security of the model architecture by leaking activation functions or model structure. Recent works on secure NN inference propose hybrid protocols that combine homomorphic encryption schemes, and MPC methods to build frameworks that try to reduce leakages common in pure HE and MPC protocols. Among recent works that use hybrid protocols and do not rely on trusted third parties are Gazelle (Juvekar et al., 2018), Delphi (Mishra et al., 2020), and MP2ML (Boemer et al., 2020). Gazelle, Delphi and MP2ML largely support non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference primarily relating to latency and confidentiality. In particular, Gazelle framework provides homomorphic encryption libraries with low latency implementations of algorithms for single instruction multiple data (SIMD) operations, ciphertext permutation, and homomorphic matrix and convolutional operations, pertinent to convolutional neural networks. Gazelle utilizes kernel methods to evaluate homomorphic operations for linear components of networks, garbled circuits to compute non-linear activation functions confidentially and additive secret sharing to quickly switch between these cryptographic protocols. Delphi builds on Gazelle, optimizing computation of both linear and non-linear com- putations in CNNs by secret sharing model weights in the pre-processing stage to speed up linear computations later, and approximating certain activation functions such as ReLU with polynomials. MP2ML employs nGraph-HE for homomorphic encryption and ABY framework for evaluation of non-linear functions using garbled circuits. B MORE BACKGROUND ON DIFFERENTIAL PRIVACY One of the compelling properties of differential privacy is that it permits the analysis and control of cumulative privacy cost over multiple consecutive computations. For instance, strong composition theorem (Dwork et al., 2010) gives a tight estimate of the privacy cost associated with a sequence of adaptive mechanisms {Mi}i∈I . Theorem 1 (Strong Composition). For ε, δ, δ′ ≥ 0, the class of (ε, δ)-differentially private mechanisms satisfies (ε′, kδ + δ′)-differential privacy under k-fold adaptive composition for: ε′ = ε √ 2k log(1/δ′) + kε(eε − 1) (2) To facilitate the evaluation of privacy leakage resulted by a randomized mechanismM, it is helpful to explicitly define its corresponding privacy loss cM and privacy loss random variableCM. Particularly, the fact thatM is (ε, δ)-differentially private is equivalent to a certain tail bound on CM. Definition 2 (Privacy Loss). Given a pair of adjacent datasets d, d′ ∈ D and an auxiliary input aux, the privacy loss cM of a randomized mechanismM evaluated at an outcome o ∈ R is defined as: cM(o | aux, d, d′) , log Pr[M(aux, d) = o] Pr[M(aux, d′) = o] (3) For an outcome o ∈ R sampled fromM(d), CM(aux, d, d′) takes the value cM(o | aux, d, d′). Based on the definition of privacy loss, Abadi et al. 
(Abadi et al., 2016) introduced the moments accountant to track higher-order moments of the privacy loss random variable and achieved even tighter privacy bounds for k-fold adaptive mechanisms. Definition 3 (Moments Accountant). Given any adjacent datasets d, d′ ∈ D and any auxiliary input aux, the moments accountant of a randomized mechanism M is defined as: αM(λ) ≜ max_{aux, d, d′} αM(λ | aux, d, d′) (4) where αM(λ | aux, d, d′) ≜ log E[exp(λ CM(aux, d, d′))] is obtained by taking the logarithm of the moment generating function of the privacy loss random variable. As a natural relaxation of the conventional (ε, δ)-differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition. Definition 4 (Rényi Divergence). For two probability distributions P and Q defined over R, the Rényi divergence of order λ > 1 between them is defined as: Dλ(P || Q) ≜ (1/(λ − 1)) log E_{x∼Q}[(P(x)/Q(x))^λ] = (1/(λ − 1)) log E_{x∼P}[(P(x)/Q(x))^(λ−1)] (5) Definition 5 (Rényi Differential Privacy). A randomized mechanism M is said to satisfy ε-Rényi differential privacy of order λ, or (λ, ε)-RDP for short, if for any adjacent datasets d, d′ ∈ D: Dλ(M(d) || M(d′)) = (1/(λ − 1)) log E_{x∼M(d)}[(Pr[M(d) = x] / Pr[M(d′) = x])^(λ−1)] ≤ ε (6) Theorem 2 (From RDP to DP). If a randomized mechanism M guarantees (λ, ε)-RDP, then it also satisfies (ε + log(1/δ)/(λ − 1), δ)-differential privacy for any δ ∈ (0, 1). Building upon the moments accountant and RDP techniques, Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017) provides a flexible approach to training machine learning models with strong privacy guarantees. Precisely, rather than directly learning from labeled private data, the model that gets released instead learns from unlabeled public data by querying a teacher ensemble for predicted labels. Models in the ensemble are themselves trained on disjoint partitions of the private dataset, while privacy guarantees are enabled by applying the Laplace mechanism to the ensemble’s aggregated label counts. Coupled with data-dependent privacy analysis, PATE achieves a tighter estimate of the privacy loss associated with label queries, especially when the consensus among teacher models is strong. Given this motivation, the follow-up work on PATE (Papernot et al., 2018) further improves the privacy bound both by leveraging a more concentrated noise distribution to strengthen consensus and by rejecting queries that lack consensus. C MORE BACKGROUND ON ACTIVE LEARNING Active learning, sometimes referred to as query learning, exploits the intuition that machine learning algorithms will be able to learn more efficiently if they can actively select the data from which they learn. For certain supervised learning tasks, this insight has particularly important implications, as labeled data rarely exist in abundance and data labeling can be very demanding (Settles, 2009). In order to pick queries that will most likely contribute to model learning, various pool sampling methods have been proposed to estimate the informativeness of unlabeled samples. Uncertainty-based approaches (Lewis & Gale, 1994), such as margin sampling and entropy sampling, typically achieve a satisfactory trade-off between sample utility and computational efficiency. We also explore a core-set approach to active learning using greedy-k-center sampling (Sener & Savarese, 2017). Definition 6 (Margin Sampling (Scheffer et al., 2001)).
Given an unlabeled dataset d and a classification model with conditional label distribution Pθ(y |x), margin sampling outputs the most informative sample: x∗ = arg min x∈d Pθ(ŷ1 |x)− Pθ(ŷ2 |x) (7) where ŷ1 and ŷ2 stand for the most and second most probable labels for x, according to the model. Definition 7 (Entropy Sampling). Using the setting and notations in Definition 6, margin sampling can be generalized by using entropy (Shannon, 1948) as an uncertainty measure as follows: x∗ = arg max x∈d − ∑ i Pθ(yi |x) logPθ(yi |x) (8) where yi ranges over all possible labels. Definition 8 (Greedy-K-center Sampling). We aim to solve the k-center problem defined by Farahani & Hekmatfar (2009), which is, intuitively, the problem of picking k center points that minimize the largest distance between a data point and its nearest center. Formally, this goal is defined as min S:|S∪D|≤k max i min j∈S∪D ∆(xi,xj) (9) where D is the current training set and S is our new chosen center points. This definition can can be solved greedily as shown in (Sener & Savarese, 2017). D MORE BACKGROUND ON FAIRNESS Due to the imbalance in sample quantity and learning complexity, machine learning models may have disparate predictive performance over different classes or demographic groups, resulting in unfair treatment of certain population. To better capture this phenomenon and introduce tractable countermeasures, various fairness-related criteria have been proposed, including balanced accuracy, demographic parity, equalized odds (Hardt et al., 2016), etc. Definition 9 (Balanced Accuracy). Balanced accuracy captures model utility in terms of both accuracy and fairness. It is defined as the average of recall scores obtained on all classes. Among the criteria that aim to alleviate discrimination against certain protected attributes, equalized odds and equal opportunity Hardt et al. (2016) are of particular research interests. Definition 10 (Equalized Odds). A machine learning model is said to guarantee equalized odds with respect to protected attribute A and ground truth label Y if its prediction Ŷ and A are conditionally independent given Y . In the case of binary random variables A, Y, Ŷ , this is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = y ] = Pr [ Ŷ = 1 |A = 1, Y = y ] , y ∈ {0, 1} (10) To put it another way, equalized odds requires the model to have equal true positive rates and equal false positive rates across the two demographic groups A = 0 and A = 1. Definition 11 (Equal Opportunity). Equal opportunity is a relaxation of equalized odds that requires non-discrimination only within a specific outcome group, often referred to as the advantaged group. Using previous notations, the binary case with advantaged group Y = 1 is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = 1 ] = Pr [ Ŷ = 1 |A = 1, Y = 1 ] (11) E PROOF OF CONFIDENTIALITY Here we prove that our protocol described in the main body does not reveal anything except the final noised result to Pi∗ . In can be proven in the standard real-world ideal-world paradigm, where the ideal functionality takes inputs from all parties and sends the final results to Pi∗ . We use A to denote the set of corrupted parties. Below, we describe the simulator (namely S). The simulator strategy depends on if i∗ is corrupted. If i∗ ∈ A, our simulator works as below: 1.a) The simulator simulates what honest parties would do. 1.b) For each i /∈ A, S sends fresh encryption of a random ri to Pi∗ . 1.c) For each i /∈ A, S sends random si to Pi∗ on be half of the 2PC functionality between Pi and Pi∗ . 
2-3 S sends the output of the whole computation to Pi∗ on behalf of the 2PC functionality between PG and Pi∗ If i∗ /∈ A, our simulator works as below: 1.a) If i∗ /∈ A, for each i ∈ A, S computes a fresh encryption of zero and sends it to Pi on behalf of Pi∗ . 1.b) The simulator simulates what honest parties would do. 1.c) For each i ∈ A, S sends random ŝi to Pi on behalf of the 2PC functionality between Pi and Pi∗ . 2-3 The simulator simulates what honest parties would do. Assuming that the underlying encryption scheme is CPA secure and that 2PC protocols used in step 1, 2 and 3 are secure with respect to standard definitions (i.e., reveals nothing beyond the outputs), our simulation itself is perfect. F DETAILS ON EXPERIMENTAL SETUP F.1 MNIST AND FASHION-MNIST We use the same setup as for CIFAR10 and SVHN datasets with the following adjustments. We select K = 250 as the default number of parties. For the imbalanced classes we select classes 1 and 2 for MNIST as well as Trouser and Pullover for Fashion-MNIST. We use the Gaussian noise with σ = 40 (similarly to SVHN). We are left with 1, 000 evaluation data points from the test set (similarly to CIFAR10). We fix the default value of = 2.35 for MNIST and = 3.89 for Fashion-MNIST. We use a variant of the LeNet architecture. F.2 DETAILS ON ARCHITECTURES To train the private models on subsets of datasets, we downsize the standard architectures, such as VGG-16 or ResNet-18. Below is the detailed list of layers in each of the architectures used (generated using torchsummary). The diagram for ResNet-10 also includes skip connections and convolutional layers for adjusting the sizes of feature maps. VGG-7 for SVHN: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 ReLU-3 [-1, 64, 32, 32] 0 MaxPool2d-4 [-1, 64, 16, 16] 0 Conv2d-5 [-1, 128, 16, 16] 73,728 BatchNorm2d-6 [-1, 128, 16, 16] 256 ReLU-7 [-1, 128, 16, 16] 0 MaxPool2d-8 [-1, 128, 8, 8] 0 Conv2d-9 [-1, 256, 8, 8] 294,912 BatchNorm2d-10 [-1, 256, 8, 8] 512 ReLU-11 [-1, 256, 8, 8] 0 Conv2d-12 [-1, 256, 8, 8] 589,824 BatchNorm2d-13 [-1, 256, 8, 8] 512 ReLU-14 [-1, 256, 8, 8] 0 MaxPool2d-15 [-1, 256, 4, 4] 0 Conv2d-16 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-17 [-1, 512, 4, 4] 1,024 ReLU-18 [-1, 512, 4, 4] 0 Conv2d-19 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-20 [-1, 512, 4, 4] 1,024 ReLU-21 [-1, 512, 4, 4] 0 Linear-22 [-1, 10] 5,130 ================================================================ Total params: 4,507,722 Params size MB: 17.20 ---------------------------------------------------------------- ResNet-10: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 Conv2d-3 [-1, 64, 32, 32] 36,864 BatchNorm2d-4 [-1, 64, 32, 32] 128 Conv2d-5 [-1, 64, 32, 32] 36,864 BatchNorm2d-6 [-1, 64, 32, 32] 128 BasicBlock-7 [-1, 64, 32, 32] 0 Conv2d-8 [-1, 128, 16, 16] 73,728 BatchNorm2d-9 [-1, 128, 16, 16] 256 Conv2d-10 [-1, 128, 16, 16] 147,456 BatchNorm2d-11 [-1, 128, 16, 16] 256 Conv2d-12 [-1, 128, 16, 16] 8,192 BatchNorm2d-13 [-1, 128, 16, 16] 256 BasicBlock-14 [-1, 128, 16, 16] 0 Conv2d-15 [-1, 256, 8, 8] 294,912 BatchNorm2d-16 [-1, 256, 8, 8] 512 Conv2d-17 [-1, 256, 8, 8] 589,824 BatchNorm2d-18 [-1, 256, 8, 8] 512 Conv2d-19 [-1, 256, 8, 8] 32,768 
BatchNorm2d-20 [-1, 256, 8, 8] 512 BasicBlock-21 [-1, 256, 8, 8] 0 Conv2d-22 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-23 [-1, 512, 4, 4] 1,024 Conv2d-24 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-25 [-1, 512, 4, 4] 1,024 Conv2d-26 [-1, 512, 4, 4] 131,072 BatchNorm2d-27 [-1, 512, 4, 4] 1,024 BasicBlock-28 [-1, 512, 4, 4] 0 Linear-29 [-1, 10] 5,130 ================================================================ Total params: 4,903,242 Params size MB: 18.70 ---------------------------------------------------------------- LeNet style architecture for MNIST: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 20, 24, 24] 520 MaxPool2d-2 Conv2d-3 [-1, 50, 8, 8] 25,050 MaxPool2d-4 Linear-5 [-1, 500] 400,500 ReLU-6 Linear-7 [-1, 10] 5,010 ================================================================ Total params: 431,080 Trainable params: 431,080 Non-trainable params: 0 ---------------------------------------------------------------- Input size MB: 0.00 Forward/backward pass size MB: 0.12 Params size MB: 1.64 Estimated Total Size MB: 1.76 ---------------------------------------------------------------- G ADDITIONAL EXPERIMENTS AND FIGURES Number of parties 150 200 250 300 400 Accuracy gain (%) 4.11 3.33 4.50 4.69 8.39 Best ε 4.50 2.50 2.35 2.00 1.63
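For reference, the LeNet-style MNIST architecture listed above can be written in a few lines of PyTorch. This is a sketch reconstructed from the layer shapes and parameter counts in the summary (5x5 kernels and 2x2 max-pooling are inferred from the output shapes), not a copy of the released code.

```python
import torch
import torch.nn as nn

class LeNetMNIST(nn.Module):
    """LeNet-style MNIST model matching the layer summary above
    (20 and 50 conv filters, 5x5 kernels, 2x2 max-pooling, 500 hidden units)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)    # -> 20 x 24 x 24, 520 params
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)   # -> 50 x 8 x 8, 25,050 params
        self.pool = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(50 * 4 * 4, 500)           # 400,500 params
        self.fc2 = nn.Linear(500, num_classes)          # 5,010 params

    def forward(self, x):
        x = self.pool(self.conv1(x))
        x = self.pool(self.conv2(x))
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = LeNetMNIST()
print(sum(p.numel() for p in model.parameters()))   # 431,080, matching the summary's total
```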
1. What is the focus and contribution of the paper regarding confidentiality and privacy in collaborative learning? 2. What are the strengths and weaknesses of the proposed method compared to other works like InstaHide and TextHide? 3. Do you have any concerns or suggestions regarding the presentation of the paper, such as the inclusion of background information on differential privacy and fairness? 4. How does the reviewer assess the relevance and impact of the paper in its field?
Review
Review This work is motivated by healthcare and finance, where separate parties may wish to collaborate and learn from each other's data but are prevented from doing so due to privacy regulations. This paper proposes Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. This work also discusses fairness. I liked this part, since it seems very cool. However, I'm not convinced that the method in this work is better than InstaHide (I could be wrong).
Minor comments
In Section 2.1, the following papers should be discussed, since they propose a way to "encrypt" the images/texts:
InstaHide: Instance-hiding Schemes for Private Distributed Learning. Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora. ICML 2020. https://arxiv.org/abs/2010.02772
TextHide: Tackling Data Privacy in Language Understanding Tasks. Yangsibo Huang, Zhao Song, Danqi Chen, Kai Li, Sanjeev Arora. EMNLP 2020. https://arxiv.org/abs/2010.06053
In Section B, the paper lists many theorems and definitions about differential privacy. In Section C, it lists background on sampling. In Section D, it lists many definitions on fairness. I don't quite see the point of having them in the appendix, since none of them are mentioned in Appendix E, which contains the proof of the main theoretical result in this paper.
This paper is closely related to differential privacy. I think the following paper should also be mentioned somewhere:
Privacy-preserving Learning via Deep Net Pruning. Yangsibo Huang, Yushan Su, Sachin Ravi, Zhao Song, Sanjeev Arora, Kai Li. https://arxiv.org/abs/2003.01876
• Our experiments on SVHN and CIFAR10 demonstrate that CaPC enables participants to collaborate and improve the utility of their models, even in the heterogeneous setting where the architectures of their local models differ, and when there are only a few participants. • Further, when the distribution of data drifts across participating parties, we show that CaPC significantly improves fairness metrics because querying parties benefit from knowledge learned by other parties on different data distributions, which is distilled in their predictions. • We release the source code for reproducing all our experiments. 2 BACKGROUND Before introducing CaPC, we first go over elements of cryptography and differential privacy that are required to understand it. Detailed treatment of these topics can be found in Appendices A and B. 2.1 CRYPTOGRAPHIC PRELIMINARIES FOR CONFIDENTIALITY The main cryptographic tool used in CaPC is secure multi-party computation (MPC) (Yao, 1986). MPC allows a set of distrusting parties to jointly evaluate a function on their input without revealing anything beyond the output. In general, most practical MPC protocols can be classified into two categories: 1) generic MPC protocols that can compute any function with the above security goal (Malkhi et al., 2004); and 2) specialized MPC protocols that can be used to compute only selected functions (e.g., private set intersection (Pinkas et al., 2020), secure machine learning (Mohassel & Zhang, 2017)). Although specialized MPC protocols are less general, they are often more efficient in execution time. Protocols in both categories use similar cryptographic building blocks, including (fully) homomorphic encryption (Gentry, 2009), secret sharing (Shamir, 1979), oblivious transfer (Rabin, 2005), garbled circuits (Yao, 1986). To understand our protocol, it is not necessary to know all details about these cryptographic building blocks and thus we describe them in Appendix A.1. Our work uses these cryptographic preliminaries for secure computation at prediction time, unlike recent approaches, which explore new methods to achieving confidentiality at training time (Huang et al., 2020a;b). The cryptographic protocol designed in this paper uses a specialized MPC protocol for securely evaluating a private ML model on private data, and a generic two-party computation protocol to compute an argmax in different forms. For the generic two-party computation, we use a classical Yao’s garbled-circuit protocol that can compute any function in Boolean circuit. For secure classification of neural networks, our protocol design is flexible to work with most existing protocols (Boemer et al., 2020; 2019; Gilad-Bachrach et al., 2016; Mishra et al., 2020). Most existing protocols are different in how they handle linear layers (e.g. convolution) and non-linear layers (e.g. ReLU). For instance, one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation, or using classical MPC techniques with more communication but less computation. Other works (Juvekar et al., 2018) use a hybrid of both and thus enjoy further improvement in performance (Mishra et al., 2020). We discuss it in more details in Appendix A.2. 2.2 DIFFERENTIAL PRIVACY Differential privacy is the established framework for measuring the privacy leakage of a randomized algorithm (Dwork et al., 2006). 
In the context of machine learning, it requires the training algorithm to produce statistically indistinguishable outputs on any pair of datasets that only differ by one data point. This implies that an adversary observing the outputs of the training algorithm (e.g., the model’s parameters, or its predictions) can improve its guess at most by a bounded probability when inferring properties of the training data points. Formally, we have the following definition. Definition 1 (Differential Privacy). A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any subset S ⊆ R and any adjacent datasets d, d′ ∈ D, i.e. ‖d− d′‖1 ≤ 1, the following inequality holds: Pr [M(d) ∈ S] ≤ eεPr [M(d′) ∈ S] + δ (1) In our work, we obtain differential privacy by post-processing the outputs of an ensemble of models with the noisy argmax mechanism of Dwork et al. (2014) (for more details on differential privacy, please refer to Appendix B), à la PATE (Papernot et al., 2017). We apply the improved analysis of PATE (Papernot et al., 2018) to compute the privacy guarantees obtained (i.e., a bound on ε). Our technique differs from PATE in that each of the teacher models is trained by different parties whereas PATE assumes a centralized learning setting where all of the training and inference is performed by a single party. Note that our technique is used at inference time, which differs from recent works in differential privacy that compare neuron pruning during training with mechanisms satisfying differential privacy (Huang et al., 2020c). We use cryptography to securely decentralize computations. 3 THE CAPC PROTOCOL We now introduce our protocol for achieving both confidentiality and privacy in collaborative (CaPC) learning. To do so, we formalize and generalize our example of collaborating hospitals from Section 1. 3.1 PROBLEM DESCRIPTION A small number of parties {Pi}i∈[1,K], each holding a private dataset Di = {(xj , yj or∅)j∈[1,Ni]} and capable of fitting a predictive modelMi to it, wish to improve the utility of their individual models via collaboration. Due to the private nature of the datasets in question, they cannot directly share data or by-products of data (e.g., model weights) with each other. Instead, they will collaborate by querying each other for labels of the inputs about which they are uncertain. In the active learning paradigm, one party Pi∗ poses queries in the form of data samples x and all the other parties {Pi}i 6=i∗ together provide answers in the form of predicted labels ŷ. Each model {Mi}i∈[1,K] can be exploited in both the querying phase and the answering phase, with the querying party alternating between different participants {Pi}i∈[1,K] in the protocol. Threat Model. To obtain the strong confidentiality and privacy guarantees that we described, we require a semi-trusted third party called the privacy guardian (PG). We assume that the PG does not collude with any party and that the adversary can corrupt any subset of C parties {Pi}i∈[1,C]. When more than one party gets corrupted, this has no impact on the confidentiality guarantee, but the privacy budget obtained will degrade by a factor proportional to C because the sensitivity of the aggregation mechanism increases (see Section 3.3). We work in the honest-but-curious setting, a commonly adopted assumption in cryptography which requires the adversary to follow the protocol description correctly but will try to infer information from the protocol transcript. 
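Before walking through the protocol, it may help to see the noisy argmax primitive from Section 2.2 in isolation. The following is a minimal plaintext sketch, not the CaPC implementation: it builds the histogram of teacher votes (sensitivity 1), perturbs it with calibrated noise, and releases only the argmax. The function name, the Gaussian noise scale, and the use of NumPy are illustrative assumptions.

import numpy as np

def noisy_argmax(teacher_labels, num_classes, sigma=40.0, rng=None):
    # Histogram of one-hot teacher votes; changing one training point changes
    # at most one teacher's vote, so the histogram has sensitivity 1.
    rng = np.random.default_rng() if rng is None else rng
    histogram = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    noisy_histogram = histogram + rng.normal(scale=sigma, size=num_classes)
    return int(np.argmax(noisy_histogram))

# Example: 250 answering parties vote on a 10-class query.
rng = np.random.default_rng(0)
votes = rng.integers(low=0, high=10, size=250)
print(noisy_argmax(votes, num_classes=10, sigma=40.0, rng=rng))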
3.2 CAPC PROTOCOL DESCRIPTION Our protocol introduces a novel formulation of the private aggregation of teachers, which implements two-party confidential inference and secret sharing to improve upon the work of Papernot et al. (2017) and guarantee confidentiality. Recall that the querying party Pi∗ initiates the protocol by sending an encrypted input x to all answering parties Pi, i 6= i∗. We use sk and pk to denote the secret and public keys owned by party Pi∗ . The proposed protocol consists of the following steps: 1. For each i 6= i∗, Pi (with model parametersMi as its input) and Pi∗ (with x, sk, pk as its input) run a secure two-party protocol. As the outcome, Pi obtains ŝi and Pi∗ obtains si such that si + ŝi = OneHot(arg max(ri)) where ri are the predicted logits. This step could be achieved by the following: a) Pi∗ and Pi run a secure two-party ML classification protocol such that Pi∗ learns nothing while Pi learns Encpk(ri), where ri are the predicted logits. b) Pi generates a random vector r̂i , performs the following computation on the encrypted data Encpk(ri)− Encpk(r̂i) = Encpk(ri − r̂i), and sends the encrypted difference to Pi∗ , who decrypts and obtains (ri − r̂i). c) Pi (with r̂i as input) and Pi∗ (with ri − r̂i as input) engage in Yao’s two-party garbledcircuit protocol to obtain vector si for Pi∗ and vector ŝi for Pi, such that si + ŝi = OneHot(arg max(ri)). 2. Pi sends ŝi to the PG. The PG computes ŝ = ∑ i 6=i∗ ŝi + DPNoise( ), where DPNoise() is element-wise Laplacian or Gaussian noise whose variance is calibrated to obtain a desired differential privacy guarantee ε; whereas Pi∗ computes s = ∑ i6=i∗ si. 3. The PG and Pi∗ engage in Yao’s two-party garbled-circuit protocol for computing the argmax: Pi∗ gets arg max(ŝ + s) and the PG gets nothing. Next, we elaborate on the confidentiality and privacy guarantees achieved by CaPC. 3.3 CONFIDENTIALITY AND DIFFERENTIAL PRIVACY GUARANTEES Confidentiality Analysis. We prove in Appendix E that the above protocol reveals nothing to Pi or the PG and only reveals the final noisy results to Pi∗ . The protocol is secure against a semi-honest adversary corrupting any subset of parties. Intuitively, the proof can be easily derived based on the security of the underlying components, including two-party classification protocol, secret sharing, and Yao’s garbled circuit protocol. As discussed in Section 4.1 and Appendix A.1, for secret sharing of unbounded integers, we need to make sure the random padding is picked from a domain much larger than the maximum possible value being shared. Given the above, a corrupted Pi∗ cannot learn anything aboutMi of the honest party due to the confidentiality guarantee of the secure classification protocol; similarly, the confidentiality of x against corrupted Pi is also protected. Intermediate values are all secretly shared (and only recovered within garbled circuits) so they are not visible to any party. Differential Privacy Analysis. Here, any potential privacy leakage in terms of differential privacy is incurred by the answering parties {Pi}i6=i∗ for their datasets {Di}i 6=i∗ , because these parties share the predictions of their models. Before sharing these predictions to Pi∗ , we follow the PATE protocol: we compute the histogram of label counts ŷ, then add Laplacian or Gaussian noise using a sensitivity of 1, and finally return the argmax of ŷσ to Pi∗ . 
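To make the share arithmetic in Steps 1-3 concrete, the sketch below simulates, in plaintext and with no cryptography, the quantities the protocol computes: each answering party's one-hot vote is split into two additive shares, the querying party and the privacy guardian each sum their respective shares, the guardian adds calibrated noise, and the final argmax is taken over the recombined sums. Real CaPC produces the shares inside HE and garbled circuits and shares integers over a prime field (Section 4.1); the real-valued shares, party count, and noise scale here are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)
num_parties, num_classes, sigma = 5, 10, 1.0

s_query = np.zeros(num_classes)      # querying party's running sum of shares s_i
s_guardian = np.zeros(num_classes)   # privacy guardian's running sum of shares s_hat_i
for _ in range(num_parties):
    r = rng.normal(size=num_classes)                # party's predicted logits r_i
    one_hot = np.eye(num_classes)[np.argmax(r)]     # OneHot(argmax(r_i))
    s_hat = rng.normal(size=num_classes)            # random share kept by the answering party
    s = one_hot - s_hat                             # share handed to the querying party
    assert np.allclose(s + s_hat, one_hot)          # the two shares recombine to the one-hot vote
    s_query += s                                    # neither running sum alone reveals any party's vote
    s_guardian += s_hat

s_guardian += rng.normal(scale=sigma, size=num_classes)   # DPNoise added by the guardian
label = int(np.argmax(s_query + s_guardian))               # Step 3: argmax over the recombined shares
print(label)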
Since Pi∗ only sees this noisily aggregated label, both the data-dependent and data-independent differential privacy analysis of PATE apply to Pi∗ (Papernot et al., 2017; 2018). Thus, when there are enough parties with high consensus, we can obtain a tighter bound on the privacy budget as the true plurality will more likely be returned (refer to Appendix B for more details on how this is achieved in PATE). This setup assumes that only one answering party can be corrupted. If instead C parties are corrupted, the sensitivity of the noisy aggregation mechanism will be scaled by C and the privacy guarantee will deteriorate. There is no privacy leakage to the PG; it does not receive any part of the predictions from {Pi}i 6=i∗ . 4 EXPERIMENTS CaPC aims to improve the model utility of collaborating parties by providing them with new labelled data for training their respective local models. Since we designed the CaPC protocol with techniques for confidentiality (i.e., confidential inference and secret sharing) and differential privacy (i.e., private aggregation), our experiments consider the following three major dimensions: 1. How well does collaboration improve the model utility of all participating parties? 2. What requirements are there to achieve privacy and how can these be relaxed under different circumstances? What is the trade-off between the privacy and utility provided by CaPC? 3. What is the resulting computational cost for ensuring confidentiality? 4.1 IMPLEMENTATION We use the HE-transformer library with MPC (MP2ML) by Boemer (2020) in step 1a of our protocol for confidential two-party deep learning inference. To make our protocol flexible to any private inference library, not just those that return the label predicted by the model (HE-transformer only returns logits), we incorporate steps 1b and 1c of the protocol outside of the private inference library. The EMP toolkit (Wang et al., 2016) for generic two-party computation is used to compute the operations including argmax and sum via the garbled circuits. To secret share the encrypted values, we first convert them into integers over a prime field according to the CKKS parameters, and then perform secret sharing on that domain to obtain perfect secret sharing. We use the single largest logit value for eachMi obtained on its training set Di in plain text to calculate the necessary noise. 4.2 EVALUATION SETUP Collaboration. We use the following for experiments unless otherwise noted. We uniformly sample from the training set in use2, without replacement, to create disjoint partitions, Di, of equal size and identical data distribution for each party. We select K = 50 and K = 250 as the number of parties for CIFAR10 and SVHN, respectively (the number is larger for SVHN because we have more data). We select Q = 3 querying parties, Pi∗ , and similarly divide part of the test set into Q separate private pools for each Pi∗ to select queries, until their privacy budget of is reached (using Gaussian noise with σ = 40 on SVHN and 7 on CIFAR10). We are left with 1, 000 and 16, 032 evaluation data points from the test set of CIFAR10 and SVHN, respectively. We fix = 2 and 20 for SVHN and CIFAR10, respectively (which leads to ≈ 550 queries per party), and report accuracy on the evaluation set. Querying models are retrained on their Di plus the newly labelled data; the difference in accuracies is their accuracy improvement. 
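The partitioning just described is easy to reproduce; the sketch below is only illustrative (the pool sizes, seed, and helper name are assumptions, not the authors' code). It splits the training indices into K equal, disjoint, uniformly sampled partitions, one per party, and carves separate query pools for the Q querying parties out of part of the test set.

import numpy as np

def partition_indices(num_examples, num_parts, seed=0):
    # Shuffle once uniformly, then cut into disjoint, equally sized slices.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_examples)
    per_part = num_examples // num_parts
    return [perm[i * per_part:(i + 1) * per_part] for i in range(num_parts)]

K, Q = 50, 3                                        # CIFAR10 setting from Section 4.2
train_parts = partition_indices(50_000, K)          # one private partition D_i per party
query_pools = partition_indices(9_000, Q, seed=1)   # query pools from part of the test set (size assumed)
assert not set(train_parts[0]) & set(train_parts[1])  # partitions are disjoint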
We use shallower variants of VGG, namely VGG-5 and VGG-7 for CIFAR10 and SVHN, respectively, to accommodate the small size of each party’s private dataset. We instantiate VGG-7 with 6 convolutional layers and one final fully-connected layer, thus there are 7 functional layers overall. Similarly, VGG-5 has 4 convolutional layers followed by a fully connected layer. The ResNet-10 architecture starts with a single convolutional layer, followed by 4 basic blocks with 2 convolutional layers in each block, and ends with a fully-connected layer, giving 10 functional layers in total. The ResNet-8 architecture that we use excludes the last basic block and increases the number of neurons in the last (fully-connected) layer. We present more details on architectures in Appendix F.2. We first train local models for all parties using their non-overlapping private datasets. Next, we run the CaPC protocol to generate query-answer pairs for each querying party. Finally, we retrain the local model of each querying party using the combination of their original private dataset and the newly obtained query-answer pairs. We report the mean accuracy and class-specific accuracy averaged over 5 runs for all retrained models, where each uses a different random seed. Heterogeneity and Data Skew. Where noted, our heterogeneous experiments (recall that this is a newly applicable setting that CaPC enables) use VGG-7, ResNet-8 and ResNet-10 architectures for K 3 parties, each. One model of each architecture is used for each of Q = 3 querying parties. Our data skew experiments use 80% less data samples for the classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and 90% less data for the classes 1 and 2 on SVHN. In turn, unfair ML algorithms perform worse on these specific classes, leading to worse balanced accuracy (see Appendix D). We adopt balanced accuracy instead of other fairness metrics because the datasets we use have no sensitive attributes, making them inapplicable. We employ margin, entropy, and greedy k-center active learning strategies 2For the SVHN dataset, we combine its original training set and extra set to get a larger training set. (described in Appendix C) to encourage ML algorithms to sample more queries from regimes that have been underrepresented and to improve their fairness performance. 4.3 COLLABORATION ANALYSIS We first investigate the benefits of collaboration for improving each party’s model performance in several different settings, namely: homogeneous and heterogeneous model architectures across querying and answering parties, and uniform and non-uniform data sampling for training data. From these experiments, we observe: increased accuracy in both homogeneous settings and heterogeneous settings to all model architectures (Section 4.3.1) and improved balanced accuracy when there is data skew between parties, i.e., non-uniform private data (Section 4.3.2). 4.3.1 UNIFORMLY SAMPLED PRIVATE DATA The first setting we consider is a uniform distribution of data amongst the parties—there is no data drift among parties. Our set up for the uniform data distribution experiments is detailed in Section 4.2. We evaluate the per-class and overall accuracy before and after CaPC in both homogeneous and heterogeneous settings on the CIFAR10 and SVHN datasets. In Figure 2, we see there is a consistent increase in accuracy for each class and overall in terms of mean accuracy across all parties on the test sets. 
We observe these improvements in both the homogeneous and heterogeneous settings for both datasets tested. As demonstrated in Figure 2, there is a greater climb in mean accuracy for the heterogeneous setting than the homogeneous setting on SVHN. Figures 5, 6, and 7 provide a breakdown of the benefits obtained by each querying party. We can see from these figures that all querying parties observe an increase in overall accuracy in heterogeneous and homogeneous settings with both datasets; additionally, the jump in accuracy is largely constant between different model architectures. In only 6.67% of all cases were any class-specific accuracies degraded, but they still showed a net increase in overall model accuracy. 4.3.2 NON-UNIFORMLY SAMPLED PRIVATE DATA In this section, we focus our analysis on two types of data skew between parties: varying size of data per class and total size of data provided; the setup is described in Section 4.2. To analyze data skew, we explore the balanced accuracy (which measures mean recall on a per-class basis, see Appendix D). We use balanced accuracy in order to investigate aggregate fairness gains offered by CaPC. Random sampling from non-uniform distributions leads to certain pitfalls: e.g., underrepresented classes are not specifically targeted in sampling. Thus, we additionally utilize active learning techniques, namely entropy, margin, and greedy-k-center (see Definitions 6-8 in Appendix C), and analyze balanced accuracy with each strategy. In Figure 3, we see that CaPC has a significant impact on the balanced accuracy when there is data skew between the private data of participating parties. Even random sampling can drastically improve balanced accuracy. Leveraging active learning techniques, we can achieve additional benefits in balanced accuracy. In particular, we observe that entropy and margin sampling achieves the greatest improvement over random sampling in per-class accuracy for the less represented classes ‘horse’, ‘ship’, and ‘truck’ on CIFAR10 and classes 1 and 2 on SVHN. These enhancements can be explained by the underlying mechanisms of margin and entropy sampling because the less-represented classes have a higher margin/entropy; the queries per class for each method are shown in Figure 9. Through these experiments, we show that in data skew settings, the CaPC protocol can significantly improve the fair performance of models (as measured by balanced accuracy), especially when combined with active learning techniques. Note that we see similar trends with (normal) accuracy as well. 4.4 PRIVACY VERSUS UTILITY We now study the trade-off between privacy and utility of our obtained models. Recall that we add Gaussian (or Laplacian) noise to the aggregate of predicted labels of all parties. Under the uniform setting, we choose the standard deviation σ by performing a (random) grid search and choosing the highest noise before a significant loss in accuracy is observed. In doing so, each query uses minimal ε while maximizing utility. Figure 11 in Appendix F shows a sample plot for K = 250 models. For more details on how ε is calculated, please refer to Appendix B. As we increase the number of parties, we can issue more queries for a given privacy budget (ε) which leads to a higher accuracy gain. In Figure 4, we report the accuracy gain achieved using CaPC with various numbers of parties, K. With a fixed total dataset size, increasing the number of parties decreases their training data size, leading to worse performing models. 
These models see the largest benefit from CaPC but, importantly, we always see a net improvement across all values of K. Number of parties 150 200 250 300 400 Accuracy gain (%) 0.62 1.45 2.39 3.07 3.87 Best ε 3.50 3.32 2.60 2.40 1.91 4.5 COMPUTATIONAL COSTS OF CONFIDENTIALITY The incorporation of confidentiality in CaPC increases computational costs. We segment the analysis of computational overhead of CaPC into three parts corresponding to sequential steps in the protocol: (1) inference, (2) secret sharing between each querying and answering party, and (3) secret sharing between the querying party and the PG. Each of these steps is analyzed in terms of the wall-clock time (in seconds). We use the default encryption setting in HE-transformer and vary the modulus range, N , which denotes the max value of a given plain text number to increase the maximum security level possible. HE-transformer only supports inference on CPUs and is used in step (1). Step (1) with neural network inference using MPC incurs the highest CPU and network costs (see Table 1 and Figure 13 in Appendix F). Even the base level of security increases computational cost by 100X, and high security levels see increases up to 1000X, in comparison to the non-encrypted inference on CPU. Compared to step (1), the rest of the CaPC protocol incurs a negligible overhead to perform secret sharing. Overall, CaPC incurs only a low additional cost over the underlying MP2ML framework, as shown in Figure 13, which enables applicability and scalability as these tools progress. 5 DISCUSSION AND CONCLUSIONS CaPC is a secure and private protocol that protects both the confidentiality of test data and the privacy of training data, which are desired in applications like healthcare and finance. Our framework facilitates collaborative learning using heterogeneous model architectures and separate private datasets, even if the number of parties involved is small. It offers notable advantages over recent methods for learning with multiple participants, such as federated learning, which assumes training of a single fixed model architecture. CaPC does not assume a homogeneous model architecture and allows parties to separately and collaboratively train different models optimized for their own purposes. Federated learning also requires a large number of parties while CaPC provides gains in accuracy with significantly fewer participants, even in contexts where each party already has a model with high accuracy. Notably, CaPC incurs low overhead on top of underlying tools used for secure neural network inference. Through our experiments, we also demonstrate that CaPC facilitates collaborative learning even when there exists non i.i.d (highly skewed) private data among parties. Our experiments show that CaPC improves on the fair performance of participating querying models as indicated by improvements in the balanced accuracy, a common fairness metric. Further, we observe a significant increase in per-class accuracy on less-represented classes on all datasets tested. Notably, CaPC is easily configured to leverage active learning techniques to achieve additional fairness improvement gains or to learn from other heterogeneous models trained with fairness techniques, e.g., with synthetic minority oversampling (Chawla et al., 2002). In future work, we look to analyzing the fairness implications of CaPC in contexts where there is discrimination over a private dataset’s sensitive attributes, not just class labels. 
In these cases, other fairness metrics like equalized odds and equal opportunity (see Appendix D) can be explored. We note some limitations of the proposed protocol. HE-transformer does not prevent leaking certain aspects of the model architecture, such as the type of non-linear activation functions and presence of MaxPooling layers. CaPC improves upon existing methods in terms of the necessary number of parties; however, it would be favorable to see this number decreased under 50 for better flexibility and applicability in practice. In the face of this last limitation, when there are few physical parties, we can generate a larger number of virtual parties for CaPC, where each physical party subdivides their private dataset into disjoint partitions and trains multiple local models. This would allow CaPC to tolerate more noise injected during aggregation and provide better privacy guarantees. Note that each physical party could select queries using a dedicated strong model instead of the weak models used for answering queries in CaPC. This setting is desirable in cases where separate models are required within a single physical party, for example, in a multi-national organization with per-country models. ACKNOWLEDGMENTS We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Microsoft, Intel, CIFAR through the Canada CIFAR AI Chair and AI catalyst programs, NFRF through an Exploration grant, and NSERC COHESA Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partners. Finally, we would like to thank members of CleverHans Lab for their feedback, especially: Tejumade Afonja, Varun Chandrasekaran, Stephan Rabanser, and Jonas Guan. A MORE BACKGROUND ON CRYPTOGRAPHY A.1 CRYPTOGRAPHIC BUILDING BLOCKS Homomorphic encryption. Homomorphic encryption defines an encryption scheme such that the encryption and decryption functions are homomorphic between plaintext and ciphertext spaces. Although it is known that fully homomorphic encryption can be constructed based on lattice-based assumptions, most applications only require a weaker version with bounded number of multiplications on each ciphertext. Schemes with this constraint are much more practical, including for example, BGV (Brakerski et al., 2014), CKKS (Cheon et al., 2017), etc. Secret sharing. Secret sharing denotes a scheme in which a datum, the secret, is shared amongst a group of parties by dividing the secret into parts such that each party only has one part, or ‘share’ of the secret. The secret can only be recovered if a certain number of parties conspire to combine their shares. It is easy to construct secret sharing modulo a positive integer. If the application does not allow modular operation, one can still achieve statistically secure secret sharing by using random shares that are much larger than the secret being shared (Evans et al., 2011). Oblivious transfer. Oblivious transfer involves two parties: the sending party and the receiving party. The sending party has two pieces of information, s0 and s1, and the receiver wants to receive sb, where b ∈ {0, 1}, such that the sending party cannot learn b and the receiving party cannot learn s¬b. 
In general, oblivious transfer requires public-key operations, however, it is possible to execute a large number of oblivious transfers with only a very small number of public-key operations based on oblivious transfer extension (Ishai et al., 2003). Garbled circuits. In Yao’s garbled circuit protocol for two-party computation, each of the two parties assumes a role, that of garbler or that of evaluator. The function f on which to compute each of the two parties’ inputs is described as a Boolean circuit. The garbler randomly generates aliases (termed labels) representing 0 and 1 in the Boolean circuit describing f and replaces the binary values with the generated labels for each wire in the circuit. At each gate in the circuit, which can be viewed as a truth table, the garbler uses the labels of each possible combination of inputs to encrypt the corresponding outputs, and permutes the rows of the truth table. The garbler then uses the generated labels for 0 and 1 to encode their own input data and sends these labels and the garbled Boolean circuit to the evaluator. The evaluator now converts their binary input data to the corresponding labels through a 1-2 oblivious transfer protocol with the garbler. After receiving the labels for their input, the evaluator evaluates the garbled circuit by trying to decrypt each row in the permutable truth tables at each gate using the input labels; only one row will be decryptable at each gate, which is the output label for the outgoing wire from the gate. The evaluator eventually finishes evaluating the garbled circuit and obtains the label for the output of the function f computed on the garbler’s and the evaluator’s input. The garbler then must provide the true value for the output label so that both parties can get the output. A.2 PROTECTING CONFIDENTIALITY USING MPC Neural networks present a challenge to secure multi-party computation protocols due to their unique structure and exploitative combination of linear computations and non-linear activation functions. Cryptographic inference with neural networks can be considered in two party computation case in which one party has confidential input for which they wish to obtain output from a model and the other party stores the model; in many cases the party storing the model also wishes that the model remains secure. Confidential learning and inference with neural networks typically uses homomorphic encryption (HE) or secure multi-party computation (MPC) methods. Many libraries support pure HE or MPC protocols for secure inference of neural networks; a comprehensive list can be viewed in (Boemer et al., 2020). Notably, libraries such as nGraph-HE (Boemer et al., 2019) and CryptoNets (GiladBachrach et al., 2016) provide pure homomorphic encryption solutions to secure neural network inference. nGraph-HE, an extension of graph compiler nGraph, allows secure inference of DNNs through linear computations at each layer using CKKS homomorphic encryption scheme (Cheon et al., 2017; Boemer et al., 2019). CryptoNets similarly permit confidential neural network inference using another leveled homomorphic encryption scheme, YASHE’ (Gilad-Bachrach et al., 2016). On the other hand, several libraries employing primarily MPC methods in secure NN inference frameworks rely on ABY, a tool providing support for common non-polynomial activation functions in NNs through use of both Yao’s GC and GMW. 
In DL contexts, while pure homomorphic encryption methods maintain model security, their failure to support common non-polynomial activation functions leads to leaking of pre-activation values (feature maps at hidden layers). Tools that use solely MPC protocols avoid leaking pre-activation values as they can guarantee data confidentiality on non-polynomial activation functions but may compromise the security of the model architecture by leaking activation functions or model structure. Recent works on secure NN inference propose hybrid protocols that combine homomorphic encryption schemes, and MPC methods to build frameworks that try to reduce leakages common in pure HE and MPC protocols. Among recent works that use hybrid protocols and do not rely on trusted third parties are Gazelle (Juvekar et al., 2018), Delphi (Mishra et al., 2020), and MP2ML (Boemer et al., 2020). Gazelle, Delphi and MP2ML largely support non-polynomial activation functions encountered in convolutional neural networks, such as maximum pooling and rectified linear unit (ReLU) operations. Gazelle introduced several improvements over previous methods for secure NN inference primarily relating to latency and confidentiality. In particular, Gazelle framework provides homomorphic encryption libraries with low latency implementations of algorithms for single instruction multiple data (SIMD) operations, ciphertext permutation, and homomorphic matrix and convolutional operations, pertinent to convolutional neural networks. Gazelle utilizes kernel methods to evaluate homomorphic operations for linear components of networks, garbled circuits to compute non-linear activation functions confidentially and additive secret sharing to quickly switch between these cryptographic protocols. Delphi builds on Gazelle, optimizing computation of both linear and non-linear com- putations in CNNs by secret sharing model weights in the pre-processing stage to speed up linear computations later, and approximating certain activation functions such as ReLU with polynomials. MP2ML employs nGraph-HE for homomorphic encryption and ABY framework for evaluation of non-linear functions using garbled circuits. B MORE BACKGROUND ON DIFFERENTIAL PRIVACY One of the compelling properties of differential privacy is that it permits the analysis and control of cumulative privacy cost over multiple consecutive computations. For instance, strong composition theorem (Dwork et al., 2010) gives a tight estimate of the privacy cost associated with a sequence of adaptive mechanisms {Mi}i∈I . Theorem 1 (Strong Composition). For ε, δ, δ′ ≥ 0, the class of (ε, δ)-differentially private mechanisms satisfies (ε′, kδ + δ′)-differential privacy under k-fold adaptive composition for: ε′ = ε √ 2k log(1/δ′) + kε(eε − 1) (2) To facilitate the evaluation of privacy leakage resulted by a randomized mechanismM, it is helpful to explicitly define its corresponding privacy loss cM and privacy loss random variableCM. Particularly, the fact thatM is (ε, δ)-differentially private is equivalent to a certain tail bound on CM. Definition 2 (Privacy Loss). Given a pair of adjacent datasets d, d′ ∈ D and an auxiliary input aux, the privacy loss cM of a randomized mechanismM evaluated at an outcome o ∈ R is defined as: cM(o | aux, d, d′) , log Pr[M(aux, d) = o] Pr[M(aux, d′) = o] (3) For an outcome o ∈ R sampled fromM(d), CM(aux, d, d′) takes the value cM(o | aux, d, d′). Based on the definition of privacy loss, Abadi et al. 
(Abadi et al., 2016) introduced the moments accountant to track higher-order moments of privacy loss random variable and achieved even tighter privacy bounds for k-fold adaptive mechanisms. Definition 3 (Moments Accountant). Given any adjacent datasets d, d′ ∈ D and any auxiliary input aux, the moments accountant of a randomized mechanismM is defined as: αM(λ) , max aux,d,d′ αM(λ | aux, d, d′) (4) where αM(λ | aux, d, d′) , logE[exp(λCM(aux, d, d′))] is obtained by taking the logarithm of the privacy loss random variable. As a natural relaxation to the conventional (ε, δ)-differential privacy, Rényi differential privacy (RDP) (Mironov, 2017) provides a more convenient and accurate approach to estimating privacy loss under heterogeneous composition. Definition 4 (Rényi Divergence). For two probability distributions P and Q defined over R, the Rényi divergence of order λ > 1 between them is defined as: Dλ(P ||Q) , 1 λ− 1 logEx∼Q [ (P (x)/Q(x))λ ] = 1 λ− 1 logEx∼P [ (P (x)/Q(x))λ−1 ] (5) Definition 5 (Rényi Differential Privacy). A randomized mechanismM is said to satisfy ε-Rényi differential privacy of order λ, or (λ, ε)-RDP for short, if for any adjacent datasets d, d′ ∈ D: Dλ(M(d) ||M(d′)) = 1 λ− 1 logEx∼M(d) [( Pr[M(d) = x] Pr[M(d′) = x] )λ−1] ≤ ε (6) Theorem 2 (From RDP to DP). If a randomized mechanismM guarantees (λ, ε)-RDP, then it also satisfies (ε+ log(1/δ)λ−1 , δ)-differential privacy for any δ ∈ (0, 1). Building upon the moments accountant and RDP techniques, Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017) provides a flexible approach to training machine learning models with strong privacy guarantees. Precisely, rather than directly learning from labeled private data, the model that gets released instead learns from unlabeled public data by querying a teacher ensemble for predicted labels. Models in the ensemble are themselves trained on disjoint partitions of the private dataset, while privacy guarantees are enabled by applying the Laplace mechanism to the ensemble’s aggregated label counts. Coupled with data-dependent privacy analysis, PATE achieves a tighter estimate of the privacy loss associated with label queries, especially when the consensus among teacher models is strong. Given this motivation, the follow-up work of PATE (Papernot et al., 2018) further improves the privacy bound both by leveraging a more concentrated noise distribution to strengthen consensus and by rejecting queries that lack consensus. C MORE BACKGROUND ON ACTIVE LEARNING Active learning, sometimes referred to as query learning, exploits the intuition that machine learning algorithms will be able to learn more efficiently if they can actively select the data from which they learn. For certain supervised learning tasks, this insight is of particularly important implications, as labeled data rarely exists in abundance and data labeling can be very demanding (Settles, 2009). In order to pick queries that will most likely contribute to model learning, various pool sampling methods have been proposed to estimate the informativeness of unlabeled samples. Uncertainty-based approaches (Lewis & Gale, 1994), such as margin sampling and entropy sampling, typically achieve a satisfactory trade-off between sample utility and computational efficiency. We also explore a core-set approach to active learning using greedy-k-center sampling (Sener & Savarese, 2017). Definition 6 (Margin Sampling (Scheffer et al., 2001)). 
Given an unlabeled dataset d and a classification model with conditional label distribution Pθ(y |x), margin sampling outputs the most informative sample: x∗ = arg min x∈d Pθ(ŷ1 |x)− Pθ(ŷ2 |x) (7) where ŷ1 and ŷ2 stand for the most and second most probable labels for x, according to the model. Definition 7 (Entropy Sampling). Using the setting and notations in Definition 6, margin sampling can be generalized by using entropy (Shannon, 1948) as an uncertainty measure as follows: x∗ = arg max x∈d − ∑ i Pθ(yi |x) logPθ(yi |x) (8) where yi ranges over all possible labels. Definition 8 (Greedy-K-center Sampling). We aim to solve the k-center problem defined by Farahani & Hekmatfar (2009), which is, intuitively, the problem of picking k center points that minimize the largest distance between a data point and its nearest center. Formally, this goal is defined as min S:|S∪D|≤k max i min j∈S∪D ∆(xi,xj) (9) where D is the current training set and S is our new chosen center points. This definition can can be solved greedily as shown in (Sener & Savarese, 2017). D MORE BACKGROUND ON FAIRNESS Due to the imbalance in sample quantity and learning complexity, machine learning models may have disparate predictive performance over different classes or demographic groups, resulting in unfair treatment of certain population. To better capture this phenomenon and introduce tractable countermeasures, various fairness-related criteria have been proposed, including balanced accuracy, demographic parity, equalized odds (Hardt et al., 2016), etc. Definition 9 (Balanced Accuracy). Balanced accuracy captures model utility in terms of both accuracy and fairness. It is defined as the average of recall scores obtained on all classes. Among the criteria that aim to alleviate discrimination against certain protected attributes, equalized odds and equal opportunity Hardt et al. (2016) are of particular research interests. Definition 10 (Equalized Odds). A machine learning model is said to guarantee equalized odds with respect to protected attribute A and ground truth label Y if its prediction Ŷ and A are conditionally independent given Y . In the case of binary random variables A, Y, Ŷ , this is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = y ] = Pr [ Ŷ = 1 |A = 1, Y = y ] , y ∈ {0, 1} (10) To put it another way, equalized odds requires the model to have equal true positive rates and equal false positive rates across the two demographic groups A = 0 and A = 1. Definition 11 (Equal Opportunity). Equal opportunity is a relaxation of equalized odds that requires non-discrimination only within a specific outcome group, often referred to as the advantaged group. Using previous notations, the binary case with advantaged group Y = 1 is equivalent to: Pr [ Ŷ = 1 |A = 0, Y = 1 ] = Pr [ Ŷ = 1 |A = 1, Y = 1 ] (11) E PROOF OF CONFIDENTIALITY Here we prove that our protocol described in the main body does not reveal anything except the final noised result to Pi∗ . In can be proven in the standard real-world ideal-world paradigm, where the ideal functionality takes inputs from all parties and sends the final results to Pi∗ . We use A to denote the set of corrupted parties. Below, we describe the simulator (namely S). The simulator strategy depends on if i∗ is corrupted. If i∗ ∈ A, our simulator works as below: 1.a) The simulator simulates what honest parties would do. 1.b) For each i /∈ A, S sends fresh encryption of a random ri to Pi∗ . 1.c) For each i /∈ A, S sends random si to Pi∗ on be half of the 2PC functionality between Pi and Pi∗ . 
2-3 S sends the output of the whole computation to Pi∗ on behalf of the 2PC functionality between PG and Pi∗ If i∗ /∈ A, our simulator works as below: 1.a) If i∗ /∈ A, for each i ∈ A, S computes a fresh encryption of zero and sends it to Pi on behalf of Pi∗ . 1.b) The simulator simulates what honest parties would do. 1.c) For each i ∈ A, S sends random ŝi to Pi on behalf of the 2PC functionality between Pi and Pi∗ . 2-3 The simulator simulates what honest parties would do. Assuming that the underlying encryption scheme is CPA secure and that 2PC protocols used in step 1, 2 and 3 are secure with respect to standard definitions (i.e., reveals nothing beyond the outputs), our simulation itself is perfect. F DETAILS ON EXPERIMENTAL SETUP F.1 MNIST AND FASHION-MNIST We use the same setup as for CIFAR10 and SVHN datasets with the following adjustments. We select K = 250 as the default number of parties. For the imbalanced classes we select classes 1 and 2 for MNIST as well as Trouser and Pullover for Fashion-MNIST. We use the Gaussian noise with σ = 40 (similarly to SVHN). We are left with 1, 000 evaluation data points from the test set (similarly to CIFAR10). We fix the default value of = 2.35 for MNIST and = 3.89 for Fashion-MNIST. We use a variant of the LeNet architecture. F.2 DETAILS ON ARCHITECTURES To train the private models on subsets of datasets, we downsize the standard architectures, such as VGG-16 or ResNet-18. Below is the detailed list of layers in each of the architectures used (generated using torchsummary). The diagram for ResNet-10 also includes skip connections and convolutional layers for adjusting the sizes of feature maps. VGG-7 for SVHN: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 ReLU-3 [-1, 64, 32, 32] 0 MaxPool2d-4 [-1, 64, 16, 16] 0 Conv2d-5 [-1, 128, 16, 16] 73,728 BatchNorm2d-6 [-1, 128, 16, 16] 256 ReLU-7 [-1, 128, 16, 16] 0 MaxPool2d-8 [-1, 128, 8, 8] 0 Conv2d-9 [-1, 256, 8, 8] 294,912 BatchNorm2d-10 [-1, 256, 8, 8] 512 ReLU-11 [-1, 256, 8, 8] 0 Conv2d-12 [-1, 256, 8, 8] 589,824 BatchNorm2d-13 [-1, 256, 8, 8] 512 ReLU-14 [-1, 256, 8, 8] 0 MaxPool2d-15 [-1, 256, 4, 4] 0 Conv2d-16 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-17 [-1, 512, 4, 4] 1,024 ReLU-18 [-1, 512, 4, 4] 0 Conv2d-19 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-20 [-1, 512, 4, 4] 1,024 ReLU-21 [-1, 512, 4, 4] 0 Linear-22 [-1, 10] 5,130 ================================================================ Total params: 4,507,722 Params size MB: 17.20 ---------------------------------------------------------------- ResNet-10: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 32, 32] 1,728 BatchNorm2d-2 [-1, 64, 32, 32] 128 Conv2d-3 [-1, 64, 32, 32] 36,864 BatchNorm2d-4 [-1, 64, 32, 32] 128 Conv2d-5 [-1, 64, 32, 32] 36,864 BatchNorm2d-6 [-1, 64, 32, 32] 128 BasicBlock-7 [-1, 64, 32, 32] 0 Conv2d-8 [-1, 128, 16, 16] 73,728 BatchNorm2d-9 [-1, 128, 16, 16] 256 Conv2d-10 [-1, 128, 16, 16] 147,456 BatchNorm2d-11 [-1, 128, 16, 16] 256 Conv2d-12 [-1, 128, 16, 16] 8,192 BatchNorm2d-13 [-1, 128, 16, 16] 256 BasicBlock-14 [-1, 128, 16, 16] 0 Conv2d-15 [-1, 256, 8, 8] 294,912 BatchNorm2d-16 [-1, 256, 8, 8] 512 Conv2d-17 [-1, 256, 8, 8] 589,824 BatchNorm2d-18 [-1, 256, 8, 8] 512 Conv2d-19 [-1, 256, 8, 8] 32,768 
BatchNorm2d-20 [-1, 256, 8, 8] 512 BasicBlock-21 [-1, 256, 8, 8] 0 Conv2d-22 [-1, 512, 4, 4] 1,179,648 BatchNorm2d-23 [-1, 512, 4, 4] 1,024 Conv2d-24 [-1, 512, 4, 4] 2,359,296 BatchNorm2d-25 [-1, 512, 4, 4] 1,024 Conv2d-26 [-1, 512, 4, 4] 131,072 BatchNorm2d-27 [-1, 512, 4, 4] 1,024 BasicBlock-28 [-1, 512, 4, 4] 0 Linear-29 [-1, 10] 5,130 ================================================================ Total params: 4,903,242 Params size MB: 18.70 ---------------------------------------------------------------- LeNet style architecture for MNIST: ---------------------------------------------------------------- Layer type Output Shape Param # ================================================================ Conv2d-1 [-1, 20, 24, 24] 520 MaxPool2d-2 Conv2d-3 [-1, 50, 8, 8] 25,050 MaxPool2d-4 Linear-5 [-1, 500] 400,500 ReLU-6 Linear-7 [-1, 10] 5,010 ================================================================ Total params: 431,080 Trainable params: 431,080 Non-trainable params: 0 ---------------------------------------------------------------- Input size MB: 0.00 Forward/backward pass size MB: 0.12 Params size MB: 1.64 Estimated Total Size MB: 1.76 ---------------------------------------------------------------- G ADDITIONAL EXPERIMENTS AND FIGURES Number of parties 150 200 250 300 400 Accuracy gain (%) 4.11 3.33 4.50 4.69 8.39 Best ε 4.50 2.50 2.35 2.00 1.63
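As an illustrative supplement, the snippet below applies the margin (Definition 6) and entropy (Definition 7) acquisition criteria from Appendix C to a batch of predicted class probabilities; it is a sketch rather than the authors' implementation, and the batch shape and softmax step are assumed for the example.

import numpy as np

def margin_scores(probs):
    # Definition 6: gap between the two largest class probabilities; smaller means more informative.
    top2 = -np.sort(-probs, axis=1)[:, :2]
    return top2[:, 0] - top2[:, 1]

def entropy_scores(probs, eps=1e-12):
    # Definition 7: predictive entropy; larger means more informative.
    return -np.sum(probs * np.log(probs + eps), axis=1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

query_by_margin = int(np.argmin(margin_scores(probs)))     # x* = arg min of the margin
query_by_entropy = int(np.argmax(entropy_scores(probs)))   # x* = arg max of the entropy
print(query_by_margin, query_by_entropy)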
1. What is the focus of the paper regarding collaborative learning? 2. What are the strengths of the proposed approach, particularly in terms of privacy and security? 3. What are the weaknesses of the paper, especially regarding the evaluation and experiment section? 4. How does the reviewer assess the novelty and significance of the combined technique? 5. Are there any concerns regarding the fairness guarantees of the proposed method?
Review
Review This paper works on the problem of collaborative learning while preserving both the confidentiality and the privacy of the data points. It combines techniques from secure multi-party computation and differential privacy to this end, improving on confidential inference and PATE in the process. The new technique is called CaPC. Finally, it reports empirical results as evidence for the improved accuracy.
Weaknesses: The evaluation is done on just two datasets, so it is a little hard to judge whether the techniques would generalise. The writing of the paper itself is not that great, because it is difficult to understand the low-level details of the experiments. The authors say very little about improving on the fairness guarantees.
Strengths: Their techniques enable collaborative learning even in settings where the local architectures of different parties differ. The algorithms they provide improve on fairness. Their empirical results are better than those of previously known methods.
Evaluation: I believe the combination of secure multi-party computation and differential privacy is not totally new, but since it yields decent results, I would say that the paper deserves a chance to be accepted.
ICLR
Title DiffusER: Diffusion via Edit-based Reconstruction Abstract In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm. Despite being performant, these models lack the ability to revise existing text, which limits their usability in many practical scenarios. We look to address this, with DIFFUSER (Diffusion via Edit-based Reconstruction), a new edit-based generative model for text based on denoising diffusion models – a class of models that use a Markov chain of denoising steps to incrementally generate data. DIFFUSER is not only a strong generative model in general, rivalling autoregressive models on several tasks spanning machine translation, summarization, and style transfer; it can also perform other varieties of generation that standard autoregressive models are not well-suited for. For instance, we demonstrate that DIFFUSER makes it possible for a user to condition generation on a prototype, or an incomplete sequence, and continue revising based on previous edit steps. 1 INTRODUCTION Revision and editing are central to how humans produce content; we write and revise emails and papers, gradually produce works of art, and iterate on plans for a project. Despite this, the most dominant paradigm in text generation is purely autoregressive, producing text left-to-right in a single pass (Bengio et al., 2003). Although models employing this single-pass form of generation are highly performant, they are limited by the inability to refine existing text. To address this, we propose DIFFUSER: Diffusion via Edit-based Reconstruction, a flexible method to apply edit-based generative processes to arbitrary text generation tasks. Specifically, we take inspiration from diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), generative models that generate by way of incremental denoising steps, and adapt this approach to the text generation paradigm with a formulation similar to natural editing processes. Prior work on text generation either focuses on improving the performance of standard autoregressive (AR) models through larger models and datasets (Vaswani et al., 2017; Sutskever et al., 2014; Radford et al.; Brown et al., 2020) or on proposing new, non-autoregressive approaches (Gu et al., 2017; Ghazvininejad et al., 2019; Gu et al., 2019) to improve general modes of text generation. A thus far separate line of models has taken the perspective of modeling text edits for specific tasks: e.g. style transfer (Reid & Zhong, 2021; Malmi et al., 2020), sentence fusion (Malmi et al., 2019), and grammatical error correction (Dale & Kilgarriff, 2011). DIFFUSER unifies these two perspectives by enabling edit processes to be applied to general purpose text generation without compromising performance or requiring external supervised data (Guu et al., 2018). This design enables it ∗Work done partially while at the University of Tokyo to both generate and edit text, including externally produced content, a natural extension of the text generation paradigm. DIFFUSER models text generation as a series of diffusion steps at the token level. This form of generation allows us to develop a synthetic formulation of natural editing processes (Reid & Neubig, 2022) using edit-based corruption and reconstruction. 
Our method starts from an arbitrary sequence (either a prototype generation, randomly sampled tokens, or a null sequence) and progressively edits it into the final sequence, guided by the Levenshtein edit operations of INSERT, DELETE, KEEP, and REPLACE as shown in Figure 1. This enables flexible editing in a range of contexts, including machine translation, summarization, and style transfer, while also allowing for the possibility of taking outside input to guide and constrain generation. Learning these edit-based diffusion processes required several innovations over standard autoregressive and MLM-style iterative generation approaches (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022), including forming edit-based corruption and reconstruction processes for training (Sec 3), as well as techniques to improve the quality of decoded sequences across both timesteps and token-level generations (including 2D beam search; Sec 3.5, Sec 3.6). To demonstrate the effectiveness of DIFFUSER, we test our method on three text generation tasks: machine translation, abstractive summarization, and text style transfer, and show on-par or improved performance compared to purely autoregressive, single-pass and non-autoregressive methods. We also provide qualitative samples of the edit processes learned by the models in different settings, and analyses of training and inference speeds, as well as the relationship between edit steps and performance. Overall, we demonstrate the potential of edit-based generative models to offer 1) more performant generation, 2) greater interactivity between different models (as we can now perform edits in the discrete space on model-generated output), and 3) more flexible/controllable generation. 2 BACKGROUND DIFFUSER operates at the intersection of text generation, editing processes, and diffusion models. We first provide the background and intuition for these three techniques. 2.1 TEXT GENERATION Most text generation models used in NLP today are autoregressive in nature. In this paradigm, given a sequence s = [s_0, s_1, \ldots, s_N], one can model the likelihood of the entire sequence P(s) by modeling the probability of predicting each token in an autoregressive, often left-to-right, manner. This formulation, where the likelihood of a token p(s_t) is conditioned on its predecessors s_{<t}, is shown below (Bengio et al., 2003): P(s) = \prod_{t=0}^{N} p(s_t \mid s_{t-1}, s_{t-2}, \ldots, s_0) (1) Models trained with this objective can then be sampled from, or searched over (e.g. using beam search), to provide generations in downstream tasks such as machine translation or summarization. Non-autoregressive models (Gu et al., 2017) are a different variety of generative models, in which a sequence is generated in a single parallel pass (removing the autoregressive conditioning on previously generated tokens), sometimes followed by multiple revision-level passes, often in the name of efficiency. 2.2 EDITING PROCESSES Editing processes (Reid & Neubig, 2022) are a paradigm for modeling text by way of incremental revisions, taking inspiration from the way humans generate text. Specifically, let X = {x_0, x_1, \ldots, x_R} be a series of R versions of a document, where x_0, x_i, x_R represent the initial, intermediate (at revision i), and final/current state of the document, respectively.
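Returning to the autoregressive factorization in equation (1) of Section 2.1 above, the following is a minimal sketch that scores a sequence under a toy autoregressive model. The bigram counts and the toy_next_token_probs helper are illustrative stand-ins for a trained Transformer decoder and are not part of DiffusER.

```python
import math
from collections import defaultdict

# Toy "language model": bigram counts estimated from a tiny corpus (illustrative only).
corpus = [["<s>", "the", "model", "edits", "text", "</s>"],
          ["<s>", "the", "model", "revises", "text", "</s>"]]
bigram = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for prev, cur in zip(sent, sent[1:]):
        bigram[prev][cur] += 1

def toy_next_token_probs(prefix):
    """p(s_t | s_<t): this toy model only conditions on the previous token."""
    prev = prefix[-1]
    total = sum(bigram[prev].values()) or 1
    return {tok: cnt / total for tok, cnt in bigram[prev].items()}

def sequence_log_prob(tokens):
    """log P(s) = sum_t log p(s_t | s_<t), the left-to-right factorization of equation (1)."""
    logp = 0.0
    for t in range(1, len(tokens)):
        probs = toy_next_token_probs(tokens[:t])
        logp += math.log(probs.get(tokens[t], 1e-12))  # small floor for unseen tokens
    return logp

print(sequence_log_prob(["<s>", "the", "model", "edits", "text", "</s>"]))
```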
Using editing processes, we can model the probability of this series of document versions occurring consecutively as follows: p(X) = \prod_{i=0}^{R} p(x_i \mid x_0^{i-1}) (2) With this formulation, editing processes can also be used to calculate the probability of only the final document while taking into account previous revisions, which is not possible in the traditional text generation setup since intermediate revisions are not explicitly known, using the equation below (Reid & Neubig, 2022): p(x_R) = \sum_{\tilde{X} \in \{\tilde{x}_0^R \mid \tilde{x}_R = x_R\}} p(\tilde{X}). (3) 2.3 DIFFUSION MODELS We now make the connection between editing processes and diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020). Continuous diffusion processes are commonly applied in computer vision tasks to iteratively convert a sample of noise into an image. This can be seen as an edit process in which the model iteratively edits a noisy image to bring it closer to a final, complete image. These continuous diffusion models are often trained by modeling a Markov chain x_T, \ldots, x_t, \ldots, x_0, where x_0 represents the original image and x_T represents Gaussian noise. This chain is typically produced by incrementally adding Gaussian noise to x_t to form x_{t+1} (known as the forward or corruption process), and a model parameterised by p_\theta is trained to reverse (or "denoise") this process to form the chain \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t). Analogized to text, this allows us to formulate natural edit processes as a discrete diffusion process in which a null string or a prototype is iteratively edited into free-form text. Our DIFFUSER method (Figure 1) takes inspiration from this process, but parameterises the corruption process by way of sampled discrete edit operations applied over a discrete sequence of tokens. The success of our method supports the findings of Bansal et al. (2022) in the vision domain, where it is found that diffusion models can learn to invert arbitrary transformations. Previous work in diffusion models has largely focused on computer vision (Ho et al., 2020; Austin et al., 2021), in which the diffusion process is applied to raw image values. Within the context of natural language, both discrete diffusion models using only replacement operations (applied either to random tokens or to masked tokens) (Savinov et al., 2022; Austin et al., 2021) and continuous diffusion over word embeddings (Li et al., 2022) have been proposed. Compared with this work, our model is a more flexible approach to diffusion, using all four edit operations owing to its edit-process formulation, and it is also more compatible with current models (e.g. AR bootstrapping). 3 DIFFUSER DIFFUSER, being a diffusion-based method, has two main procedures: corruption and denoising. Unlike previous work (Ghazvininejad et al., 2019; Savinov et al., 2022; Gu et al., 2019) in which this procedure is relatively inflexible (e.g., due to length restrictions and/or using continuous representations for the basis of the diffusion process), both our corruption process and our denoising process are based on Levenshtein operations, allowing our model to learn to take advantage of the flexibility of text editing when generating. 3.1 EDIT OPERATIONS Given the central role of the Levenshtein edit operations in our models, we provide a brief overview of each operation and its role in the editing process. We use Figure 1 as a guide when explaining each operation. INSERT: The insertion operation is used to add new text to a sequence.
For example, in Figure 1, "uses editing processes" is added by DiffusER at timestep x_{T-2}. DELETE: The deletion operation erases existing text. In Figure 1, this is shown when "These" gets deleted at timestep x_{T-2} → x_{T-3}. REPLACE: The replacement operation works by overwriting existing text with new text. This is shown in Figure 1 at step x_T → x_{T-1}, where "filter Toronto guilty trough feel" is replaced by "These model guilty named DiffusER". KEEP: The keep operation ensures that a portion of the text remains unchanged into the next iteration. This is illustrated in timestep x_{T-2} → x_{T-3}, where "model named DiffusER" is kept. 3.2 EDIT-BASED CORRUPTION The four Levenshtein edit operations described above allow us to transform any arbitrary sequence of tokens into another. This is in contrast to iterative mask replacement, which can only introduce new tokens (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022). For every timestep i, the corruption process q(x_i \mid x_{i-1}; E_t, E_l) is parameterized by two distributions: the distribution over edit types E_t (e.g. 60% keep, 20% replace, 10% delete, 10% insert), and the distribution over edit length E_l. The latter can be parameterized by any distribution over non-negative integers, such as a uniform distribution or a Poisson distribution. For instance, to learn a deletion operation in the reconstruction process, we insert randomly sampled distractor tokens, whereas to learn an insertion operation we delete a subset of tokens contained in the sequence. 3.3 EDIT-BASED RECONSTRUCTION Our generative process is trained via the Edit-based Reconstruction (ER) process. ER can be thought of as the opposite of our corruption process, in which we need to find the appropriate edit operations to transform x_T to x_0, by way of x_{T-1}, \ldots, x_1. That is, given a corrupted sequence x_T, we aim to learn the process by which we can reverse the corruption in the following form: P_\theta(x_0) = \prod_{t=0}^{T} p_\theta(x_{t-1} \mid x_t) (4) Given that we model the likelihood of each timestep x_t, this can also be referred to as an edit process (Reid & Neubig, 2022). As we include an edit process in our model and use Levenshtein tags for editing, one can think of ER as two distinct steps: identifying which edits should take place (the tagging process) and deciding which tokens should go in these positions (the generative process). This decomposition is shown here: p_\theta(x_{t-1} \mid x_t) = p^{tag}_\theta(e_t \mid x_t) \, p^{gen}_\theta(x_{t-1} \mid x_t, e_t) (5) where p^{tag}_\theta parameterises the tagging model that estimates the likelihood of producing a given set of Levenshtein edit operations {INSERT, DELETE, KEEP, REPLACE} given x_t, and p^{gen}_\theta parameterises the generator model given the sequence x_t and edit operations e_t. This decomposition via edit operations allows the generation process to be more controllable and more flexible, as it allows us to explicitly specify the edit types associated with tokens to be edited, rather than leaving both processes implicit. 3.4 IMPLEMENTING DIFFUSER WITH TRANSFORMERS When implemented with Transformers (Vaswani et al., 2017), DIFFUSER consists of two components: a tagger and a generator. The tagger, a transformer network, is trained using cross-entropy loss over the ground-truth tag types to predict the edit operations that should be applied to the sequence, in preparation for the next generation step. Then, in the generation step, after removing tokens selected for deletion, we add a learned embedding to positions tagged as insert or replace and generate the inserted and replaced sequences autoregressively.
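To make the two-stage reverse step of equations (4)-(5) and Section 3.4 concrete, the sketch below runs one denoising step with a stand-in tagger and generator. Both stand-ins, and the per-token tag format, are illustrative assumptions rather than the actual trained Transformer components.

```python
def reverse_diffusion_step(tokens, tagger, generator):
    """One denoising step: tag each token with an edit operation, then realize the edits.

    tagger(tokens)         -> one of "KEEP" / "DELETE" / "REPLACE" / "INSERT" per token,
                              where INSERT means "insert new tokens after this position".
    generator(tokens, tag) -> list of new tokens for a REPLACE / INSERT slot.
    """
    tags = tagger(tokens)                       # p_tag: which edits should take place
    out = []
    for tok, tag in zip(tokens, tags):          # p_gen: which tokens fill the edited slots
        if tag == "KEEP":
            out.append(tok)
        elif tag == "DELETE":
            continue                            # deleted tokens are simply dropped
        elif tag == "REPLACE":
            out.extend(generator(tokens, "REPLACE"))
        elif tag == "INSERT":
            out.append(tok)
            out.extend(generator(tokens, "INSERT"))
    return out

# Toy stand-ins, loosely following the Figure 1 example; a real system would call
# the trained tagger and autoregressive generator here.
def toy_tagger(toks):
    return ["KEEP", "KEEP", "DELETE", "KEEP", "INSERT"]

def toy_generator(toks, tag):
    return ["uses", "editing", "processes"]

x_prev = ["These", "model", "guilty", "named", "DiffusER"]
print(reverse_diffusion_step(x_prev, toy_tagger, toy_generator))
# -> ['These', 'model', 'named', 'DiffusER', 'uses', 'editing', 'processes']
```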
Following this, we feed the output of this diffusion step into the tagger and perform another diffusion step. One step of this process can be compared to the reconstruction process used in Aghajanyan et al. (2022). 3.5 DECODING METHODS DIFFUSER has an inherently different generation process from a standard autoregressive language generation model—in addition to operating on a sequence/token level (in which generation is composed of generating individual tokens in a single-revision; intra-revision), we also operate on a revision level (in which the text is expanded across diffusion steps, inter-revision). This allows us to experiment with different methods for decoding on both the intra-revision (single sequence level) and inter-revision levels (multiple version level), which we explain below. Beam Search One method for decoding is to perform beam search over b hypotheses at every step on the output of our autoregressive generator (intra-revision level), while performing greedy decoding at the inter-revision level. Although being conceptually straightforward, this method has the limitation of not searching over the inter-revision space (despite revisions being a key component of our approach). 2D Beam Search We propose 2D beam search, in which we extend beam search as it is applied to token-level autoregressive generative models, and perform beam search using both an intra-revision width of b and an inter-revision beam width of r. This allows us to perform search on the interrevision level, which we find results in better downstream performance, but increases the beam count to r × b beams. Assuming a fixed sequence length and maximum number of diffusion steps, we would decode as follows: We first use beam search with width b at the token level and take the r most likely candidates (measured with log-likelihood). These r candidates are then fed to the next step of the diffusion model, wherein for each of r hypotheses the next diffusion step is performed with the token-level generator decoding with beam width of b. This leads us to have r× b candidate hypotheses, of which we take the top r. This process repeats for each diffusion step thereafter. Nucleus Sampling To improve the diversity of generations, we also consider a nucleus sampling based approach, where at every timestep xt, we use nucleus sampling (Holtzman et al., 2019) with p = 0.6 to sample each token autoregressively at the intra-revision level, and greedily decode at the inter-revision level (i.e. no search or sampling is performed over multiple diffusion steps). 3.6 DECODER INITIALIZATION TECHNIQUES Since our model is based on edit processes, it offers flexibility in terms of the discrete sequence from which to initialize the text generation. Previous work on non-autoregressive translation often starts with [MASK] tokens (Ghazvininejad et al., 2019), a null string (Gu et al., 2019) or random tokens (Savinov et al., 2022). We include the latter two methods in our experiments, in addition to (1) experimenting with an AR Bootstrap, in which we learn to bootstrap from text generated by a purely autoregressive model, and (2) proposing to use the source-side text as an initial state for the DIFFUSER decoder. Null Sequence In this setting, we simply initialize DIFFUSER with a null string, in which the first edit is constrained to be insertion. Random Tokens In this setting, we initialize DIFFUSER with a series of random tokens, following (Savinov et al., 2022). The model then learns to edit this random sequence. 
AR Bootstrap We bootstrap the reverse diffusion process by taking the output of DIFFUSER constrained to generate autoregressively (essentially mimicking a standard autoregressive generator). We then use DIFFUSER to further edit the output of this operation. Source Bootstrap In a sequence-to-sequence setting, we can also generate by bootstrapping from the source text, by setting x_T to be equivalent to s. As we show in later sections, this is particularly useful in tasks such as summarization, in which the output can easily be formulated as an edited version of the input. 4 EXPERIMENTS 4.1 MODELS DIFFUSER We instantiate DIFFUSER with two separate Transformer models for the tagger and the generator. We use the Transformer-base encoder-decoder (Vaswani et al., 2017) architecture, with 6 layers, a hidden dimension of 512, a feedforward dimension of 2048, 8 attention heads, and dropout p = 0.3. Baselines (MT & Summ) We use several Transformer baselines from the previous literature for our various tasks. We include a conventional 6-layer encoder-decoder Transformer model from Vaswani et al. (2017), as well as models proposed in related work from the non-autoregressive generation literature: the Levenshtein Transformer (Gu et al., 2019), CMLM (Ghazvininejad et al., 2019), DisCo (Kasai et al., 2020a), Imputer (Saharia et al., 2020), and SUNDAE (Savinov et al., 2022). 4.2 TASKS Machine Translation We use the WMT'14 English-German dataset for our machine translation experiments. We use the same preprocessing and post-processing steps as Ghazvininejad et al. (2019). Unlike standard practice in non-autoregressive translation work (Zhou et al., 2019), we focus on using gold machine translation data instead of distilled data. We use a Poisson distribution E_l(λ = 3) over edit operation lengths in our corruption process. Note that we compute the edit operations over words rather than tokens. For this task, as well as the following ones, we use 12 diffusion steps, b = 5 and r = 3 for beam search, and E_t = (60% KEEP, 20% REPLACE, 10% INSERT, 10% DELETE), based on numbers from preliminary experiments. Summarization We also benchmark on the CNN/DailyMail dataset for summarization (Nallapati et al., 2016). Summarization differs in nature from machine translation in that it is more conducive to edits, as a good summary tends to preserve many parts of the input. We use the same post-processing steps as See et al. (2017). We use a Poisson distribution E_l(λ = 8) over edit operation lengths in our corruption process (to roughly model sentence boundaries). Text Style Transfer We perform experiments using the Yelp (Shen et al., 2017) dataset for the unsupervised text-style transfer task. We compare against methods such as Tag-and-Generate (Madaan et al., 2020), Masker (Malmi et al., 2020), and LEWIS (Reid & Zhong, 2021). In contrast with machine translation and summarization, text style transfer datasets are often unaligned (i.e. without source-target pairs), leading to the prominence of unsupervised text style transfer methods. We propose a method of performing unsupervised text style transfer using DIFFUSER, following the synthetic generation method in Reid & Zhong (2021). We train two separate, style-specific (e.g. positive and negative) DIFFUSER models on the style-specific data. We then perform transfer at test time, feeding text from each style into the model trained to edit in the opposite style (e.g. positive text → negative DIFFUSER model; negative text → positive DIFFUSER model).
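The test-time routing just described (feed each input to the model trained on the opposite style) can be sketched as below. The diffuser_models mapping and the trivial stand-in callables are hypothetical placeholders, not the actual trained models.

```python
def transfer_style(text, source_style, diffuser_models):
    """Route text to the DiffusER model trained on the *opposite* style (Sec. 4.2).

    `diffuser_models` maps a style name to a callable that edits its input toward
    that style; both the mapping and the callables are illustrative placeholders.
    """
    target_style = "negative" if source_style == "positive" else "positive"
    return diffuser_models[target_style](text)   # e.g. positive review -> negative DiffusER

# Toy stand-ins so the routing runs end to end.
models = {
    "positive": lambda s: s.replace("terrible", "wonderful"),
    "negative": lambda s: s.replace("wonderful", "terrible"),
}
print(transfer_style("the food was wonderful", source_style="positive", diffuser_models=models))
```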
Following standard practice, we measure performance with BLEU, Self-BLEU, and Accuracy (based on a classifier trained to disambiguate between different styles of text; we use the classifier from Reid & Zhong (2021)). 4.3 RESULTS Main Results We summarize our main results on both machine translation and summarization in Table 1. As can be seen, for both machine translation and summarization tasks, DIFFUSER, using 12 diffusion steps, outperforms all non-autoregressive baselines (we were not able to reproduce the published results of the Levenshtein Transformer using their code, so our reported BLEU score of 23.7 is slightly lower than the 25.2 reported in Gu et al. (2019)) and rivals or outperforms the fully autoregressive model. Particularly interesting is how the various methods of initializing our model (i.e. AR Bootstrap and Source Bootstrap) can further improve performance well beyond the autoregressive baseline, depending on the task. We can see that for summarization, bootstrapping from the source input is more effective than bootstrapping from an abstractive autoregressive model. However, for both tasks, unlike many non-autoregressive methods, DIFFUSER is complementary to token-level autoregressive methods and can be used naturally in conjunction with them. Style Transfer Results We also perform unsupervised text style transfer using our DIFFUSER models on the Yelp (Shen et al., 2017) dataset. The results can be seen in Table 2. We show that even without task-specific techniques (such as synthetic data generation and classifier-based style-specific token identification), we still have performance competitive with state-of-the-art methods. 4.4 ANALYSIS We perform additional analyses on DIFFUSER, focusing on the decoding method, the number of iterations versus the final BLEU score, and a qualitative analysis of how text changes at every step. Decoding Method Ablation We perform an ablation of the decoding method, using DIFFUSER for 12 steps (as used in our main results) and comparing greedy decoding, (1D) beam search, nucleus decoding, and 2D beam search. We show that 2D beam search tends to perform best, likely because it searches over multiple diffusion steps, while the other methods (greedy, beam, nucleus) are still competitive. Number of Edit Steps versus Performance We compare the number of timesteps in our denoising diffusion process with the final BLEU score on WMT'14 En-De when using 2D beam search and random token initialization (Figure 4). Here it can be seen that most performance gains occur in the initial diffusion timesteps (0-10), with diminishing gains (for machine translation) or gradual losses (for summarization) between 10 and 30, after which performance marginally decreases towards 60 steps. How does text change at every step? We include a qualitative sample from our DIFFUSER summarization model (Table 4). We find that DIFFUSER learns edit processes intuitive to the task at hand: namely, largely deleting portions and making minor edits to the remaining text (similar to how a human might summarize a news article). Time comparison between decoding methods We also measure the impact of the various decoding algorithms we used, with results shown in Figure 3.
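A timing harness like the following is one way to produce the kind of comparison shown in Figure 3; the decode_fns mapping and the trivial stand-in decoders are hypothetical placeholders for the greedy, nucleus, beam, and 2D beam decoders, not the implementations used in the paper.

```python
import time

def time_decoders(decode_fns, inputs, repeats=3):
    """Average wall-clock seconds per input for each decoding method."""
    results = {}
    for name, decode in decode_fns.items():
        start = time.perf_counter()
        for _ in range(repeats):
            for x in inputs:
                decode(x)
        results[name] = (time.perf_counter() - start) / (repeats * len(inputs))
    return results

# Trivial stand-ins so the harness runs; real decoders would call the DiffusER model.
decoders = {
    "greedy": lambda x: x,
    "nucleus": lambda x: x,
    "beam": lambda x: [x] * 5,
    "2d-beam": lambda x: [[x] * 5] * 3,
}
print(time_decoders(decoders, inputs=["ein Beispiel", "noch ein Beispiel"]))
```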
Beam search and 2D beam search perform significantly more slowly than greedy and nucleus sampling, demonstrating the potential for improved decoding algorithms tailored to the trade-off between efficiency and accuracy in diffusion models. 5 RELATED WORK Non-Autoregressive Generation Work in machine translation has explored non-/semi-autoregressive generation (Gu et al., 2017; Lee et al., 2018), which often includes an iterative refinement step (Lee et al., 2018; Ghazvininejad et al., 2019; Kasai et al., 2020a; Gu et al., 2019). Previous methods in this space are often highly specialized and underperform autoregressive methods due to the constraints imposed on generation for efficiency. This being said, Kasai et al. (2020b) demonstrated that non-autoregressive models are actually comparable in speed when using a larger batch size instead of 1. Our method allows us to home in on the notion of iterative refinement by way of editing processes, and is also relatively general, allowing us to combine DIFFUSER with standard autoregressive models. Learning Properties of Edits Previous work has also looked at studying or exploiting the properties of edits. This was initially explored in the context of learning vector representations of edits (Yin et al., 2019; Marrese-Taylor et al., 2021). Concurrently, a line of work has used edits for specific tasks such as sentence fusion, style transfer, and grammatical error correction (Malmi et al., 2019; 2020; Reid & Zhong, 2021; Omelianchuk et al., 2020). Recent work has proposed editing processes (Reid & Neubig, 2022), in which document generation is viewed through the lens of its revision history, rather than just at a token level. We take inspiration from this work and devise a process by which arbitrary text generation tasks can be fitted into this framework. 6 CONCLUSIONS We proposed DIFFUSER, a diffusion-based generative model for text based on edits. DIFFUSER shows improvements across the tasks considered (machine translation, summarization, style transfer), with improved generative flexibility via incremental text improvement and compatibility with standard autoregressive models. We hope that DIFFUSER will spur research on edit-based generative models, with further potential directions including how we can leverage edits to ensemble models (regardless of parameter count) in the discrete space. ACKNOWLEDGEMENTS We thank Armen Aghajanyan, Daniel Fried, Edison Marrese-Taylor, Eric Wallace, and Luke Zettlemoyer for their helpful comments in early discussions. We thank Ari Holtzman, Jungo Kasai, Aman Madaan, and Eric Wallace for feedback and proofreading the draft of this paper.
1. What is the focus of the paper regarding text generation? 2. What are the strengths of the proposed approach, particularly in its design and training objective? 3. What are the weaknesses of the paper, especially in terms of the training objective and evaluation methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies an edit-based generative text model that starts with a complete noise distribution (random gibberish) as input and then produces a series of edits to reach a high-quality output. Inspired by diffusion models in CV, this Diffuser model rivals or outperforms standard autoregressive models on various generation tasks (MT, summarization), while also providing additional editing-based functionality. Strengths And Weaknesses Strengths: The objective and model design are quite novel and refreshing. I appreciate the careful design of the architecture: rather than focusing on a pure end-to-end system, it decomposes the task into edit tagging and generation. The training objective seems sensible, and the overall evaluation is pretty solid. I specifically appreciate using the model as a sort of "general-purpose post-editing" model (as shown in Diffuser + AR bootstrap). It would be nice to consider more post-editing-like baselines, e.g., just adding some off-the-shelf post-editors or grammatical-error-correction models to the MT output. Weaknesses: While sensible, the training objective is a bit limited in that it denoises pure random tokens. Concretely, to get good performance at test time, the model needs to denoise generations that have various errors spanning semantic errors, syntactic inconsistencies, etc. These types of errors are quite far from random tokens swapped into the input. The editing-based evaluation is a bit limited. It would have been great to explore some of the capabilities/failures of the model in more depth, for example, conditioning on various keywords on the target side, conditioning on a target syntactic style, or similar evaluations. Clarity, Quality, Novelty And Reproducibility The paper is very clearly written. The work is quite original. The overall experimental evaluations are of high quality and comprehensive.
ICLR
Title DiffusER: Diffusion via Edit-based Reconstruction Abstract In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm. Despite being performant, these models lack the ability to revise existing text, which limits their usability in many practical scenarios. We look to address this, with DIFFUSER (Diffusion via Edit-based Reconstruction), a new edit-based generative model for text based on denoising diffusion models – a class of models that use a Markov chain of denoising steps to incrementally generate data. DIFFUSER is not only a strong generative model in general, rivalling autoregressive models on several tasks spanning machine translation, summarization, and style transfer; it can also perform other varieties of generation that standard autoregressive models are not well-suited for. For instance, we demonstrate that DIFFUSER makes it possible for a user to condition generation on a prototype, or an incomplete sequence, and continue revising based on previous edit steps. 1 INTRODUCTION Revision and editing are central to how humans produce content; we write and revise emails and papers, gradually produce works of art, and iterate on plans for a project. Despite this, the most dominant paradigm in text generation is purely autoregressive, producing text left-to-right in a single pass (Bengio et al., 2003). Although models employing this single-pass form of generation are highly performant, they are limited by the inability to refine existing text. To address this, we propose DIFFUSER: Diffusion via Edit-based Reconstruction, a flexible method to apply edit-based generative processes to arbitrary text generation tasks. Specifically, we take inspiration from diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), generative models that generate by way of incremental denoising steps, and adapt this approach to the text generation paradigm with a formulation similar to natural editing processes. Prior work on text generation either focuses on improving the performance of standard autoregressive (AR) models through larger models and datasets (Vaswani et al., 2017; Sutskever et al., 2014; Radford et al.; Brown et al., 2020) or on proposing new, non-autoregressive approaches (Gu et al., 2017; Ghazvininejad et al., 2019; Gu et al., 2019) to improve general modes of text generation. A thus far separate line of models has taken the perspective of modeling text edits for specific tasks: e.g. style transfer (Reid & Zhong, 2021; Malmi et al., 2020), sentence fusion (Malmi et al., 2019), and grammatical error correction (Dale & Kilgarriff, 2011). DIFFUSER unifies these two perspectives by enabling edit processes to be applied to general purpose text generation without compromising performance or requiring external supervised data (Guu et al., 2018). This design enables it ∗Work done partially while at the University of Tokyo to both generate and edit text, including externally produced content, a natural extension of the text generation paradigm. DIFFUSER models text generation as a series of diffusion steps at the token level. This form of generation allows us to develop a synthetic formulation of natural editing processes (Reid & Neubig, 2022) using edit-based corruption and reconstruction. 
Our method starts from an arbitrary sequence (either a prototype generation, randomly sampled tokens, or a null sequence) and progressively edits it into the final sequence guided by the Levenshtein edit operations of INSERT, DELETE, KEEP, and REPLACE as shown in Figure 1. This enables flexible editing in a range of contexts, including machine translation, summarization, style transfer, while also allowing for the possibility of taking outside input to guide and constrain generation. Learning these edit-based diffusion processes required several innovations over standard autoregressive and MLM-style iterative generation approaches (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022), including forming edit-based corruption and reconstruction processes for training (Sec 3), as well as techniques to improve the quality of decoding sequences across both timesteps and token-level generations (including 2D beam search; Sec 3.6, Sec 3.5). To demonstrate the effectiveness of DIFFUSER, we test our method on three text generation tasks: machine translation, abstractive summarization, and text style transfer, and show on-par or improved performance compared to purely autoregressive, single-pass and non-autoregressive methods. We also provide qualitative samples of the edit processes learned by the models in different settings and analyses on training and inference speeds, as well as the relationship between edit steps and performance. Overall, we demonstrate the potential of edit-based generative models to offer 1) more performant generation, 2) greater interactivity between different models (as we can now perform edits in the discrete space on model generated output), and 3) more flexible/controllable generation. 2 BACKGROUND DIFFUSER operates at the intersection of text generation, editing processes, and diffusion models. We first provide the background and intuition of these three techniques. 2.1 TEXT GENERATION Most text generation models used in NLP today are autoregressive in nature. In this paradigm, given a sequence s = [s0, s1, . . . , sN ], one can model the likelihood of the entire sequence P (s) by modeling the probability of predicting each token in an autoregressive, often left-to-right, manner. This formulation, where the likelihood of a token p(st) is conditioned on its predecessors s<t, is shown below (Bengio et al., 2003): P (s) = N∏ i=0 p(st|st−1, st−2, . . . , s0) (1) Models trained with this objective can then be sampled from, or searched over (e.g. using beam search), to provide generations in downstream tasks such as machine translation or summarization. Non-autoregressive models (Gu et al., 2017) are a different variety of generative models, in which a sequence is generated in a single pass (removing the autoregressive conditioning on previously generated tokens) with multiple revision-level passes, often in the name of efficiency. 2.2 EDITING PROCESSES Editing processes (Reid & Neubig, 2022) are a paradigm for modeling text by way of incremental revisions, taking inspiration from the the way humans generate text. Specifically, let X = {x0,x1, . . . ,xR} be a series of R versions of a document, where x0,xi,xR represents the initial, intermediate (at timestep t), and final/current state of a document, respectively. 
Using editing processes, we can model the probability of this series of documents versions occurring consecutively as follows: p(X) = R∏ i=0 p(xi|xi−10 ) (2) With this formulation, editing processes can also be used to calculate the probability of only the final document while taking into account previous revisions, which is not possible in the traditional text generation setup as intermediate revisions are not explicitly known, using the equation below (Reid & Neubig, 2022). p(xR) = ∑ X̃∈{x̃R0 |x̃R=xR} p(X̃). (3) 2.3 DIFFUSION MODELS We now make the connection between editing processes and diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020). Continuous diffusion processes are commonly applied in computer vision tasks to iteratively convert a sample of noise into an image. This can be seen as an edit process in which the model iteratively edits a noisy image to bring it closer to a final, complete image. These continuous diffusion models are often trained by modeling a Markov chain xT . . .xt . . .x0, where x0 represents the original image and xT represents Gaussian noise. This chain is typically produced by incrementally adding Gaussian noise to xt to form xt+1 (known as the forward or corruption process), wherein a model parameterised by pθ is trained to reverse (or “denoise”) this process to form the chain ∑T i=1 pθ(xt−1|xt). Analogized to text, this allows us to formulate natural edit processes as a discrete diffusion process in which a null string or a prototype is iteratively edited into free form text. Our DIFFUSER method (Figure 1) takes inspiration from this process, but parameterises the corruption process by way of sampled discrete edit operations applied over a discrete sequence of tokens. The success of our method supports the findings in the vision domain Bansal et al. (2022), where it is found that diffusion models can learn to invert arbirtary transformations. Previous work in diffusion models has largely focused on computer vision (Ho et al., 2020; Austin et al., 2021), in which the diffusion process is applied to raw image values. Within the context of natural language, both discrete diffusion models using only replacement operations (either applied to random tokens or masked tokens) (Savinov et al., 2022; Austin et al., 2021), and continuous diffusion over word embeddings (Li et al., 2022) have been proposed. Our model is a more flexible approach, using all four edit operations, towards diffusion models when compared with this work owing to its edit process formulation, and is also more compatible with current models (e.g. AR bootstrapping). 3 DIFFUSER DIFFUSER, being a diffusion-based method, has two main procedures: corruption and denoising. Unlike previous work (Ghazvininejad et al., 2019; Savinov et al., 2022; Gu et al., 2019) in which this procedure is relatively inflexible (e.g., due to length restrictions and/or using continuous representations for the basis of the diffusion process), both our corruption process and denoising process are based on Levenshtein operations, allowing our model to learn to take advantage of the flexibility of text editing when generating. 3.1 EDIT OPERATIONS Given the central role of the Levenshtein edit operations in our models, we provide a brief overview of each operation and its role in the editing process. We use Figure 1 as a guide when explaining each operation. INSERT: The insertion operation is used to add new text to a sequence. 
For example in Figure 1, “uses editing processes” is added by DiffusER at timestep xT−2. DELETE: The deletion operation erases existing text. In Figure 1, this is shown when “These” gets deleted at timestep xT−2 → xT−3. REPLACE: The replacement operation works overwriting existing text with new text. This is shown in Figure 1 at step xT → xT−1 where “filter Toronto guilty trough feel” is replaced by “These model guilty named DiffusER”. KEEP: The keep operation ensures that a portion of the text remains unchanged into the next iteration. This is illustrated in timestep xT−2 → xT−3 where “model named DiffusER” is kept. 3.2 EDIT-BASED CORRUPTION The four Levenshtein edit operations described above allow us to transform any arbitrary sequence of tokens into another. This is in contrast to iterative mask replacement, which can only introduce new tokens (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022). For every timestep i, corruption process q(xi|xi−1; Et, El) is parameterized by two distributions: the distribution over edit types Et (e.g. 60% keep, 20% replace, 10% delete, 10% insert), and the distribution over edit length El. The latter can be parameterized by any distribution over non-negative integers, such as a uniform distribution or a Poisson distribution. For instance, to learn a deletion operation in the reconstruction process, we insert randomly sampled distractor tokens, whereas, to learn an insertion operation we delete a subset of tokens contained in the sequence. 3.3 EDIT-BASED RECONSTRUCTION Our generative process is trained via the Edit-based Reconstruction (ER) process. ER can be thought of as the opposite of our corruption process, in which we need to find the appropriate edit operations to transform xT to x0, by way of xT−1, . . . ,x1. That is, given a corrupted sequence xT , we aim to learn the process by which we can reverse the corruption in the following form. Pθ(x0) = T∏ t=0 pθ(xt−1|xt) (4) Given that, we model the likelihood of each timestep xt, this can also be referred to as an edit process (Reid & Neubig, 2022). As we include an edit process in our model and use Levenshtein tags for editing, one can think of ER as two distinct steps: identify which edits should take place (tagging process) and deciding which tokens should go in these positions (generative process). This decomposition is shown here: pθ(xt−1|xt) = ptagθ (et|xt)p gen θ (xt−1|xt,et) (5) where ptagθ parameterises the tagging model to estimate the likelihood of producing a given set of Levenshtein edit operations {INSERT ,DELETE ,KEEP ,REPLACE } given xt, and pgenθ parametersies the generator model given sequence xt and edit operations et. This decomposition via editoperations allows the generation process to be more controllable and more flexible as it allows up to explicitly specify edit types associated with tokens to be edited, rather than leaving both processes to be implicit. 3.4 IMPLEMENTING DIFFUSER WITH TRANSFORMERS When implemented with Transformers (Vaswani et al., 2017), DIFFUSER consists of two components: a tagger and generator. The tagger, a transformer network, is trained using cross-entropy loss over the ground-truth tag types to predict the edit operations that should be applied to the sequence, in preparation for the next generation step. Then, in the generation step, after removing tokens selected for deletion, we sum a learned embedding to insert and replace types and generate the inserted and replaced sequences autoregressively. 
Following this, we feed the output of this diffusion step into the tagger and perform another diffusion step. One step of this process can be compared to the reconstruction process used in Aghajanyan et al. (2022). 3.5 DECODING METHODS DIFFUSER has an inherently different generation process from a standard autoregressive language generation model—in addition to operating on a sequence/token level (in which generation is composed of generating individual tokens in a single-revision; intra-revision), we also operate on a revision level (in which the text is expanded across diffusion steps, inter-revision). This allows us to experiment with different methods for decoding on both the intra-revision (single sequence level) and inter-revision levels (multiple version level), which we explain below. Beam Search One method for decoding is to perform beam search over b hypotheses at every step on the output of our autoregressive generator (intra-revision level), while performing greedy decoding at the inter-revision level. Although being conceptually straightforward, this method has the limitation of not searching over the inter-revision space (despite revisions being a key component of our approach). 2D Beam Search We propose 2D beam search, in which we extend beam search as it is applied to token-level autoregressive generative models, and perform beam search using both an intra-revision width of b and an inter-revision beam width of r. This allows us to perform search on the interrevision level, which we find results in better downstream performance, but increases the beam count to r × b beams. Assuming a fixed sequence length and maximum number of diffusion steps, we would decode as follows: We first use beam search with width b at the token level and take the r most likely candidates (measured with log-likelihood). These r candidates are then fed to the next step of the diffusion model, wherein for each of r hypotheses the next diffusion step is performed with the token-level generator decoding with beam width of b. This leads us to have r× b candidate hypotheses, of which we take the top r. This process repeats for each diffusion step thereafter. Nucleus Sampling To improve the diversity of generations, we also consider a nucleus sampling based approach, where at every timestep xt, we use nucleus sampling (Holtzman et al., 2019) with p = 0.6 to sample each token autoregressively at the intra-revision level, and greedily decode at the inter-revision level (i.e. no search or sampling is performed over multiple diffusion steps). 3.6 DECODER INITIALIZATION TECHNIQUES Since our model is based on edit processes, it offers flexibility in terms of the discrete sequence from which to initialize the text generation. Previous work on non-autoregressive translation often starts with [MASK] tokens (Ghazvininejad et al., 2019), a null string (Gu et al., 2019) or random tokens (Savinov et al., 2022). We include the latter two methods in our experiments, in addition to (1) experimenting with an AR Bootstrap, in which we learn to bootstrap from text generated by a purely autoregressive model, and (2) proposing to use the source-side text as an initial state for the DIFFUSER decoder. Null Sequence In this setting, we simply initialize DIFFUSER with a null string, in which the first edit is constrained to be insertion. Random Tokens In this setting, we initialize DIFFUSER with a series of random tokens, following (Savinov et al., 2022). The model then learns to edit this random sequence. 
AR Bootstrap We bootstrap the reverse diffusion process by taking the output of DIFFUSER constrained to generate autoregressively (essentially mimicking a standard autoregressive generator). We then use DIFFUSER to further edit the output of this operation. Source Bootstrap In a sequence-to-sequence setting, we can also generate by bootstrapping using the source text, by setting xT to be equivalent to s. As we show in later sections, this is particularly useful in tasks such as summarization in which the output can be easily formulated as an editing version of the input. 4 EXPERIMENTS 4.1 MODELS DIFFUSER We instantiate DIFFUSER with two separate Transformer models for the tagger and generator. We use the Transformer-base encoder-decoder (Vaswani et al., 2017) architecture, with 6 layers, for the a hidden dimension of 512, feedforward dimension of 2048, 8 attention heads, and dropout p = 0.3. Baselines (MT & Summ) We use several Transformer baselines from previous literature for our various tasks. We include a conventional 6-layer encoder-decoder Transformer model from Vaswani et al. (2017), as well as models proposed in related work from the non-autoregressive generation literature: Levensthein Transformer (Gu et al., 2019), CMLM (Ghazvininejad et al., 2019), DisCo (Kasai et al., 2020a), Imputer (Saharia et al., 2020), and SUNDAE (Savinov et al., 2022). 4.2 TASKS Machine Translation We use the WMT’14 English-German dataset for our machine translation experiments. We use the same preprocessing and post-processing steps as Ghazvininejad et al. (2019). Unlike the standard in non-autoregressive translation work (Zhou et al., 2019), we focus on using the gold machine translation data instead of distilled data. We use a Poisson distribution El(λ = 3) over edit operation lengths in our corruption process. Note that we compute the edit operations over words rather than tokens. For this task, as well as the following ones, we use 12 diffusion steps, b = 5, and r = 3 for beam search, and Et(60% KEEP, 20% REPLACE, 10% INSERT, 10% DELETE) based on numbers from preliminary experiments. Summarization We also benchmark on the CNN/DailyMail dataset for summarization (Nallapati et al., 2016). Summarization is different in nature from machine translation in that it can be described as more conducive to edits as a good summary tends to preserve many parts of the input. We use the same post-processing steps as See et al. (2017). We use a Poisson distribution El(λ = 8) over edit operation lengths in our corruption process (to roughly model sentence boundaries). Text Style Transfer We perform experiments using the Yelp (Shen et al., 2017) dataset for the unsupervised text-style transfer task. We compare against methods such as Tag-and-Generate (Madaan et al., 2020), Masker (Malmi et al., 2020), and LEWIS (Reid & Zhong, 2021). In contrast with machine translation and summarization, text style transfer datasets are often unaligned (i.e. without source-target pairs) leading to the prominence of unsupervised text style transfer methods. We propose a method of performing unsupervised text style transfer using DIFFUSER, following the synthetic generation method in Reid & Zhong (2021). We train two separate, style-specific (e.g. positive and negative) DIFFUSER models on the style-specific data. We then perform transfer at test time, feeding text from each style into the model trained to edit in the opposite style (e.g. positive text → negative DIFFUSER model; negative text → positive DIFFUSER model). 
Following standard practice, we measure performance with BLEU, Self-BLEU and Accuracy (based on a classifier trained to disambiguate between different styles of text; we use the classifier from Reid & Zhong (2021)). 4.3 RESULTS Main Results We summarize our main results on both machine translation and summarization in Table 1. As can be seen, for both machine translation and summarization tasks, DIFFUSER, using 12 diffusion steps, outperforms all non-autoregressive baselines1 and rivals or outperforms the fully autoregressive model. Particularly interesting is how the various methods of initializing our model (i.e. AR Bootstrap and Source Bootstrap) can further improve performance well beyond the autoregressive baseline, depending on the task. We can see that for summarization, bootstrapping from the source input is more effective than bootstrapping from an abstractive autoregressive model. However, for both tasks, unlike many non-autoregressive methods, we show that DIFFUSER is complementary with token-level autoregressive methods and can be used naturally in conjunction with them. Style Transfer Results We also perform unsupervised text style transfer using our DIFFUSER models using the Yelp (Shen et al., 2017) dataset. The results can be seen in Table 2. We show that even without task-specific techniques (such as synthetic data generation and classifier based stylespecific token identification), we still have competitive performance with state of the art methods. 4.4 ANALYSIS 1We were not able to reproduce the published results of the Levenshtein Transformer using their code, hence our reported BLEU score of 23.7 is slightly lower than that of 25.2 reported in Gu et al. (2019) We perform additional analyses on DIFFUSER, specifically focusing on the decoding method, the number of iterations versus the final BLEU score, and also a qualitative analysis of how text changes at every step. Decoding Method Ablation We perform an ablation of the decoding method, using DIFFUSER for 12 steps (as used in our main results) and showing results when comparing greedy decoding, (1D) beam search, nucleus decoding, and 2D beam search. We show that 2D-beam search tends to perform the best, likely because it searches over multiple diffusion steps, while other methods (greedy, beam, nucleus) are still competitive. Number of Edit Steps versus Performance We perform an analysis where we compare the number of timesteps in our denoising diffusion process and the final BLEU score on WMT’14 En-De when using 2D-Beam Search and random token initialization in Figure 4. Here it can be seen that most performance gains are in the initial diffusion timesteps (0-10), with diminishing gains (for machine translation) or gradual losses (for summarization) between 10 and 30, after which performance marginally decreases towards 60 steps. How does text change every step? We include a qualitative sample from our DIFFUSER summarization model (Table 4). We find that DIFFUSER learns edit processes intuitive to the task at hand: namely largely deleting portions and making minor edits to the remaining text (similar to how a human may perform summarization given a news article). Time comparsion between decoding methods We also measure the impact of the various decoding algorithms we used with results shown in Figure 3. 
Beam search and 2D-Beam Search performs significantly slower than greedy and nucleus sampling, demonstrating the potential for improved decoding algorithms tailored for improving the trade-off between efficiency and accuracy in diffusion models. 5 RELATED WORK Non-Autoregressive Generation Work in machine translation has explored non/semiautoregressive generation (Gu et al., 2017; Lee et al., 2018), which often includes an iterative refinement step (Lee et al., 2018; Ghazvininejad et al., 2019; Kasai et al., 2020a; Gu et al., 2019). Previous methods in this space are often highly specialized underperform non-autoregressive methods due to the constraints imposed on generation for efficiency. This being said, Kasai et al. (2020b) demonstrated that non-autoregressive models are actually comparable in speed when using a larger batch size instead of 1. Our method allows us to hone in on the notion of iterative refinement by way of editing processes, and is also relatively general, allowing us to combine DIFFUSER with standard autoregressive models. Learning Properties of Edits Previous work has also looked at studying or exploiting the properties of edits. This was initially worked on in the context of vector representation learning of edits (Yin et al., 2019; Marrese-Taylor et al., 2021). Concurrently, a line of work has used edits for specific tasks such as sentence fusion, style transfer and grammatical error correction (Malmi et al., 2019; 2020; Reid & Zhong, 2021; Omelianchuk et al., 2020). Recent work has proposed editing processes (Reid & Neubig, 2022), in which document generation is looked at through the lens of its revision history, rather than just at a token level. We take inspiration from this work and devise a process by which arbitrary text generation tasks can be fitted into this framework. 6 CONCLUSIONS We proposed DIFFUSER, an diffusion-based generative model for text using edits. DIFFUSER shows improvements across the tasks considered (machine translation, summarization, style transfer), with improved generative flexibility via incremental text improvement, and compatibility with standard autoregressive models. We hope that DIFFUSER with spur research on edit-based generative models, with further potentials including how we can leverage edits to ensemble models (regardless of parameter count) in the discrete space. ACKNOWLEDGEMENTS We thank Armen Aghajanyan, Daniel Fried, Edison Marrese-Taylor, Eric Wallace, and Luke Zettlemoyer for their helpful comments in early discussions. We thank Ari Holtzman, Jungo Kasai, Aman Madaan, and Eric Wallace for feedback and proofreading the draft of this paper.
1. What is the main contribution of the paper, and how does it build upon prior work in iterative text generation models and diffusion models? 2. What are the strengths and weaknesses of the proposed Diffuser model, particularly regarding its ability to refine text and be bootstrapped by autoregressive translation or summarization? 3. How does the paper demonstrate competitive performance in machine translation and summarization tasks, and what advantages does the Diffuser model offer over other task-specific models? 4. What additional supplementary material could enhance the paper's depth and provide a more balanced presentation of the approach's strengths and limitations? 5. How might the model be modified or manipulated to exercise more flexible control over generation, and what ablations or demonstrations could showcase this capability? 6. How does the paper situate the Diffuser model within the context of previous related work, and what comparisons or discussions could help readers better understand the similarities and differences between these approaches?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper Motivated by how humans revise content and by the success of diffusion models with continuous inputs, the authors propose a generative model of text based on multiple editing steps, with each step based on one or more text-span editing operations (insert, delete, replace, keep). The generative model is trained to invert, step by step, the random edits of a "text diffusion process", which is specified by a prior over edits and edit length. Generation at each step is further decomposed into a model for generating the edit operations to apply at each position of the current input text, and a model for generating text given the input and the selected edit operations, similarly to many existing edit-based generation approaches. Results on machine translation (WMT'14) and summarization (CNN/DailyMail) show competitive performance (Table 1). An advantage of the proposed Diffuser model is that it can refine text, and so it can be bootstrapped by an autoregressive translation or by the source to be summarized, which boosts performance (Table 1). The authors also show that by simply training 2 models for positive and negative sentiment Yelp reviews and then using them to transform the input text to the target sentiment, performance is competitive with SOTA task-specific models, which is impressive. Some ablations around decoding method vs. speed and performance and seed text type are also included. Strengths And Weaknesses Strengths The model is intuitively appealing, and the paper in general is well written. The experiments demonstrate competitive performance, and the advantages of being able to bootstrap and refine results. Limitations While the paper is well written and establishes the approach well, it could benefit tremendously from some additional supplementary material to give it more depth, as discussed in the following points. The work is well executed and the model intuitive, but, in context, it is an obvious next step given current work on iterative text generation models and the recent success of diffusion models. More credit should be given to previous related work that establishes similar reconstruction and corruption processes, similar two-stage editing, and similar decoding processes. These are not novel components of this paper, contrary to what the current manuscript suggests. Related, I'd like the authors, to the extent possible, to discuss the similarities/differences and advantages/disadvantages relative to similar text editing models, such as the Levenshtein Transformer. Why would it not perform as well as Diffuser? More flexible control over generation is claimed as an advantage of the model, but this is never really demonstrated or exercised. While I can imagine variations of the model that could deliver this, the models investigated are trained and utilized end-to-end, with intermediate generations that have no notion of validity associated with them, and internal decisions that, if manipulated, would likely degrade performance. Related, investigating the generalization of, and the effects of manipulating, the editing priors at test time to exercise some control and/or diversify outputs would be an interesting ablation/demonstration. Statements like "editing processes can also be used to calculate the probability of only the final document while taking into account previous revisions, which is not possible in the traditional text generation setup" are misleading.
Under the model, intermediate revisions are meaningless, and conventional LMs can evaluate likelihood in a single pass (i.e., having to go through a revision process for likelihood evaluation is a disadvantage of the model). Generally speaking, a more balanced and frank presentation of the strengths and limitations of the approach, properly situated in context with previous work, would improve the paper significantly. Equation 5 has errors; please correct. Clarity, Quality, Novelty And Reproducibility See S&W section.
ICLR
Title DiffusER: Diffusion via Edit-based Reconstruction Abstract In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm. Despite being performant, these models lack the ability to revise existing text, which limits their usability in many practical scenarios. We look to address this with DIFFUSER (Diffusion via Edit-based Reconstruction), a new edit-based generative model for text based on denoising diffusion models – a class of models that use a Markov chain of denoising steps to incrementally generate data. DIFFUSER is not only a strong generative model in general, rivalling autoregressive models on several tasks spanning machine translation, summarization, and style transfer; it can also perform other varieties of generation that standard autoregressive models are not well-suited for. For instance, we demonstrate that DIFFUSER makes it possible for a user to condition generation on a prototype, or an incomplete sequence, and continue revising based on previous edit steps. 1 INTRODUCTION Revision and editing are central to how humans produce content; we write and revise emails and papers, gradually produce works of art, and iterate on plans for a project. Despite this, the most dominant paradigm in text generation is purely autoregressive, producing text left-to-right in a single pass (Bengio et al., 2003). Although models employing this single-pass form of generation are highly performant, they are limited by the inability to refine existing text. To address this, we propose DIFFUSER: Diffusion via Edit-based Reconstruction, a flexible method to apply edit-based generative processes to arbitrary text generation tasks. Specifically, we take inspiration from diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), generative models that generate by way of incremental denoising steps, and adapt this approach to the text generation paradigm with a formulation similar to natural editing processes. Prior work on text generation either focuses on improving the performance of standard autoregressive (AR) models through larger models and datasets (Vaswani et al., 2017; Sutskever et al., 2014; Radford et al.; Brown et al., 2020) or on proposing new, non-autoregressive approaches (Gu et al., 2017; Ghazvininejad et al., 2019; Gu et al., 2019) to improve general modes of text generation. A thus far separate line of models has taken the perspective of modeling text edits for specific tasks: e.g. style transfer (Reid & Zhong, 2021; Malmi et al., 2020), sentence fusion (Malmi et al., 2019), and grammatical error correction (Dale & Kilgarriff, 2011). DIFFUSER unifies these two perspectives by enabling edit processes to be applied to general purpose text generation without compromising performance or requiring external supervised data (Guu et al., 2018). This design enables it to both generate and edit text, including externally produced content, a natural extension of the text generation paradigm. DIFFUSER models text generation as a series of diffusion steps at the token level. This form of generation allows us to develop a synthetic formulation of natural editing processes (Reid & Neubig, 2022) using edit-based corruption and reconstruction.
Our method starts from an arbitrary sequence (either a prototype generation, randomly sampled tokens, or a null sequence) and progressively edits it into the final sequence guided by the Levenshtein edit operations of INSERT, DELETE, KEEP, and REPLACE as shown in Figure 1. This enables flexible editing in a range of contexts, including machine translation, summarization, and style transfer, while also allowing for the possibility of taking outside input to guide and constrain generation. Learning these edit-based diffusion processes required several innovations over standard autoregressive and MLM-style iterative generation approaches (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022), including forming edit-based corruption and reconstruction processes for training (Sec 3), as well as techniques to improve the quality of decoded sequences across both timesteps and token-level generations (including 2D beam search; Sec 3.6, Sec 3.5). To demonstrate the effectiveness of DIFFUSER, we test our method on three text generation tasks: machine translation, abstractive summarization, and text style transfer, and show on-par or improved performance compared to purely autoregressive, single-pass and non-autoregressive methods. We also provide qualitative samples of the edit processes learned by the models in different settings and analyses of training and inference speeds, as well as the relationship between edit steps and performance. Overall, we demonstrate the potential of edit-based generative models to offer 1) more performant generation, 2) greater interactivity between different models (as we can now perform edits in the discrete space on model-generated output), and 3) more flexible/controllable generation. 2 BACKGROUND DIFFUSER operates at the intersection of text generation, editing processes, and diffusion models. We first provide the background and intuition for these three techniques. 2.1 TEXT GENERATION Most text generation models used in NLP today are autoregressive in nature. In this paradigm, given a sequence s = [s0, s1, . . . , sN ], one can model the likelihood of the entire sequence P(s) by modeling the probability of predicting each token in an autoregressive, often left-to-right, manner. This formulation, where the likelihood of a token p(st) is conditioned on its predecessors s<t, is shown below (Bengio et al., 2003): $P(\mathbf{s}) = \prod_{t=0}^{N} p(s_t \mid s_{t-1}, s_{t-2}, \ldots, s_0)$ (1) Models trained with this objective can then be sampled from, or searched over (e.g. using beam search), to provide generations in downstream tasks such as machine translation or summarization. Non-autoregressive models (Gu et al., 2017) are a different variety of generative models, in which a sequence is generated in a single pass (removing the autoregressive conditioning on previously generated tokens) with multiple revision-level passes, often in the name of efficiency. 2.2 EDITING PROCESSES Editing processes (Reid & Neubig, 2022) are a paradigm for modeling text by way of incremental revisions, taking inspiration from the way humans generate text. Specifically, let X = {x0, x1, . . . , xR} be a series of R versions of a document, where $\mathbf{x}_0$, $\mathbf{x}_i$, and $\mathbf{x}_R$ represent the initial, intermediate (at revision step $i$), and final/current states of the document, respectively.
Using editing processes, we can model the probability of this series of document versions occurring consecutively as follows: $p(X) = \prod_{i=0}^{R} p(\mathbf{x}_i \mid \mathbf{x}_0^{i-1})$ (2) With this formulation, editing processes can also be used to calculate the probability of only the final document while taking into account previous revisions, which is not possible in the traditional text generation setup as intermediate revisions are not explicitly known, using the equation below (Reid & Neubig, 2022): $p(\mathbf{x}_R) = \sum_{\tilde{X} \in \{\tilde{\mathbf{x}}_0^R \,\mid\, \tilde{\mathbf{x}}_R = \mathbf{x}_R\}} p(\tilde{X})$ (3) 2.3 DIFFUSION MODELS We now make the connection between editing processes and diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020). Continuous diffusion processes are commonly applied in computer vision tasks to iteratively convert a sample of noise into an image. This can be seen as an edit process in which the model iteratively edits a noisy image to bring it closer to a final, complete image. These continuous diffusion models are often trained by modeling a Markov chain $\mathbf{x}_T, \ldots, \mathbf{x}_t, \ldots, \mathbf{x}_0$, where $\mathbf{x}_0$ represents the original image and $\mathbf{x}_T$ represents Gaussian noise. This chain is typically produced by incrementally adding Gaussian noise to $\mathbf{x}_t$ to form $\mathbf{x}_{t+1}$ (known as the forward or corruption process), and a model parameterised by $p_\theta$ is trained to reverse (or "denoise") this process to form the chain $\prod_{t=1}^{T} p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)$. Analogized to text, this allows us to formulate natural edit processes as a discrete diffusion process in which a null string or a prototype is iteratively edited into free-form text. Our DIFFUSER method (Figure 1) takes inspiration from this process, but parameterises the corruption process by way of sampled discrete edit operations applied over a discrete sequence of tokens. The success of our method supports the findings in the vision domain of Bansal et al. (2022), where it is found that diffusion models can learn to invert arbitrary transformations. Previous work on diffusion models has largely focused on computer vision (Ho et al., 2020; Austin et al., 2021), in which the diffusion process is applied to raw image values. Within the context of natural language, both discrete diffusion models using only replacement operations (either applied to random tokens or masked tokens) (Savinov et al., 2022; Austin et al., 2021) and continuous diffusion over word embeddings (Li et al., 2022) have been proposed. Owing to its edit-process formulation, our model is a more flexible approach to diffusion models than this prior work, using all four edit operations, and is also more compatible with current models (e.g. AR bootstrapping). 3 DIFFUSER DIFFUSER, being a diffusion-based method, has two main procedures: corruption and denoising. Unlike previous work (Ghazvininejad et al., 2019; Savinov et al., 2022; Gu et al., 2019) in which this procedure is relatively inflexible (e.g., due to length restrictions and/or using continuous representations for the basis of the diffusion process), both our corruption process and our denoising process are based on Levenshtein operations, allowing our model to learn to take advantage of the flexibility of text editing when generating. 3.1 EDIT OPERATIONS Given the central role of the Levenshtein edit operations in our models, we provide a brief overview of each operation and its role in the editing process. We use Figure 1 as a guide when explaining each operation. INSERT: The insertion operation is used to add new text to a sequence.
For example, in Figure 1, "uses editing processes" is added by DiffusER at timestep $\mathbf{x}_{T-2}$. DELETE: The deletion operation erases existing text. In Figure 1, this is shown when "These" gets deleted at timestep $\mathbf{x}_{T-2} \rightarrow \mathbf{x}_{T-3}$. REPLACE: The replacement operation works by overwriting existing text with new text. This is shown in Figure 1 at step $\mathbf{x}_T \rightarrow \mathbf{x}_{T-1}$, where "filter Toronto guilty trough feel" is replaced by "These model guilty named DiffusER". KEEP: The keep operation ensures that a portion of the text remains unchanged into the next iteration. This is illustrated in timestep $\mathbf{x}_{T-2} \rightarrow \mathbf{x}_{T-3}$, where "model named DiffusER" is kept. 3.2 EDIT-BASED CORRUPTION The four Levenshtein edit operations described above allow us to transform any arbitrary sequence of tokens into another. This is in contrast to iterative mask replacement, which can only introduce new tokens (Ghazvininejad et al., 2019; Austin et al., 2021; Savinov et al., 2022). For every timestep $i$, the corruption process $q(\mathbf{x}_i \mid \mathbf{x}_{i-1}; E_t, E_l)$ is parameterized by two distributions: the distribution over edit types $E_t$ (e.g. 60% keep, 20% replace, 10% delete, 10% insert), and the distribution over edit length $E_l$. The latter can be parameterized by any distribution over non-negative integers, such as a uniform distribution or a Poisson distribution. For instance, to learn a deletion operation in the reconstruction process, we insert randomly sampled distractor tokens, whereas to learn an insertion operation we delete a subset of tokens contained in the sequence. 3.3 EDIT-BASED RECONSTRUCTION Our generative process is trained via the Edit-based Reconstruction (ER) process. ER can be thought of as the opposite of our corruption process, in which we need to find the appropriate edit operations to transform $\mathbf{x}_T$ to $\mathbf{x}_0$, by way of $\mathbf{x}_{T-1}, \ldots, \mathbf{x}_1$. That is, given a corrupted sequence $\mathbf{x}_T$, we aim to learn the process by which we can reverse the corruption in the following form: $P_\theta(\mathbf{x}_0) = \prod_{t=1}^{T} p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)$ (4) Given that we model the likelihood of each timestep $\mathbf{x}_t$, this can also be referred to as an edit process (Reid & Neubig, 2022). As we include an edit process in our model and use Levenshtein tags for editing, one can think of ER as two distinct steps: identifying which edits should take place (tagging process) and deciding which tokens should go in these positions (generative process). This decomposition is shown here: $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = p^{\text{tag}}_\theta(\mathbf{e}_t \mid \mathbf{x}_t)\, p^{\text{gen}}_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{e}_t)$ (5) where $p^{\text{tag}}_\theta$ parameterises the tagging model that estimates the likelihood of producing a given set of Levenshtein edit operations {INSERT, DELETE, KEEP, REPLACE} given $\mathbf{x}_t$, and $p^{\text{gen}}_\theta$ parameterises the generator model given the sequence $\mathbf{x}_t$ and edit operations $\mathbf{e}_t$. This decomposition via edit operations allows the generation process to be more controllable and more flexible, as it allows us to explicitly specify the edit types associated with the tokens to be edited, rather than leaving both processes implicit. 3.4 IMPLEMENTING DIFFUSER WITH TRANSFORMERS When implemented with Transformers (Vaswani et al., 2017), DIFFUSER consists of two components: a tagger and a generator. The tagger, a transformer network, is trained using cross-entropy loss over the ground-truth tag types to predict the edit operations that should be applied to the sequence, in preparation for the next generation step. Then, in the generation step, after removing tokens selected for deletion, we sum a learned embedding to insert and replace types and generate the inserted and replaced sequences autoregressively.
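To make the corruption side of this training setup concrete, the sketch below (Python) applies randomly sampled Levenshtein corruptions to a word-level sequence, using the edit-type distribution $E_t$ and the Poisson edit-length distribution $E_l$ reported for the experiments in Section 4.2. The distractor vocabulary, the use of a single edit per corruption step, and the position sampling are illustrative assumptions rather than details taken from the paper.

```python
import random
import numpy as np

# Hypothetical distractor vocabulary; the paper samples random tokens but does not
# specify their source, so this small list is purely illustrative.
VOCAB = ["the", "a", "model", "text", "editing", "process", "diffusion", "step"]

EDIT_TYPES = ["KEEP", "REPLACE", "INSERT", "DELETE"]
EDIT_PROBS = [0.6, 0.2, 0.1, 0.1]   # E_t used in the experiments (Section 4.2)
POISSON_LAMBDA = 3                  # E_l for machine translation (Section 4.2)


def corrupt_once(tokens):
    """Apply one sampled edit operation to a word-level token sequence.

    Corrupting with an operation creates supervision for its inverse: inserting
    distractor tokens teaches DELETE, deleting a span teaches INSERT, and
    overwriting a span teaches REPLACE.
    """
    op = random.choices(EDIT_TYPES, weights=EDIT_PROBS, k=1)[0]
    length = max(1, np.random.poisson(POISSON_LAMBDA))
    pos = random.randrange(len(tokens) + 1) if tokens else 0

    if op == "KEEP":
        return tokens
    if op == "DELETE":   # insert distractors so the model learns to delete them
        distractors = [random.choice(VOCAB) for _ in range(length)]
        return tokens[:pos] + distractors + tokens[pos:]
    if op == "INSERT":   # delete a span so the model learns to re-insert it
        return tokens[:pos] + tokens[pos + length:]
    # REPLACE: overwrite a span with random tokens
    replacement = [random.choice(VOCAB) for _ in range(length)]
    return tokens[:pos] + replacement + tokens[pos + length:]


def corruption_chain(x0, num_steps):
    """Produce a chain x_0 -> x_1 -> ... -> x_T; the model is trained to reverse it."""
    chain = [list(x0)]
    for _ in range(num_steps):
        chain.append(corrupt_once(chain[-1]))
    return chain


if __name__ == "__main__":
    sentence = "a model named DiffusER uses editing processes".split()
    for t, x in enumerate(corruption_chain(sentence, 4)):
        print(f"x_{t}: {' '.join(x)}")
```

At generation time the direction is reversed: each denoising step applies the tagger to the current sequence and then lets the generator realize the predicted insertions and replacements.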
Following this, we feed the output of this diffusion step into the tagger and perform another diffusion step. One step of this process can be compared to the reconstruction process used in Aghajanyan et al. (2022). 3.5 DECODING METHODS DIFFUSER has an inherently different generation process from a standard autoregressive language generation model—in addition to operating on a sequence/token level (in which generation is composed of generating individual tokens in a single-revision; intra-revision), we also operate on a revision level (in which the text is expanded across diffusion steps, inter-revision). This allows us to experiment with different methods for decoding on both the intra-revision (single sequence level) and inter-revision levels (multiple version level), which we explain below. Beam Search One method for decoding is to perform beam search over b hypotheses at every step on the output of our autoregressive generator (intra-revision level), while performing greedy decoding at the inter-revision level. Although being conceptually straightforward, this method has the limitation of not searching over the inter-revision space (despite revisions being a key component of our approach). 2D Beam Search We propose 2D beam search, in which we extend beam search as it is applied to token-level autoregressive generative models, and perform beam search using both an intra-revision width of b and an inter-revision beam width of r. This allows us to perform search on the interrevision level, which we find results in better downstream performance, but increases the beam count to r × b beams. Assuming a fixed sequence length and maximum number of diffusion steps, we would decode as follows: We first use beam search with width b at the token level and take the r most likely candidates (measured with log-likelihood). These r candidates are then fed to the next step of the diffusion model, wherein for each of r hypotheses the next diffusion step is performed with the token-level generator decoding with beam width of b. This leads us to have r× b candidate hypotheses, of which we take the top r. This process repeats for each diffusion step thereafter. Nucleus Sampling To improve the diversity of generations, we also consider a nucleus sampling based approach, where at every timestep xt, we use nucleus sampling (Holtzman et al., 2019) with p = 0.6 to sample each token autoregressively at the intra-revision level, and greedily decode at the inter-revision level (i.e. no search or sampling is performed over multiple diffusion steps). 3.6 DECODER INITIALIZATION TECHNIQUES Since our model is based on edit processes, it offers flexibility in terms of the discrete sequence from which to initialize the text generation. Previous work on non-autoregressive translation often starts with [MASK] tokens (Ghazvininejad et al., 2019), a null string (Gu et al., 2019) or random tokens (Savinov et al., 2022). We include the latter two methods in our experiments, in addition to (1) experimenting with an AR Bootstrap, in which we learn to bootstrap from text generated by a purely autoregressive model, and (2) proposing to use the source-side text as an initial state for the DIFFUSER decoder. Null Sequence In this setting, we simply initialize DIFFUSER with a null string, in which the first edit is constrained to be insertion. Random Tokens In this setting, we initialize DIFFUSER with a series of random tokens, following (Savinov et al., 2022). The model then learns to edit this random sequence. 
AR Bootstrap We bootstrap the reverse diffusion process by taking the output of DIFFUSER constrained to generate autoregressively (essentially mimicking a standard autoregressive generator). We then use DIFFUSER to further edit the output of this operation. Source Bootstrap In a sequence-to-sequence setting, we can also generate by bootstrapping using the source text, by setting xT to be equivalent to s. As we show in later sections, this is particularly useful in tasks such as summarization in which the output can be easily formulated as an editing version of the input. 4 EXPERIMENTS 4.1 MODELS DIFFUSER We instantiate DIFFUSER with two separate Transformer models for the tagger and generator. We use the Transformer-base encoder-decoder (Vaswani et al., 2017) architecture, with 6 layers, for the a hidden dimension of 512, feedforward dimension of 2048, 8 attention heads, and dropout p = 0.3. Baselines (MT & Summ) We use several Transformer baselines from previous literature for our various tasks. We include a conventional 6-layer encoder-decoder Transformer model from Vaswani et al. (2017), as well as models proposed in related work from the non-autoregressive generation literature: Levensthein Transformer (Gu et al., 2019), CMLM (Ghazvininejad et al., 2019), DisCo (Kasai et al., 2020a), Imputer (Saharia et al., 2020), and SUNDAE (Savinov et al., 2022). 4.2 TASKS Machine Translation We use the WMT’14 English-German dataset for our machine translation experiments. We use the same preprocessing and post-processing steps as Ghazvininejad et al. (2019). Unlike the standard in non-autoregressive translation work (Zhou et al., 2019), we focus on using the gold machine translation data instead of distilled data. We use a Poisson distribution El(λ = 3) over edit operation lengths in our corruption process. Note that we compute the edit operations over words rather than tokens. For this task, as well as the following ones, we use 12 diffusion steps, b = 5, and r = 3 for beam search, and Et(60% KEEP, 20% REPLACE, 10% INSERT, 10% DELETE) based on numbers from preliminary experiments. Summarization We also benchmark on the CNN/DailyMail dataset for summarization (Nallapati et al., 2016). Summarization is different in nature from machine translation in that it can be described as more conducive to edits as a good summary tends to preserve many parts of the input. We use the same post-processing steps as See et al. (2017). We use a Poisson distribution El(λ = 8) over edit operation lengths in our corruption process (to roughly model sentence boundaries). Text Style Transfer We perform experiments using the Yelp (Shen et al., 2017) dataset for the unsupervised text-style transfer task. We compare against methods such as Tag-and-Generate (Madaan et al., 2020), Masker (Malmi et al., 2020), and LEWIS (Reid & Zhong, 2021). In contrast with machine translation and summarization, text style transfer datasets are often unaligned (i.e. without source-target pairs) leading to the prominence of unsupervised text style transfer methods. We propose a method of performing unsupervised text style transfer using DIFFUSER, following the synthetic generation method in Reid & Zhong (2021). We train two separate, style-specific (e.g. positive and negative) DIFFUSER models on the style-specific data. We then perform transfer at test time, feeding text from each style into the model trained to edit in the opposite style (e.g. positive text → negative DIFFUSER model; negative text → positive DIFFUSER model). 
Following standard practice, we measure performance with BLEU, Self-BLEU, and Accuracy (based on a classifier trained to disambiguate between different styles of text; we use the classifier from Reid & Zhong (2021)). 4.3 RESULTS Main Results We summarize our main results on both machine translation and summarization in Table 1. As can be seen, for both machine translation and summarization tasks, DIFFUSER, using 12 diffusion steps, outperforms all non-autoregressive baselines¹ and rivals or outperforms the fully autoregressive model. Particularly interesting is how the various methods of initializing our model (i.e. AR Bootstrap and Source Bootstrap) can further improve performance well beyond the autoregressive baseline, depending on the task. We can see that for summarization, bootstrapping from the source input is more effective than bootstrapping from an abstractive autoregressive model. However, for both tasks, unlike many non-autoregressive methods, we show that DIFFUSER is complementary with token-level autoregressive methods and can be used naturally in conjunction with them. (¹We were not able to reproduce the published results of the Levenshtein Transformer using their code, hence our reported BLEU score of 23.7 is slightly lower than the 25.2 reported in Gu et al. (2019).) Style Transfer Results We also perform unsupervised text style transfer with our DIFFUSER models on the Yelp (Shen et al., 2017) dataset. The results can be seen in Table 2. We show that even without task-specific techniques (such as synthetic data generation and classifier-based style-specific token identification), we still achieve competitive performance with state-of-the-art methods. 4.4 ANALYSIS We perform additional analyses on DIFFUSER, specifically focusing on the decoding method, the number of iterations versus the final BLEU score, and a qualitative analysis of how text changes at every step. Decoding Method Ablation We perform an ablation of the decoding method, using DIFFUSER for 12 steps (as used in our main results) and showing results when comparing greedy decoding, (1D) beam search, nucleus decoding, and 2D beam search. We show that 2D beam search tends to perform the best, likely because it searches over multiple diffusion steps, while the other methods (greedy, beam, nucleus) are still competitive. Number of Edit Steps versus Performance We perform an analysis where we compare the number of timesteps in our denoising diffusion process and the final BLEU score on WMT'14 En-De when using 2D beam search and random token initialization in Figure 4. Here it can be seen that most performance gains are in the initial diffusion timesteps (0-10), with diminishing gains (for machine translation) or gradual losses (for summarization) between 10 and 30, after which performance marginally decreases towards 60 steps. How does text change every step? We include a qualitative sample from our DIFFUSER summarization model (Table 4). We find that DIFFUSER learns edit processes intuitive to the task at hand: namely, largely deleting portions and making minor edits to the remaining text (similar to how a human may perform summarization given a news article). Time comparison between decoding methods We also measure the impact of the various decoding algorithms we used, with results shown in Figure 3.
Beam search and 2D beam search perform significantly more slowly than greedy decoding and nucleus sampling, demonstrating the potential for improved decoding algorithms tailored to improving the trade-off between efficiency and accuracy in diffusion models. 5 RELATED WORK Non-Autoregressive Generation Work in machine translation has explored non/semi-autoregressive generation (Gu et al., 2017; Lee et al., 2018), which often includes an iterative refinement step (Lee et al., 2018; Ghazvininejad et al., 2019; Kasai et al., 2020a; Gu et al., 2019). Previous methods in this space are often highly specialized and underperform autoregressive methods due to the constraints imposed on generation for efficiency. This being said, Kasai et al. (2020b) demonstrated that non-autoregressive models are actually comparable in speed when using a larger batch size instead of 1. Our method allows us to hone in on the notion of iterative refinement by way of editing processes, and is also relatively general, allowing us to combine DIFFUSER with standard autoregressive models. Learning Properties of Edits Previous work has also looked at studying or exploiting the properties of edits. This was initially worked on in the context of vector representation learning of edits (Yin et al., 2019; Marrese-Taylor et al., 2021). Concurrently, a line of work has used edits for specific tasks such as sentence fusion, style transfer, and grammatical error correction (Malmi et al., 2019; 2020; Reid & Zhong, 2021; Omelianchuk et al., 2020). Recent work has proposed editing processes (Reid & Neubig, 2022), in which document generation is looked at through the lens of its revision history, rather than just at a token level. We take inspiration from this work and devise a process by which arbitrary text generation tasks can be fitted into this framework. 6 CONCLUSIONS We proposed DIFFUSER, a diffusion-based generative model for text using edits. DIFFUSER shows improvements across the tasks considered (machine translation, summarization, style transfer), with improved generative flexibility via incremental text improvement, and compatibility with standard autoregressive models. We hope that DIFFUSER will spur research on edit-based generative models, with further potential directions including how we can leverage edits to ensemble models (regardless of parameter count) in the discrete space. ACKNOWLEDGEMENTS We thank Armen Aghajanyan, Daniel Fried, Edison Marrese-Taylor, Eric Wallace, and Luke Zettlemoyer for their helpful comments in early discussions. We thank Ari Holtzman, Jungo Kasai, Aman Madaan, and Eric Wallace for feedback and proofreading the draft of this paper.
1. What is the focus and contribution of the paper on text generative tasks?
2. What are the strengths and weaknesses of the proposed DIFFUSER model?
3. Do you have any concerns or questions regarding the model's approach, experimental settings, or explanations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper The paper proposes DIFFUSER, a denoising diffusion model for text generative tasks. It treats text generation as a Markov chain of Levenshtein edit steps to denoise from the initial text. An editing step is modeled as an editing process in existing work (Reid & Neubig). The contribution of the paper is mainly 1) using the more flexible edit operations in diffusion models; 2) more thoughtful bootstrapping from autoregressive output or source text as initial text sequence; 3) employing 2D-beam search during decoding. Strengths And Weaknesses Pros: Well motivated, provided valuable exploration in the recent popular direction of diffusion models in NLP area. Very effective diffusion approach in NLP by adopting flexible editing operations. Strong results in experiments on multiple tasks and datasets, compared against strong baselines including recent literature on non-auto-regressive generation models. Effective techniques improved on top of the diffusion approach, including bootstrapping and 2D-beam. Ablation and analysis are also provided. Cons: There is novelty in the approach, but it is not considered significant, since there is already existing work on NLP diffusion models as described in Sec 2.3. The description of models and experiment settings is not clear enough. Table 4 could be moved to Appendix to make room for clearer and more detailed descriptions and explanations. Since the tasks are sequence to sequence generation, the Diffuser model can be looked on as the decoder, should there also be an encoder to take in the input text, so that the decoder can condition the output on the input? This was not described anywhere in the paper. How many diffusion steps, and what b and r in Summarization and Text Style Transfer tasks? Equation (3) is not explained clearly, including some symbols not annotated. There is no description of the Accuracy metric in Table 2. It would be helpful if some experiments and analysis are included for the number of diffusion steps in training. Numbers of training time and decoding time compared to baselines would be helpful, together with some discussion on the trade-off of efficiency and efficacy. Minor edit: Page 8: "comparsion" -> "comparison" Clarity, Quality, Novelty And Reproducibility The main approaches and contributions of the paper is clearly presented and well organized. However, more clarity is needed on some details in the descriptions of models and experiments. It did not affect the claim of the paper, though. The approaches and claims are solid and sound, backed up by experiment results. There is novelty in the approach, but it is not considered significant, since there are already existing work on NLP diffusion models as described in Sec 2.3. The authors stated that "Code and data to reproduce experiments will be released".
ICLR
Title Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples Abstract Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to classify states as successful or not. We argue that, with appropriate representation and regularization, such a classifier can guide a reinforcement learning algorithm to an effective solution. However, as we will show, this requires the classifier to make uncertainty-aware predictions that are very difficult with standard deep networks. To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood distribution. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions from data, while also being able to guide algorithms towards the specified goal more effectively. We show how using amortized normalized maximum likelihood for reward inference is able to provide effective reward guidance for solving a number of challenging navigation and robotic manipulation tasks which prove difficult for other algorithms. 1 INTRODUCTION While reinforcement learning (RL) has been shown to successfully solve problems with careful reward design (Rajeswaran et al., 2018; OpenAI et al., 2019), RL in its most general form, with no assumptions on the dynamics or reward function, requires solving a challenging uninformed search problem in which rewards are sparsely observed. Techniques which explicitly provide “rewardshaping” (Ng et al., 1999), or modify the reward function to guide learning, can help take some of the burden off of exploration, but shaped rewards can be difficult to obtain without domain knowledge. In this paper, we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective. Instead of requiring a reward function designed for an objective, our method instead assumes a user-provided set of successful outcome examples: states in which the desired task has been accomplished successfully. The algorithm aims to estimate the distribution over these states and maximize the probability of reaching states that are likely under the distribution. Prior work on learning from success examples (Fu et al., 2018b; Zhu et al., 2020) focused primarily on alleviating the need for manual reward design. In our work, we focus on the potential for this mode of task specification to produce more tractable RL problems and solve more challenging classes of tasks. Intuitively, when provided with explicit examples of successful states, the RL algorithm should be able to direct its exploration, rather than simply hope to randomly chance upon high reward states. The main challenge in instantiating this idea into a practical algorithm is performing appropriate uncertainty quantification in estimating whether a given state corresponds to a successful outcome. 
Our approach trains a classifier to distinguish successful states, provided by the user, from those generated by the current policy, analogously to generative adversarial networks (Goodfellow et al., 2014) and previously proposed methods for inverse reinforcement learning (Fu et al., 2018a). In general, such a classifier is not guaranteed to provide a good optimization landscape for learning the policy. We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood (NML) distribution produces better reward guidance for learning. We also connect our approach to count-based exploration methods, showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states, while also discussing how it improves over count-based exploration in the presence of good generalization. We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML, and show how this form of reward inference allows us to solve difficult problems more efficiently, providing experimental results which outperform existing algorithms on a number of navigation and robotic manipulation domains. 2 RELATED WORK A number of techniques have been proposed to improve exploration.These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner (Wiering and Schmidhuber, 1998; Auer et al., 2002; Schaul et al., 2011; Houthooft et al., 2016; Pathak et al., 2017; Tang et al., 2017; Stadie et al., 2015; Bellemare et al., 2016; Burda et al., 2018a; O’Donoghue, 2018) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions (Strens, 2000; Osband et al., 2013; 2016). While these techniques are uninformed about the actual task, we consider a constrained set of problems where examples of successes can allow for more task-directed exploration. In real world problems, designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge (Andrychowicz et al., 2020), access to privileged information about the environment (Levine et al., 2016) and/or a human in the loop providing rewards (Knox and Stone, 2009; Singh et al., 2019b). Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL (Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Fu et al., 2018a). This requires expensive expert demonstrations to be provided to the agent. In contrast, our work has the minimal requirement of simply providing successful outcome states, which can be done cheaply and more intuitively. This subclass of problems is also related to goal conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Zhu et al., 2017; Andrychowicz et al., 2017; Nair et al., 2018; Veeriah et al., 2018; Rauber et al., 2018; Warde-Farley et al., 2018; Colas et al., 2019; Ghosh et al., 2019; Pong et al., 2020) but is more general, since it allows for a more abstract notion of task success. A core idea behind our work is using a Bayesian classifier to learn a suitable reward function. Bayesian inference with expressive models and high dimensional data can often be intractable, requiring assumptions on the form of the posterior (Hoffman et al., 2013; Blundell et al., 2015; Maddox et al., 2019). 
In this work, we build on the concept of normalized maximum likelihood (Rissanen, 1996; Shtar’kov, 1987), or NML, to learn Bayesian classifiers. Although NML is typically considered from the perspective of optimal coding (Grünwald, 2007; Fogel and Feder, 2018), we show how it can be used to learn success classifiers, and discuss its connections to exploration and reward shaping in RL. 3 PRELIMINARIES In this paper, we study a modified reinforcement learning problem, where instead of the standard reward function, the agent is provided with successful outcome examples. This reformulation not only provides a modality for task specification that may be more natural for users to provide in some settings (Fu et al., 2018b; Zhu et al., 2020; Singh et al., 2019a), but, as we will show, can also make learning easier. We also derive a meta-learned variant of the conditional normalized maximum likelihood (CNML) distribution for representing our reward function, in order to make evaluation tractable. We discuss background on successful outcome examples and CNML in this section. 3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES We follow the framework proposed by Fu et al. (2018b) and assume that we are provided with a Markov decision process (MDP) without a reward function, given by M, where M = (S,A, T , γ, µ0), as well as successful outcome examples S+ = {sk+}Kk=1, which is a set of states in which the desired task has been accomplished. This formalism is easiest to describe in terms of the control as inference framework (Levine, 2018). The relevant graphical model in Figure 9 consists of states and actions, as well as binary success variables et which represent the occurrence of a particular event. The agent’s objective is to cause this event to occur (e.g., a robot that is cleaning the floor must cause the “floor is clean” event to occur). Formally, we assume that the states in S+ are sampled from the distribution p(st|et = True) – that is, states where the desired event has taken place. In this work, we focus on efficient methods for solving this reformulation of the RL problem, by utilizing a novel uncertainty quantification method to represent the distribution p(et|st). In practice, prior methods that build on this and similar reformulations of the RL problem (Fu et al., 2018b) derive an algorithm where the reward function in RL is produced by a classifier that estimates p(et = True|st). Following the adversarial inverse reinforcement learning (AIRL) derivation (Fu et al., 2018a; Finn et al., 2016), it is possible to show that the correct source of negative examples for training this classifier is the state distribution of the policy itself, π(s). This insight results in a simple algorithm: at each iteration of the algorithm, the policy is updated to maximize the current reward, given by log p(et = True|st), then samples from the policy are added to the set of negative examples S−, and the classifier is retrained on the original positive set S+ and the updated negative set S−. 3.2 CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD Our method builds on the principle of conditional normalized maximum likelihood (NML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018), which we review briefly. CNML is a method for performing k-way classification, given a model class Θ and a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, and has been shown to provide better calibrated predictions and uncertainty estimates with minimax regret guarantees (Bibas et al., 2019). 
To predict the class of a query point xq, CNML constructs k augmented datasets by adding xq with a different label in each dataset, which we write as D ∪ (xq, y = i), i ∈ {1, 2, ..., k}. CNML then defines the class distribution by solving the maximum likelihood estimation problem at query time for each of these augmented datasets to convergence, and normalizing the likelihoods as follows: $p_{\text{CNML}}(y = i \mid x_q) = \frac{p_{\theta_i}(y = i \mid x_q)}{\sum_{j=1}^{k} p_{\theta_j}(y = j \mid x_q)}, \quad \theta_i = \arg\max_{\theta \in \Theta} \mathbb{E}_{(x, y) \sim D \cup (x_q, y = i)}[\log p_\theta(y \mid x)]$ (1) Intuitively, if xq is close to other datapoints in D, then the model will struggle to assign a high likelihood to labels that differ substantially from other nearby points. However, if xq is far from all datapoints in D, then the different augmented MLE problems can easily classify xq as an arbitrary class, providing us with a likelihood closer to uniform. We refer readers to Grünwald (2007) for an in-depth discussion. A major limitation of CNML is that it requires training an entire neural network to convergence on the entire augmented dataset every time we want to evaluate a test point's class probabilities. We will address this issue in Section 5. 4 BAYESIAN SUCCESS CLASSIFIERS FOR REWARD INFERENCE Ideally, training a classifier with the policy samples as negative examples as described in Section 3.1 should yield a smooth decision boundary between the well-separated negative and positive examples. For example, Figure 2 depicts a simple 1-D scenario, where the agent starts at the left (s0) and the positive outcomes are at the right (s+) side of the environment. Since the positives are on the right and the negatives are on the left, one might expect a classifier to gradually increase its prediction of success as we move to the right (Figure 2a), which would provide a dense reward signal for the policy to move to the right. However, this idealized scenario rarely happens in practice. Without suitable regularization, the decision boundary between the positive and negative examples may not be smooth. In fact, the decision boundary of an optimal classifier may take on the form of a sharp boundary anywhere between the positive and negative examples in the early stages of training (Figure 2b). As a result, the classifier might provide little to no reward signal for the policy, since it can assign arbitrarily small probabilities to the states sampled from the policy. We note that this issue is not pathological: our experiments in Section 6 show that this poor reward signal issue happens in practice and can greatly hinder learning. In this section, we will discuss how an appropriate classifier training method can avoid these uninformative rewards. 4.1 REGULARIZED SUCCESS CLASSIFIERS VIA NORMALIZED MAXIMUM LIKELIHOOD
Algorithm 1 RL with CNML-Based Success Classifiers
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4: Add on-policy examples to S− by executing π.
5: Sample ntest points from S+ (label 1) and ntest points from S− (label 0) to construct a dataset D
6: Assign state rewards as r(s) = pCNML(e = 1|s, D)
7: Train π with RL algorithm
To create effective shaping, we would like our classifier to provide a more informative reward when evaluated at rarely visited states that lie on the path to successful outcomes.
A more informative reward function is one that assigns higher rewards to the fringe of the states visited by the policy, because this will encourage the policy to explore and move towards the desired states. We can construct such a reward function by imposing the prior that novel states have a non-negligible chance of being a success state. To do so, we train a Bayesian classifier using conditional normalized maximum likelihood (CNML) (Shtar'kov, 1987), as we described in Section 3, which corresponds to imposing a uniform prior on the output class probabilities. To use CNML for reward inference, the procedure is similar to the one described in Section 3. We construct a dataset using the provided successful outcomes as positives and the on-policy samples as negatives. However, the label probabilities for RL are then produced by the CNML procedure described in Equation 1 to obtain the rewards r(s) = pCNML(e = 1|s). To illustrate how this affects reward assignment during learning, we visualize a potential assignment of rewards with a CNML-based classifier on the problem described earlier. When the success classifier is trained with CNML instead of standard maximum likelihood, intermediate unseen states would receive non-zero rewards rather than simply having vanishing likelihoods like in Figure 2b. The didactic illustrations in Fig 2c and Fig 2d show how the rewards obtained via NML might incentivize exploration. In fact, the CNML likelihood corresponds to a form of count-based exploration (as we show below), while also providing more directed shaping towards the goal when generalization exists across states. 4.2 RELATIONSHIP TO COUNT-BASED EXPLORATION In this section we relate the success likelihoods obtained via CNML to commonly used exploration methods based on counts. Formally, we prove that the success classifier trained with CNML is equivalent to a version of count-based exploration with a sparse reward function in the absence of any generalization across states (i.e., a fully tabular setting). Theorem 4.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have an independent parameter for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the set of goal examples. Then the CNML success probability pCNML(e = 1|s) is equal to $\frac{G(s)+1}{N(s)+G(s)+2}$. For states that are not represented in the goal examples, i.e. G(s) = 0, we then recover inverse counts $\frac{1}{N(s)+2}$. Refer to Appendix A.7 for a full proof. 4.3 REWARD SHAPING WITH BAYESIAN SUCCESS CLASSIFIERS While the analysis above suggests that a CNML classifier would give us something akin to a sparse reward plus an exploration bonus, the structure of the problem and the state space actually provides us more information to guide us towards the goal. In most environments (Brockman et al., 2016; Yu et al., 2019) the state space does not consist of independent and uncorrelated categorical variables, and is instead provided in a representation that relates at least roughly to the dynamics structure in the environment. For instance, states close to the goal dynamically are also typically close to the goal in the metric space defined by the states. Indeed, this observation is the basis of many commonly used heuristic reward shaping methods, such as rewards given by Euclidean distance to target states.
In this case, the task specification can actually provide more information than simply performing uninformed count-based exploration. Since the uncertainty-aware classifier described in Section 4.1 is built on top of features that are correlated with environment dynamics, and is trained with knowledge of the desired outcomes, it is able to incentivize task-aware directed exploration. As compared to CNML without generalization in Fig 2c, we expect the intermediate rewards to provide more shaping towards the goal. This phenomenon is illustrated intuitively in Fig 2d, and visualized and demonstrated empirically in our experimental analysis in Section 6, where BayCRL is able to significantly outperform methods for task-agnostic exploration. 4.4 OVERVIEW In this section, we introduced the idea of Bayesian classifiers trained via CNML as a means to provide rewards for RL problems specified by examples of successful outcomes. Concretely, a CNML-based scheme has the following advantages: • Natural exploration behavior due to accurate uncertainty estimation in the output success probabilities. This is explained by the connection between CNML and count-based exploration in the discrete case, and benefits from additional generalization in practical environments, as we will see in Section 6. • Better reward shaping by utilizing goal examples to guide the agent more quickly and accurately towards the goal. We have established this benefit intuitively, and will validate it empirically through extensive visualizations and experiments in Section 6. 5 BAYCRL: TRAINING BAYESIAN SUCCESS CLASSIFIERS FOR OUTCOME DRIVEN RL VIA META-LEARNING AND CNML In Section 4, we discussed how Bayesian success classifiers can incentivize exploration and provide reward shaping to guide RL. However, the reward inference technique via CNML described in Section 4.1 is computationally intractable, as it requires optimizing maximum likelihood estimation problems to convergence on every data point we want to query. In this section, we describe a novel approximation that allows us to instantiate this method in practice. 5.1 META-LEARNING FOR CNML We adopt ideas from meta-learning to amortize the cost of obtaining the CNML distribution. As noted in Section 4.1, the computation of the CNML distribution involves repeatedly solving maximum likelihood problems. While computationally daunting, these problems share a significant amount of common structure, which we can exploit to quickly obtain CNML estimates. One set of techniques that are directly applicable is meta-learning for few shot classification. Meta-learning uses a distribution of training problems to explicitly learn models that can quickly adapt to new problems. To apply meta-learning to the CNML problem, we can formulate each of the maximum likelihood problems described in Equation 1 as a separate task for meta-learning, and apply any standard meta-learning technique to obtain a model capable of few-shot adaptation to the MLE problems required for CNML. While any meta-learning algorithm is applicable, we found model agnostic meta-learning (MAML)(Finn et al. (2017)) to be an effective choice of algorithm. In short, MAML tries to learn a model that can quickly adapt to new tasks via a few steps of gradient descent. 
This procedure is illustrated in Fig 10, and can be described as follows: given a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, 2n different tasks τi can be constructed, each corresponding to performing maximum likelihood estimation on the dataset with a certain proposed label for xi: $\max_\theta \mathbb{E}_{(x,y) \sim D \cup (x_i, y=0)}[\log p(y \mid x, \theta)]$ or $\max_\theta \mathbb{E}_{(x,y) \sim D \cup (x_i, y=1)}[\log p(y \mid x, \theta)]$. Given these constructed tasks S(τ), meta-training proceeds as described in Finn et al. (2017): $\max_\theta \mathbb{E}_{\tau \sim S(\tau)}[\mathcal{L}(\tau, \theta')] \quad \text{s.t.} \quad \theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\tau, \theta)$ (2) This training procedure gives us parameters θ that can then be quickly adapted to provide the CNML distribution simply by performing a step of gradient descent. The model can be queried for the CNML distribution by starting from θ and taking one step of gradient descent on the query-point-augmented dataset, once for each potential label. These likelihoods are then normalized to provide the CNML distribution as follows: $p_{\text{meta-NML}}(y \mid x; D) = \frac{p_{\theta_y}(y \mid x)}{\sum_{y \in \mathcal{Y}} p_{\theta_y}(y \mid x)}, \quad \theta_y = \theta - \alpha \nabla_\theta \mathbb{E}_{(x_i, y_i) \sim D \cup (x, y)}[\mathcal{L}(x_i, y_i, \theta)]$ (3) This algorithm, which we call meta-NML, allows us to obtain normalized likelihood estimates without having to retrain maximum likelihood to convergence at every single query point, since the model can now solve maximum likelihood problems of this form very quickly. A complete detailed description and pseudocode of this algorithm are provided in Appendix A.2. The runtimes below compare a plain feedforward pass, meta-NML, and naive CNML:
Single input point: Feedforward 0.0004s, Meta-NML 0.0090s, Naive CNML 15.19s
Epoch of RL: Feedforward 23.50s, Meta-NML 39.05s, Naive CNML 4hr 13min 34s
This makes meta-NML several orders of magnitude faster than naive CNML, which would normally require multiple passes through the entire dataset on each input point in order to train to convergence. 5.2 APPLYING META-NML TO SUCCESS CLASSIFICATION
Algorithm 2 BayCRL: Bayesian Classifiers for RL
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4: Collect on-policy examples to add to S− by executing π.
5: if iteration i mod k == 0 then
6: Sample ntrain states from S− to create 2ntrain meta-training tasks
7: Sample ntest total test points equally from S+ (label 1) and S− (label 0)
8: Meta-train θR via meta-NML using Equation 2
9: Assign state rewards via Equation 4
10: Train π with RL algorithm
We apply the meta-NML algorithm described above to learning Bayesian success classifiers that provide rewards for reinforcement learning, in our proposed algorithm, which we term BayCRL (Bayesian Classifiers for RL). Similarly to Fu et al. (2018b), we can train our Bayesian classifier by first constructing a dataset D for binary classification. This is done by using the provided examples of successful outcomes as positives, and on-policy examples collected by the policy as negatives, balancing the number of sampled positives and negatives in the dataset. Given this dataset, the Bayesian classifier parameters θR can be trained via meta-NML as described in Equation 2. The classifier can then be used to directly and quickly assign rewards to a state s according to its probabilities r(s) = pmeta-NML(e = 1|s) (via a step of gradient descent, as described in Equation 4), and we then perform standard reinforcement learning. $p_{\text{meta-NML}}(e = 1 \mid s) = \frac{p_{\theta_1}(e = 1 \mid s)}{\sum_{i \in \{0,1\}} p_{\theta_i}(e = i \mid s)}$ (4) $\theta_i = \theta_R - \alpha \nabla_\theta \mathbb{E}_{(s_j, e_j) \sim D \cup (s, e=i)}[\mathcal{L}(e_j, s_j, \theta)], \text{ for } i \in \{0, 1\}$ (5) An overview of this algorithm is provided in Algorithm 2, and full details are in Appendix A.2.
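To make the reward-assignment step in Equations 4-5 concrete, here is a minimal PyTorch sketch. It assumes a meta-trained torch.nn.Module mapping a state vector to two logits; the architecture, the learning rate, and the use of plain SGD for the single adaptation step are illustrative assumptions rather than the exact configuration used in the paper.

```python
import copy

import torch
import torch.nn.functional as F


def meta_nml_reward(classifier, dataset_x, dataset_y, query_state, alpha=1e-3):
    """Assign r(s) = p_meta-NML(e=1|s) via one adaptation step per candidate label.

    `classifier` is the meta-trained reward classifier (parameters theta_R),
    `dataset_x`/`dataset_y` hold the balanced positives and negatives, and
    `query_state` is a single state vector.
    """
    label_likelihoods = []
    for candidate in (0, 1):
        adapted = copy.deepcopy(classifier)            # start from meta-trained theta_R
        opt = torch.optim.SGD(adapted.parameters(), lr=alpha)

        # Augment the dataset with the query state forced to the candidate label.
        x = torch.cat([dataset_x, query_state.unsqueeze(0)], dim=0)
        y = torch.cat([dataset_y, torch.tensor([candidate])], dim=0)

        opt.zero_grad()
        loss = F.cross_entropy(adapted(x), y)          # MLE loss on the augmented dataset
        loss.backward()
        opt.step()                                     # single inner gradient step

        with torch.no_grad():
            probs = F.softmax(adapted(query_state.unsqueeze(0)), dim=-1)
        label_likelihoods.append(probs[0, candidate].item())

    # Normalize the per-label likelihoods to obtain the meta-NML success probability.
    return label_likelihoods[1] / sum(label_likelihoods)
```

In the full algorithm, a function of this form would be evaluated on each state visited by the policy to produce its reward before the policy update.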
The rewards start off at an uninformative value of 0.5 for all unvisited states at the beginning, and close to 1 for successful outcomes. As training progresses, more states are visited, added to the buffer and BayCRL starts to assign them progressively lower reward as they get visited more and more, thereby encouraging visiting of under-visited states. At convergence, all the non successful states will have a reward of close to 0 and states at the goal will have a reward of 0.5, since the numbers of positive and negative labels for successful outcomes will be balanced as described above. 6 EXPERIMENTAL EVALUATION In our experimental evaluation we aim to answer the following questions: (1) Do the learning dynamics of prior classifier-based reward learning methods provide informative rewards for RL? (2) Does using BayCRL help address the exploration challenge when solving RL problems specified by successful outcomes? (3) Does using BayCRL help provide better reward shaping than simply performing naïvely uninformed exploration? To evaluate these questions, we evaluate our proposed algorithm BayCRL with the following setup. Further details and videos can be found at https://sites.google.com/view/baycrl/home 6.1 EXPERIMENTAL SETUP We start off by understanding the algorithm behavior by evaluating it on maze navigation problems, which require avoiding several local optima before truly reaching the goal. Then, to evaluate our method in more complex domains, we consider three robotic manipulation tasks that were previously covered in Singh et al. (2019a) with a Sawyer robot arm: door opening, tabletop object pushing, and 3D object picking. As we show in our results, exploration in these environments is challenging and using naively chosen reward shaping often does not solve the problem at hand. More details on each environment and their associated challenges are available in Appendix A.4.1. We compare with a number of prior algorithms and ablations. To provide a comparison with a standard previous method which uses success classifiers trained with an IRL-based adversarial method, we include the VICE algorithm (Fu et al., 2018b). Note that this algorithm is quite related to BayCRL, but it uses a standard maximum likelihood classifier rather than a Bayesian classifier trained with CNML and meta-learning. We also include a comparison with DDL, a recently proposed technique for learning dynamical distances (Hartikainen et al., 2019). We additionally include comparisons to algorithms for uninformed exploration to show that BayCRL does a more directed form of exploration and reward shaping. To provide an apples-to-apples comparison, we use the same VICE method for training classifiers, but combine it with novelty-based exploration based on random network distillation (Burda et al., 2018b) for the robotic manipulation tasks, and oracle inverse count bonuses for the maze navigation tasks. Finally, to demonstrate the importance of well-shaped rewards, we compare to running Soft Actor-Critic (Haarnoja et al., 2018), a standard RL algorithm for continuous domains, with two naive reward functions: a sparse reward at the goal, and a heuristically shaped reward which uses L2 distance to the goal state. More details on each algorithm and the hyperparameters used are included in Appendix A.6. 6.2 COMPARISONS WITH PRIOR ALGORITHMS We compare with prior algorithms on the domains described above. 
As we can see in Fig 5, BayCRL is able to very quickly learn how to solve these challenging exploration tasks, often reaching better asymptotic performance than most prior methods, and doing so more efficiently than VICE (Fu et al., 2018b) or DDL (Hartikainen et al., 2019). This suggests that BayCRL is able to provide directed reward shaping and exploration that is substantially better than standard classifier-based methods (e.g., VICE). To isolate whether the benefits purely come from exploration or also from task-aware reward shaping, we compare with methods that only perform uninformed, task-agnostic exploration. On the maze environments, where we can discretize the state space, we compute ground truth count-based bonuses for exploration. For the higher dimensional robotics tasks, we use RND (Burda et al., 2018b). From these comparisons, shown in Fig 5, it is clear that BayCRL significantly outperforms methods that use novelty-seeking exploration, but do not otherwise provide effective reward shaping. In combination with our visualizations in Section 6.4, this suggests that BayCRL is providing useful task-aware reward shaping more effectively than uniformed exploration methods. We also compare BayCRL to a manually heuristically-designed shaped reward function, based on Euclidean distance. As shown in Fig 5, BayCRL generally outperforms simple manual shaping in terms of sample complexity and asymptotic performance, indicating that the learned shaping is non-trivial and adapted to the task. 6.3 ABLATIONS We first evaluate the importance of meta-learning for estimating the NML distribution. In Figure 6, we see that naively estimating the NML distribution by taking a single gradient step and following the same process as evaluating meta-NML, but without any meta-training, results in much worse performance. Second, we analyze the importance of making the BayCRL classifier aware of the task being solved, to understand whether BayCRL is informed by the success examples or simply approximates count-based exploration. To that end, we modify the training procedure so that the dataset D consists of only the on-policy negatives, and add the inferred reward from the Bayesian classifier to the reward obtained by a standard MLE classifier (similarly to the VICE+RND baseline). We see that this performs poorly, showing that the BayCRL classifier is doing more than just performing count-based exploration, and benefits from better reward shaping due to the provided goal examples. Further ablations are available in Appendix A.5. 6.4 ANALYSIS OF BAYCRL BayCRL and Reward Shaping. To better understand how BayCRL provides reward shaping, we visualize the rewards for various slices along the z axis on the Sawyer Pick task, an environment which presents a significant exploration challenge. In Fig 7 we see that the BayCRL rewards clearly correlate with the distance to the object’s goal position, shown as a white star, thus guiding the robot to raise the ball to the desired location even if it has never reached the goal before. In contrast, the MLE classifier has a sharp, poorly-shaped decision boundary. BayCRL and Exploration. Next, to illustrate the connection between BayCRL and exploration, we compare the states visited by BayCRL (which uses a meta-NML classifier) and by VICE (which uses a standard L2-regularized classifier) in Figure 8. We see that BayCRL naturally incentivizes the agent to visit novel states, allowing it to navigate around local minima and reach the true goal. 
In contrast, VICE learns a misleading reward function that prioritizes closeness to the goal in xy space, causing the agent to stay on the wrong side of the wall. Interestingly, despite incentivizing exploration, BayCRL does not simply visit all possible states; at convergence, it has only covered around 70% of the state space. In fact, we see in the scatterplots in Figure 8 that BayCRL prioritizes states that bring it closer to the goal and ignores ones that don't, thus making use of the goal examples provided to it. This suggests that BayCRL benefits from a combination of novelty-seeking behavior and effective reward shaping, allowing it to choose new states strategically.

7 DISCUSSION

In this work, we consider a subclass of reinforcement learning problems where examples of successful outcomes specify the task. We analyze how solutions via standard success classifiers suffer from shortcomings, and show that training Bayesian classifiers allows for better exploration to solve challenging problems. We discuss how the NML distribution can provide us a way to train such Bayesian classifiers, providing benefits of exploration and reward shaping. To make learning tractable, we propose a novel meta-learning approach to amortize the NML process. While this work has shown the effectiveness of Bayesian classifiers for reward inference for tasks in simulation, it would be interesting to scale this solution to real world problems. Additionally, obtaining a theoretical understanding of how reward shaping interacts with learning dynamics would be illuminating in designing reward schemes.

A APPENDIX

A.1 GRAPHICAL MODEL FOR CONTROL AS INFERENCE

A.2 DETAILED DESCRIPTION OF META-NML

We provide a detailed description of the meta-NML algorithm described in Section 5, and the details of the practical algorithm. Given a dataset D = {(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)}, the meta-NML procedure proceeds by first constructing k · n tasks from these data points, for a k-way classification problem. We will keep k = 2 for simplicity in this description, in accordance with the setup of binary success classifiers in RL. Each task τ_i is constructed by augmenting the dataset with a negative label D ∪ (x_i, y = 0) or a positive label D ∪ (x_i, y = 1). Now that each task consists of solving the maximum likelihood problem for its augmented dataset, we can directly apply standard meta-learning algorithms to this setting. Building off the ideas in MAML (Finn et al., 2017), we can then train a set of model parameters θ such that, after a single step of gradient descent, they can quickly adapt to the optimal solution for the MLE problem on any of the augmented datasets. This is more formally written as

\max_\theta \mathbb{E}_{\tau \sim S(\tau)}[\mathcal{L}(\tau, \theta')], \quad \text{s.t. } \theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\tau, \theta) \qquad (6)

where L represents a standard classification loss function, α is the learning rate, and the distribution of tasks p(τ) is constructed as described above. For a new query point x, these initial parameters can then quickly be adapted to provide the CNML distribution by taking a gradient step on each augmented dataset to obtain the approximately optimal MLE solution, and normalizing these as follows:

p_{\text{meta-NML}}(y|x; D) = \frac{p_{\theta_y}(y|x)}{\sum_{y \in \mathcal{Y}} p_{\theta_y}(y|x)}, \qquad \theta_y = \theta - \alpha \nabla_\theta \mathbb{E}_{(x_i, y_i) \sim D \cup (x, y)}[\mathcal{L}(x_i, y_i, \theta)]

This algorithm in principle can be optimized using any standard stochastic optimization method such as SGD, as described in Finn et al. (2017), backpropagating through the inner loop gradient update.
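As a concrete illustration of the task construction just described, the sketch below builds the 2n augmented-dataset tasks for the binary (k = 2) case. It is a simplified stand-in for the paper's procedure; the MetaTask container and all names are hypothetical, and a MAML-style outer loop over these tasks (Equation 6) would then be run separately.

```python
# Minimal sketch (illustrative, not the paper's code) of the meta-task
# construction: every datapoint x_i yields two tasks, one per proposed label,
# and each task is the MLE problem on the corresponding augmented dataset.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MetaTask:
    dataset: List[Tuple[list, int]]   # D augmented with (x_i, proposed label)
    query_x: list                     # x_i, also used later for kernel weighting
    proposed_y: int

def build_meta_nml_tasks(data: List[Tuple[list, int]]) -> List[MetaTask]:
    """Builds the 2n tasks for the binary (k = 2) case."""
    tasks = []
    for x_i, _ in data:
        for proposed in (0, 1):
            augmented = list(data) + [(x_i, proposed)]
            tasks.append(MetaTask(augmented, x_i, proposed))
    return tasks

# Toy usage with three (state, label) pairs:
data = [([0.0, 0.0], 0), ([0.5, 0.2], 0), ([1.0, 1.0], 1)]
tasks = build_meta_nml_tasks(data)
print(len(tasks))   # 6 tasks = 2n for n = 3
```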
For the specific problem setting that we consider, we have to employ some optimization tricks in order to enable learning:

A.2.1 IMPORTANCE WEIGHTING ON QUERY POINT

Since only one datapoint is augmented to the training set at query time for CNML, it can get challenging for stochastic gradient descent to pay attention to this datapoint with increasing dataset sizes. For example, if we train on an augmented dataset of size 2048 by cycling through it in batch sizes of 32, then only 1 in 64 batches would include the query point itself and allow the model to adapt to the proposed label, while the others would lead to noise in the optimization process, potentially worsening the model's prediction on the query point. In order to make sure the optimization considers the query point, we include the query point and proposed label (x_q, y) in every minibatch that is sampled, but downweight the loss computed on that point such that the overall objective remains unbiased. This is simply doing importance weighting, with the query point downweighted by a factor of \lceil N/(b-1) \rceil, where b is the desired batch size and N is the total number of points in the original dataset.

To see why the optimization objective remains the same, we can consider the overall loss over the dataset. Let f_θ be our classifier, L be our loss function, D' = {(x_i, y_i)}_{i=1}^{N} ∪ (x_q, y) be our augmented dataset, and B_k be the kth batch seen during training. Using standard SGD training that cycles through batches in the dataset, the overall loss on the augmented dataset would be:

\mathcal{L}(D') = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)

If we instead included the downweighted query point in every batch, the overall loss would be:

\mathcal{L}(D') = \sum_{k=1}^{\lceil N/(b-1) \rceil} \left( \sum_{(x_i, y_i) \in B_k} \mathcal{L}(f_\theta(x_i), y_i) + \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y) \right)
= \sum_{k=1}^{\lceil N/(b-1) \rceil} \sum_{(x_i, y_i) \in B_k} \mathcal{L}(f_\theta(x_i), y_i) + \lceil N/(b-1) \rceil \cdot \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y)
= \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)

which is the same objective as before. This trick has the effect of still optimizing the same max likelihood problem required by CNML, but significantly reducing the variance of the query point predictions as we take additional gradient steps at query time. As a concrete example, consider querying a meta-CNML classifier on the input shown in Figure 11. If we adapt to the augmented dataset without including the query point in every batch (i.e. without importance weighting), we see that the query point loss is significantly more unstable, requiring us to take more gradient steps to converge.

A.2.2 KERNEL WEIGHTED TRAINING LOSS

The augmented dataset consists of points from the original dataset D and one augmented point (x_q, y). Given that we mostly care about having the proper likelihood on the query point, with an imperfect optimization process, the meta-training can yield solutions that do not very accurately represent the true likelihoods on the query point. To counter this, we introduce a kernel weighting into the loss function in Equation 6 during meta-training and subsequently meta-testing. The kernel weighting modifies the training loss function as:

\max_\theta \mathbb{E}_{\tau \sim S(\tau)}[\mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta')], \quad \text{s.t. } \theta' = \theta - \alpha \nabla_\theta \mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta) \qquad (7)

where x_τ is the query point for task τ and K is a choice of kernel. We typically choose exponential kernels centered around x_τ.
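The two tricks in A.2.1 and A.2.2 amount to a particular way of assembling each minibatch loss. The sketch below is illustrative rather than the authors' code: it places the query point in every batch with weight 1/⌈N/(b−1)⌉ and scales every term by an exponential kernel of the form made precise in Equation 8 below; the placeholder loss function and all names are assumptions.

```python
# Minimal sketch (not the authors' code) of a single weighted minibatch loss:
# the query point (x_q, y_q) appears in every batch, downweighted so the epoch
# objective stays unbiased, and every term is scaled by the kernel weight.
import math
import numpy as np

def kernel_weight(x, x_tau, lam_dist):
    # exp(-2.3 / lam_dist * ||x - x_tau||_2); the weight is 0.1 at distance lam_dist
    return math.exp(-2.3 / lam_dist * np.linalg.norm(np.asarray(x) - np.asarray(x_tau)))

def minibatch_loss(batch, x_q, y_q, N, b, lam_dist, loss_fn):
    """batch: (b-1) points from the original dataset; (x_q, y_q): query point and proposed label."""
    n_batches = math.ceil(N / (b - 1))            # number of batches per epoch
    total = sum(kernel_weight(x, x_q, lam_dist) * loss_fn(x, y) for x, y in batch)
    total += (1.0 / n_batches) * kernel_weight(x_q, x_q, lam_dist) * loss_fn(x_q, y_q)
    return total

# Toy usage with a placeholder loss:
loss_fn = lambda x, y: (np.mean(x) - y) ** 2
batch = [([0.0, 0.0], 0), ([0.4, 0.1], 0), ([0.9, 1.0], 1)]
print(minibatch_loss(batch, x_q=[0.5, 0.5], y_q=1, N=2048, b=4, lam_dist=1.0, loss_fn=loss_fn))
```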
Intuitively, this allows the meta-optimization to mainly consider the datapoints that are copies of the query point in the dataset, or are similar to the query point, and ensures that they have the correct likelihoods, instead of receiving interfering gradient signals from the many other points in the dataset. To make hyperparameter selection intuitive, we designate the strength of the exponential kernel by a parameter λ_dist, which is the Euclidean distance away from the query point at which the weight becomes 0.1. Formally, the weight of a point x in the loss function for query point x_τ is computed as:

K(x, x_\tau) = \exp\left\{ -\frac{2.3}{\lambda_{\text{dist}}} \|x - x_\tau\|_2 \right\} \qquad (8)

A.2.3 META-TRAINING AT FIXED INTERVALS

While in principle meta-NML would retrain with every new datapoint, in practice we retrain meta-NML once every k epochs. (In all of our experiments we set k = 1, but we could optionally increase k if we do not expect the meta-task distribution to change much between epochs.) We warm-start the meta-learner parameters from the previous iteration of meta-learning, so every instance of meta-training only requires a few steps. We find that this periodic training is a reasonable enough approximation, as evidenced by the strong performance of BayCRL in our experimental results in Section 6.

A.3 META-NML VISUALIZATIONS

A.3.1 META-NML WITH ADDITIONAL GRADIENT STEPS

Below, we show a more detailed visualization of meta-NML outputs on data from the Zigzag Maze task, and how these outputs change with additional gradient steps. For comparison, we also include the idealized NML rewards, which come from a discrete count-based classifier. Meta-NML is able to resemble the ideal NML rewards fairly well with just 1 gradient step, providing both an approximation of a count-based exploration bonus and better shaping towards the goal due to generalization. By taking additional gradient steps, meta-NML can get arbitrarily close to the true NML outputs, which themselves correspond to inverse counts of 1/(n+2) as explained in Theorem 4.1. While this would give us more accurate NML estimates, in practice we found that taking one gradient step was sufficient to achieve good performance on our RL tasks.

A.3.2 COMPARISON OF REWARD CLASSIFIERS

A.3.3 RUNTIME COMPARISONS

Below we provide the runtimes for feedforward inference, naive CNML, and meta-NML on each of our evaluation domains. We list both the runtimes for evaluating a single input, and for completing a full epoch of training during RL. These benchmarks were performed on an NVIDIA Titan X Pascal GPU. Per-input runtimes are averaged across 100 samples, and per-epoch runtimes are averaged across 20 epochs.

A.4 EXPERIMENTAL DETAILS

A.4.1 ENVIRONMENTS

Zigzag Maze and Spiral Maze: These two navigation tasks require moving through long corridors and avoiding several local optima in order to reach the goal. For example, on Spiral Maze, the agent must not get stuck on the other side of the inner wall, even though that position would be close in L2 distance to the desired goal. On these tasks, a sparse reward is not informative enough for learning, while ordinary classifier methods get stuck in local optima due to poor shaping near the goal. Both of these environments have a continuous state space consisting of the (x, y) coordinates of the agent, ranging from (−4, −4) to (4, 4) inclusive. The action space is the desired velocity in the x and y directions, each ranging from −1 to 1 inclusive.
Sawyer 2D Pusher: This task involves using a Sawyer arm, constrained to move only in the xy plane, to push a randomly initialized puck to a fixed location on a table. The state space consists of the (x, y, z) coordinates of the robot end effector and the (x, y) coordinates of the puck. The action space is the desired x and y velocities of the arm. Sawyer Door Opening: In this task, the Sawyer arm is attached to a hook, which it must use to open a door to a desired angle of 45 degrees. The door is randomly initialized each time to be at a starting angle of between 0 and 15 degrees. The state space consists of the (x, y, z) coordinates of the end effector and the door angle (in radians); the action space consists of (x, y, z) velocities. Sawyer 3D Pick and Place: The Sawyer robot must pick up a ball, which is randomly placed somewhere on the table each time, and raise it to a fixed (x, y, z) location high above the table. This represents the biggest exploration challenge out of all the manipulation tasks, as the state space is large and the agent would normally not receive any learning signal unless it happened to pick up the ball and raise it, which is unlikely without careful reward shaping. The state space consists of the (x, y, z) coordinates of the end effector, the (x, y, z) coordinates of the ball, and the tightness of the gripper (a continuous value between 0 and 1). The robot can control its (x, y, z) arm velocity as well as the gripper value. A.4.2 GROUND TRUTH DISTANCE METRICS In addition to the success rate plots in Figure 5, we provide plots of each algorithm’s distance to the goal over time according to environment-specific distance metrics. The distance metrics and success thresholds, which were used to compute the success rates in Figure 5, are listed in the table below. A.5 ADDITIONAL ABLATIONS A.5.1 LEARNING IN A DISCRETE, RANDOMIZED ENVIRONMENT In practice, many continuous RL environments such as the ones we consider in Section 6 have state spaces that are correlated at least roughly with the dynamics. For instance, states that are closer together dynamically are also typically closer in the metric space defined by the states. This correlation does not need to be perfect, but as long as it exists, BayCRL can in principle learn a smoothly shaped reward towards the goal. However, even in the case where states are unstructured and completely lack identity, such as in a discrete gridworld environment, the CNML classifier would still reduce to providing an explorationcentric reward bonus, as indicated by Theorem 4.1, ensuring reasonable worst-case performance. To demonstrate this, we evaluate BayCRL on a variant of the Zigzag Maze task where states are first discretized to a 16 × 16 grid, then "shuffled" so that the xy representation of a state does not correspond to its true coordinates and the states are not correlated dynamically. BayCRL manages to solve the task, while a standard classifier method (VICE) does not. Still, BayCRL is more effective in the original state space where generalization is possible, suggesting that both the exploration and reward shaping abilities of the CNML classifier are crucial to its overall performance. A.5.2 FINDING "HIDDEN" REWARDS NOT INDICATED BY SUCCESS EXAMPLES The intended setup for BayCRL (and classifier-based RL algorithms in general) is to provide a set of success examples to learn from, thus removing the need for a manually specified reward function. 
However, here we instead consider the case where a ground truth reward function exists which we do not fully know, and can only query through interaction with the environment. In this case, because the human expert has limited knowledge, the provided success examples may not cover all regions of the state space with high reward. An additional advantage of BayCRL is that it is still capable of finding these "unspecified" goals because of its built-in exploration behavior, whereas other classifier methods would operate solely based on the goal examples provided. To see this, we evaluate our algorithm on a two-sided variant of the Zigzag Maze with multiple goals, visualized in Figure 17 to the right. The agent starts in the middle and is provided with 5 goal examples on the far left side of the maze; unknown to it, the right side contains 5 sparse reward regions which are actually closer to its initial position. As shown in Figures 18 and 19, BayCRL manages to find the sparse rewards while other methods do not. BayCRL, although initially guided towards the provided goal examples on the left, continues to explore in both directions and eventually finds the "hidden" rewards on the right. Meanwhile, VICE focuses solely on the provided goals, and gets stuck in a local optimum near the bottom left corner.

A.6 HYPERPARAMETER AND IMPLEMENTATION DETAILS

We describe the hyperparameter choices and implementation details for our experiments here. We first list the general hyperparameters that were shared across runs, then provide tables of additional hyperparameters we tuned over for each domain and algorithm.

Goal Examples: For the classifier-based methods in our experiments (VICE and BayCRL), we provide 150 goal examples for each environment at the start of training. These are used as the pool of positive examples when training the success classifier.

DDL Reward: We use the version of DDL proposed in Hartikainen et al. (2019) where we provide the algorithm with the ground truth goal state g, then run SAC with a reward function of r(s) = −d_π(s, g), where d_π is the learned dynamical distance function for the policy at the current iteration of training.

A.6.1 ZIGZAG MAZE HYPERPARAMETERS

A.6.2 SPIRAL MAZE HYPERPARAMETERS

A.6.3 SAWYER PUSH HYPERPARAMETERS

A.6.4 SAWYER PICK-AND-PLACE HYPERPARAMETERS

A.6.5 SAWYER DOOR OPENING HYPERPARAMETERS

A.7 PROOF OF THEOREM 1 CONNECTING NML AND INVERSE COUNTS

We provide the proof of Theorem 1 here for completeness.

Theorem A.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have a separate parameter independently for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the successful outcomes. Then the CNML probability p_CNML(e = 1|s) is equal to (G(s) + 1)/(N(s) + G(s) + 2). For states that are never observed to be successful, we then recover inverse counts 1/(N(s) + 2).

Proof. In the fully tabular setting, our MLE estimates for p(O|s) are simply given by finding the best parameter p_s for each state. The proof then proceeds by simple calculation. For a state with n = N(s) negative occurrences and g = G(s) positive occurrences, the MLE estimate is simply given by g/(n + g). Now for evaluating CNML, we consider appending another instance for each class. The new parameter after appending a negative example is then g/(n + g + 1), which then assigns probability (n + 1)/(n + g + 1) to the negative class.
Similarly, after appending a positive example, the new parameter is (g + 1)/(n + g + 1), which assigns probability (g + 1)/(n + g + 1) to the positive class. Normalizing, we have

p_{\text{CNML}}(O = 1|s) = \frac{g + 1}{n + g + 2}. \qquad (9)

When considering states that have only been visited on-policy, and are not included in the set of successful outcomes, the likelihood reduces to

p_{\text{CNML}}(O = 1|s) = \frac{1}{n + 2}. \qquad (10)
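A small numeric check of this tabular result, assuming nothing beyond the calculation above (the function name is illustrative):

```python
# Minimal check of the tabular CNML probabilities derived in the proof:
# for a state with n on-policy visits and g successful occurrences, the two
# augmented MLE solutions normalize to (g+1)/(n+g+2), which reduces to
# 1/(n+2) when g = 0.
def tabular_cnml_success_prob(n, g):
    p_pos = (g + 1) / (n + g + 1)   # MLE prob. of the positive class after appending a positive
    p_neg = (n + 1) / (n + g + 1)   # MLE prob. of the negative class after appending a negative
    return p_pos / (p_pos + p_neg)  # normalize the two class likelihoods

for n, g in [(0, 0), (5, 0), (50, 0), (10, 10)]:
    assert abs(tabular_cnml_success_prob(n, g) - (g + 1) / (n + g + 2)) < 1e-12
    print(n, g, round(tabular_cnml_success_prob(n, g), 3))
```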
1. What is the focus of the paper regarding learning a policy for an MDP with unspecified reward? 2. What are the strengths of the proposed approach, particularly in using CNML and meta-learning? 3. Do you have any concerns or questions regarding the effectiveness and feasibility of the algorithm? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any specific points in the paper that the reviewer finds unclear or confusing?
Review
Review This paper considers the problem of learning a policy for an MDP with unspecified reward, given user-provided goal states. To this end, a reward model and a policy are jointly learned: the reward model is the conditional normalized maximum likelihood (CNML) learned from a training set consisting of the example goal states as positive examples, and the policy trajectories as negative examples; the policy is trained to optimize the MDP using the learned reward. Meta-learning is applied to reduce the cost of learning the CNML models. Pros The idea of using CNML to obtain a smoother reward as compared to a single model (together with the efficient meta-learning approximation) is interesting. The algorithm is compared with several baselines and seem to perform well. Ablation study suggests that goal examples and meta-learning are important for the proposed approach. Cons The paper is unnecessarily hard to read. A high-level description of the approach early in the paper will be helpful. This is also related to the comments below: it is not clear why the algorithm should work. A claimed contribution of the paper is to "produce more tractable RL problems and solve more challenging classes of tasks". The limited feedback provided by the example goal states perhaps make the RL problem more tractable, but how is it possible to solve more challenging classes of problems when less information is available? To learn a useful reward model from the example goal states, the CNML approach alone seems insufficient, and it seems necessary to require a good reward to be a smooth function of feature vectors. For example, if we work in a grid world with random rewards, does the approach still work? Both the paragraph before Sec 3.2 and Alg 1 mention that the set of negative examples keeps growing. This implies that the reward model will become more and more sparser (values closer to zero), even for the goal states? How is such a reward model still useful? Another question about the reward model is that when the policy becomes better, it is more likely to reach the goal states, thus the goal states are more likely to be labeled as both positive and negative. Thus the reward model is more likely to assign lower reward to goal states when more training is done? Fig. 1 seems to be overstating the problem with MLE. What are the features used and what is the classifier model? If the feature is the real-valued position, and a regularized logistic regression model is used, then MLE will not produce such a sparse reward as in (b)? The experiments section should provide more details about the experimental setup: the choice of candidate classifier models, explanation of the baselines (e.g. Sparse Reward seems not mentioned in the text at all), detailed description of the performance evaluation metric. Is Manhattan distance to goal a sensible performance metric for maze navigation? Minor comments "OpenAI et al.": wrong citation format Define L in Eq. (2) Post-rebuttal After reading the rebuttal and other reviewers' comments, my score remained the same. The rebuttal helped to clarify some issues, but it is still not clear to me why the algorithm should work. I agree with other reviewers that a more careful revision of the paper, and a further analysis on the algorithm will be beneficial.
ICLR
Title Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples Abstract Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to classify states as successful or not. We argue that, with appropriate representation and regularization, such a classifier can guide a reinforcement learning algorithm to an effective solution. However, as we will show, this requires the classifier to make uncertainty-aware predictions that are very difficult with standard deep networks. To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood distribution. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions from data, while also being able to guide algorithms towards the specified goal more effectively. We show how using amortized normalized maximum likelihood for reward inference is able to provide effective reward guidance for solving a number of challenging navigation and robotic manipulation tasks which prove difficult for other algorithms. 1 INTRODUCTION While reinforcement learning (RL) has been shown to successfully solve problems with careful reward design (Rajeswaran et al., 2018; OpenAI et al., 2019), RL in its most general form, with no assumptions on the dynamics or reward function, requires solving a challenging uninformed search problem in which rewards are sparsely observed. Techniques which explicitly provide “rewardshaping” (Ng et al., 1999), or modify the reward function to guide learning, can help take some of the burden off of exploration, but shaped rewards can be difficult to obtain without domain knowledge. In this paper, we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective. Instead of requiring a reward function designed for an objective, our method instead assumes a user-provided set of successful outcome examples: states in which the desired task has been accomplished successfully. The algorithm aims to estimate the distribution over these states and maximize the probability of reaching states that are likely under the distribution. Prior work on learning from success examples (Fu et al., 2018b; Zhu et al., 2020) focused primarily on alleviating the need for manual reward design. In our work, we focus on the potential for this mode of task specification to produce more tractable RL problems and solve more challenging classes of tasks. Intuitively, when provided with explicit examples of successful states, the RL algorithm should be able to direct its exploration, rather than simply hope to randomly chance upon high reward states. The main challenge in instantiating this idea into a practical algorithm is performing appropriate uncertainty quantification in estimating whether a given state corresponds to a successful outcome. 
Our approach trains a classifier to distinguish successful states, provided by the user, from those generated by the current policy, analogously to generative adversarial networks (Goodfellow et al., 2014) and previously proposed methods for inverse reinforcement learning (Fu et al., 2018a). In general, such a classifier is not guaranteed to provide a good optimization landscape for learning the policy. We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood (NML) distribution produces better reward guidance for learning. We also connect our approach to count-based exploration methods, showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states, while also discussing how it improves over count-based exploration in the presence of good generalization. We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML, and show how this form of reward inference allows us to solve difficult problems more efficiently, providing experimental results which outperform existing algorithms on a number of navigation and robotic manipulation domains. 2 RELATED WORK A number of techniques have been proposed to improve exploration.These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner (Wiering and Schmidhuber, 1998; Auer et al., 2002; Schaul et al., 2011; Houthooft et al., 2016; Pathak et al., 2017; Tang et al., 2017; Stadie et al., 2015; Bellemare et al., 2016; Burda et al., 2018a; O’Donoghue, 2018) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions (Strens, 2000; Osband et al., 2013; 2016). While these techniques are uninformed about the actual task, we consider a constrained set of problems where examples of successes can allow for more task-directed exploration. In real world problems, designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge (Andrychowicz et al., 2020), access to privileged information about the environment (Levine et al., 2016) and/or a human in the loop providing rewards (Knox and Stone, 2009; Singh et al., 2019b). Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL (Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Fu et al., 2018a). This requires expensive expert demonstrations to be provided to the agent. In contrast, our work has the minimal requirement of simply providing successful outcome states, which can be done cheaply and more intuitively. This subclass of problems is also related to goal conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Zhu et al., 2017; Andrychowicz et al., 2017; Nair et al., 2018; Veeriah et al., 2018; Rauber et al., 2018; Warde-Farley et al., 2018; Colas et al., 2019; Ghosh et al., 2019; Pong et al., 2020) but is more general, since it allows for a more abstract notion of task success. A core idea behind our work is using a Bayesian classifier to learn a suitable reward function. Bayesian inference with expressive models and high dimensional data can often be intractable, requiring assumptions on the form of the posterior (Hoffman et al., 2013; Blundell et al., 2015; Maddox et al., 2019). 
In this work, we build on the concept of normalized maximum likelihood (Rissanen, 1996; Shtar’kov, 1987), or NML, to learn Bayesian classifiers. Although NML is typically considered from the perspective of optimal coding (Grünwald, 2007; Fogel and Feder, 2018), we show how it can be used to learn success classifiers, and discuss its connections to exploration and reward shaping in RL. 3 PRELIMINARIES In this paper, we study a modified reinforcement learning problem, where instead of the standard reward function, the agent is provided with successful outcome examples. This reformulation not only provides a modality for task specification that may be more natural for users to provide in some settings (Fu et al., 2018b; Zhu et al., 2020; Singh et al., 2019a), but, as we will show, can also make learning easier. We also derive a meta-learned variant of the conditional normalized maximum likelihood (CNML) distribution for representing our reward function, in order to make evaluation tractable. We discuss background on successful outcome examples and CNML in this section. 3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES We follow the framework proposed by Fu et al. (2018b) and assume that we are provided with a Markov decision process (MDP) without a reward function, given by M, where M = (S,A, T , γ, µ0), as well as successful outcome examples S+ = {sk+}Kk=1, which is a set of states in which the desired task has been accomplished. This formalism is easiest to describe in terms of the control as inference framework (Levine, 2018). The relevant graphical model in Figure 9 consists of states and actions, as well as binary success variables et which represent the occurrence of a particular event. The agent’s objective is to cause this event to occur (e.g., a robot that is cleaning the floor must cause the “floor is clean” event to occur). Formally, we assume that the states in S+ are sampled from the distribution p(st|et = True) – that is, states where the desired event has taken place. In this work, we focus on efficient methods for solving this reformulation of the RL problem, by utilizing a novel uncertainty quantification method to represent the distribution p(et|st). In practice, prior methods that build on this and similar reformulations of the RL problem (Fu et al., 2018b) derive an algorithm where the reward function in RL is produced by a classifier that estimates p(et = True|st). Following the adversarial inverse reinforcement learning (AIRL) derivation (Fu et al., 2018a; Finn et al., 2016), it is possible to show that the correct source of negative examples for training this classifier is the state distribution of the policy itself, π(s). This insight results in a simple algorithm: at each iteration of the algorithm, the policy is updated to maximize the current reward, given by log p(et = True|st), then samples from the policy are added to the set of negative examples S−, and the classifier is retrained on the original positive set S+ and the updated negative set S−. 3.2 CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD Our method builds on the principle of conditional normalized maximum likelihood (NML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018), which we review briefly. CNML is a method for performing k-way classification, given a model class Θ and a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, and has been shown to provide better calibrated predictions and uncertainty estimates with minimax regret guarantees (Bibas et al., 2019). 
To predict the class of a query point x_q, CNML constructs k augmented datasets by adding x_q with a different label in each dataset, which we write as D ∪ (x_q, y = i), i ∈ {1, 2, ..., k}. CNML then defines the class distribution by solving the maximum likelihood estimation problem at query time for each of these augmented datasets to convergence, and normalizing the likelihoods as follows:

p_{\text{CNML}}(y = i|x_q) = \frac{p_{\theta_i}(y = i|x_q)}{\sum_{j=1}^{k} p_{\theta_j}(y = j|x_q)}, \qquad \theta_i = \arg\max_{\theta \in \Theta} \mathbb{E}_{(x,y) \sim D \cup (x_q, y=i)}[\log p_\theta(y|x)] \qquad (1)

Intuitively, if x_q is close to other datapoints in D, then the model will struggle to assign a high likelihood to labels that differ substantially from other nearby points. However, if x_q is far from all datapoints in D, then the different augmented MLE problems can easily classify x_q as an arbitrary class, providing us with a likelihood closer to uniform. We refer readers to Grünwald (2007) for an in-depth discussion. A major limitation of CNML is that it requires training an entire neural network to convergence on the entire augmented dataset every time we want to evaluate a test point's class probabilities. We will address this issue in Section 5.

4 BAYESIAN SUCCESS CLASSIFIERS FOR REWARD INFERENCE

Ideally, training a classifier with the policy samples as negative examples as described in Section 3.1 should yield a smooth decision boundary between the well-separated negative and positive examples. For example, Figure 2 depicts a simple 1-D scenario, where the agent starts at the left (s_0) and the positive outcomes are at the right (s_+) side of the environment. Since the positives are on the right and the negatives are on the left, one might expect a classifier to gradually increase its prediction of a success as we move to the right (Figure 2a), which would provide a dense reward signal for the policy to move to the right. However, this idealized scenario rarely happens in practice. Without suitable regularization, the decision boundary between the positive and negative examples may not be smooth. In fact, the decision boundary of an optimal classifier may take on the form of a sharp boundary anywhere between the positive and negative examples in the early stages of training (Figure 2b). As a result, the classifier might provide little to no reward signal for the policy, since it can assign arbitrarily small probabilities to the states sampled from the policy. We note that this issue is not pathological: our experiments in Section 6 show that this poor reward signal issue happens in practice and can greatly hinder learning. In this section, we will discuss how an appropriate classifier training method can avoid these uninformative rewards.

4.1 REGULARIZED SUCCESS CLASSIFIERS VIA NORMALIZED MAXIMUM LIKELIHOOD

Algorithm 1 RL with CNML-Based Success Classifiers
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4:     Add on-policy examples to S− by executing π.
5:     Sample n_test points from S+ (label 1) and n_test points from S− (label 0) to construct a dataset D
6:     Assign state rewards as r(s) = p_CNML(e = 1|s, D)
7:     Train π with RL algorithm

To create effective shaping, we would like our classifier to provide a more informative reward when evaluated at rarely visited states that lie on the path to successful outcomes.
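For concreteness, a naive sketch of the CNML computation in Equation 1 for the binary case used by Algorithm 1 might look as follows; this is an illustrative stand-in (a tiny logistic-regression model trained by plain gradient descent), not the paper's implementation, and all names are hypothetical.

```python
# Minimal sketch (not the paper's code) of naive CNML (Equation 1) for k = 2:
# fit a small logistic-regression model to convergence on each label-augmented
# dataset, then normalize the resulting likelihoods at the query point.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):                        # plain gradient descent "to convergence"
        theta -= lr * X.T @ (sigmoid(X @ theta) - y) / len(y)
    return theta

def cnml_success_prob(X, y, x_q):
    liks = []
    for proposed in (0, 1):                       # one augmented dataset per label
        theta = fit_logistic(np.vstack([X, x_q]), np.append(y, proposed))
        p1 = sigmoid(x_q @ theta)
        liks.append(p1 if proposed == 1 else 1.0 - p1)
    return liks[1] / (liks[0] + liks[1])

# Toy usage: negatives near 0, positives near 1, plus a bias feature.
X = np.array([[0.0, 1.0], [0.1, 1.0], [0.9, 1.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(cnml_success_prob(X, y, np.array([0.5, 1.0])))   # roughly 0.5: uncertain region
print(cnml_success_prob(X, y, np.array([0.05, 1.0])))  # lower: the query sits among negatives
```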
A more informative reward function is one that assigns higher rewards to the fringe of the states visited by the policy, because this will encourage the policy to explore and move towards the desired states. We can construct such a reward function by imposing the prior that novel states have a non-negligible chance of being a success state. To do so, we train a Bayesian classifier using conditional normalized maximum likelihood (CNML) (Shtar'kov, 1987), as we described in Section 3, which corresponds to imposing a uniform prior on the output class probabilities. To use CNML for reward inference, the procedure is similar to the one described in Section 3. We construct a dataset using the provided successful outcomes as positives and the on-policy samples as negatives. However, the label probabilities for RL are then produced by the CNML procedure described in Equation 1 to obtain the rewards r(s) = p_CNML(e = 1|s). To illustrate how this affects reward assignment during learning, we visualize a potential assignment of rewards with a CNML-based classifier on the problem described earlier. When the success classifier is trained with CNML instead of standard maximum likelihood, intermediate unseen states would receive non-zero rewards rather than simply having vanishing likelihoods like in Figure 2b. The didactic illustrations in Fig 2c and Fig 2d show how the rewards obtained via NML might incentivize exploration. In fact, the CNML likelihood corresponds to a form of count-based exploration (as we show below), while also providing more directed shaping towards the goal when generalization exists across states.

4.2 RELATIONSHIP TO COUNT-BASED EXPLORATION

In this section we relate the success likelihoods obtained via CNML to commonly used exploration methods based on counts. Formally, we prove that the success classifier trained with CNML is equivalent to a version of count-based exploration with a sparse reward function in the absence of any generalization across states (i.e., a fully tabular setting).

Theorem 4.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have an independent parameter for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the set of goal examples. Then the CNML success probability p_CNML(e = 1|s) is equal to (G(s) + 1)/(N(s) + G(s) + 2). For states that are not represented in the goal examples, i.e. G(s) = 0, we then recover inverse counts 1/(N(s) + 2).

Refer to Appendix A.7 for a full proof.

4.3 REWARD SHAPING WITH BAYESIAN SUCCESS CLASSIFIERS

While the analysis above suggests that a CNML classifier would give us something akin to a sparse reward plus an exploration bonus, the structure of the problem and the state space actually provides us more information to guide us towards the goal. In most environments (Brockman et al., 2016; Yu et al., 2019) the state space does not consist of independent and uncorrelated categorical variables, and is instead provided in a representation that relates at least roughly to the dynamics structure in the environment. For instance, states close to the goal dynamically are also typically close to the goal in the metric space defined by the states. Indeed, this observation is the basis of many commonly used heuristic reward shaping methods, such as rewards given by Euclidean distance to target states.
In this case, the task specification can actually provide more information than simply performing uninformed count-based exploration. Since the uncertainty-aware classifier described in Section 4.1 is built on top of features that are correlated with environment dynamics, and is trained with knowledge of the desired outcomes, it is able to incentivize task-aware directed exploration. As compared to CNML without generalization in Fig 2c, we expect the intermediate rewards to provide more shaping towards the goal. This phenomenon is illustrated intuitively in Fig 2d, and visualized and demonstrated empirically in our experimental analysis in Section 6, where BayCRL is able to significantly outperform methods for task-agnostic exploration. 4.4 OVERVIEW In this section, we introduced the idea of Bayesian classifiers trained via CNML as a means to provide rewards for RL problems specified by examples of successful outcomes. Concretely, a CNML-based scheme has the following advantages: • Natural exploration behavior due to accurate uncertainty estimation in the output success probabilities. This is explained by the connection between CNML and count-based exploration in the discrete case, and benefits from additional generalization in practical environments, as we will see in Section 6. • Better reward shaping by utilizing goal examples to guide the agent more quickly and accurately towards the goal. We have established this benefit intuitively, and will validate it empirically through extensive visualizations and experiments in Section 6. 5 BAYCRL: TRAINING BAYESIAN SUCCESS CLASSIFIERS FOR OUTCOME DRIVEN RL VIA META-LEARNING AND CNML In Section 4, we discussed how Bayesian success classifiers can incentivize exploration and provide reward shaping to guide RL. However, the reward inference technique via CNML described in Section 4.1 is computationally intractable, as it requires optimizing maximum likelihood estimation problems to convergence on every data point we want to query. In this section, we describe a novel approximation that allows us to instantiate this method in practice. 5.1 META-LEARNING FOR CNML We adopt ideas from meta-learning to amortize the cost of obtaining the CNML distribution. As noted in Section 4.1, the computation of the CNML distribution involves repeatedly solving maximum likelihood problems. While computationally daunting, these problems share a significant amount of common structure, which we can exploit to quickly obtain CNML estimates. One set of techniques that are directly applicable is meta-learning for few shot classification. Meta-learning uses a distribution of training problems to explicitly learn models that can quickly adapt to new problems. To apply meta-learning to the CNML problem, we can formulate each of the maximum likelihood problems described in Equation 1 as a separate task for meta-learning, and apply any standard meta-learning technique to obtain a model capable of few-shot adaptation to the MLE problems required for CNML. While any meta-learning algorithm is applicable, we found model agnostic meta-learning (MAML)(Finn et al. (2017)) to be an effective choice of algorithm. In short, MAML tries to learn a model that can quickly adapt to new tasks via a few steps of gradient descent. 
This procedure is illustrated in Fig 10, and can be described as follows: given a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, 2n different tasks τi can be constructed, each corresponding to performing maximum likelihood estimation on the dataset with a certain proposed label for xi: maxθ E(x,y)∼D∪(xi,y=0)[log p(y|x, θ)] or maxθ E(x,y)∼D∪(xi,y=1)[log p(y|x, θ)]. Given these constructed tasks S(τ), meta-training as described in Finn et al. (2017): max θ Eτ∼S(τ)[L(τ, θ′)], s.t θ′ = θ − α∇θL(τ, θ). (2) This training procedure gives us parameters θ that can then be quickly adapted to provide the CNML distribution simply by performing a step of gradient descent. The model can be queried for the CNML distribution by starting from θ and taking one step of gradient descent for the query point augmented dataset, each with a different potential label. These likelihoods are then normalized to provide the CNML distribution as follows: pmeta-NML(y|x;D) = pθy (y|x)∑ y∈Y pθy (y|x) , θy = θ − α∇θE(xi,yi)∼D∪(x,y)[L(xi, yi, θ)]. (3) This algorithm, which we call meta-NML, allows us to obtain normalized likelihood estimates without having to retrain maximum likelihood to convergence at every single query point, since the model can now solve maximum likelihood problems of this form very quickly. A complete detailed description and pseudocode of this algorithm are provided in Appendix A.2. Feedforward Meta-NML Naive CNML Single input point 0.0004s 0.0090s 15.19s Epoch of RL 23.50s 39.05s 4hr 13min 34s This makes it several orders of magnitude faster than naive CNML, which would normally require multiple passes through the entire dataset on each input point in order to train to convergence. 5.2 APPLYING META-NML TO SUCCESS CLASSIFICATION Algorithm 2 BayCRL: Bayesian Classifiers for RL 1: User provides success examples S+ 2: Initialize policy π, replay buffer S−, and reward classi- fier parameters θR 3: for iteration i = 1, 2, ... do 4: Collect on-policy examples to add to S− by executing π. 5: if iteration i mod k == 0 then 6: Sample ntrain states from S− to create 2ntrain metatraining tasks 7: Sample ntest total test points equally from S+ (label 1) and S− (label 0) 8: Meta-train θR via meta-NML using Equation 2 9: Assign state rewards via Equation 4 10: Train π with RL algorithm We apply the meta-NML algorithm described above to learning Bayesian success classifiers for providing rewards for reinforcement learning, in our proposed algorithm, which we term BayCRL— Bayesian classifiers for reinforcement learning. Similarly to Fu et al. (2018b), we can train our Bayesian classifier by first constructing a dataset D for binary classification. This is done by using the provided examples of successful outcomes as positives, and on-policy examples collected by the policy as negatives, balancing the number of sam- pled positives and negatives in the dataset. Given this dataset, the Bayesian classifier parameters θR can be trained via meta-NML as described in Equation 2. The classifier can then be used to directly and quickly assign rewards to a state s according to its probabilities r(s) = pmeta-NML(e = 1|s) (via a step of gradient descent, as described in Equation 4), and perform standard reinforcement learning. pmeta-NML(e = 1|s) = pθ1(e = 1|s)∑ i∈{0,1} pθi(e = i|s) (4) θi = θR − α∇θE(sj ,ej)∼D∪(s,e=i)[L(ej , sj , θ)], for i ∈ {0, 1} (5) An overview of this algorithm is provided in Algorithm 2, and full details are in Appendix A.2. 
The rewards start off at an uninformative value of 0.5 for all unvisited states at the beginning, and close to 1 for successful outcomes. As training progresses, more states are visited, added to the buffer and BayCRL starts to assign them progressively lower reward as they get visited more and more, thereby encouraging visiting of under-visited states. At convergence, all the non successful states will have a reward of close to 0 and states at the goal will have a reward of 0.5, since the numbers of positive and negative labels for successful outcomes will be balanced as described above. 6 EXPERIMENTAL EVALUATION In our experimental evaluation we aim to answer the following questions: (1) Do the learning dynamics of prior classifier-based reward learning methods provide informative rewards for RL? (2) Does using BayCRL help address the exploration challenge when solving RL problems specified by successful outcomes? (3) Does using BayCRL help provide better reward shaping than simply performing naïvely uninformed exploration? To evaluate these questions, we evaluate our proposed algorithm BayCRL with the following setup. Further details and videos can be found at https://sites.google.com/view/baycrl/home 6.1 EXPERIMENTAL SETUP We start off by understanding the algorithm behavior by evaluating it on maze navigation problems, which require avoiding several local optima before truly reaching the goal. Then, to evaluate our method in more complex domains, we consider three robotic manipulation tasks that were previously covered in Singh et al. (2019a) with a Sawyer robot arm: door opening, tabletop object pushing, and 3D object picking. As we show in our results, exploration in these environments is challenging and using naively chosen reward shaping often does not solve the problem at hand. More details on each environment and their associated challenges are available in Appendix A.4.1. We compare with a number of prior algorithms and ablations. To provide a comparison with a standard previous method which uses success classifiers trained with an IRL-based adversarial method, we include the VICE algorithm (Fu et al., 2018b). Note that this algorithm is quite related to BayCRL, but it uses a standard maximum likelihood classifier rather than a Bayesian classifier trained with CNML and meta-learning. We also include a comparison with DDL, a recently proposed technique for learning dynamical distances (Hartikainen et al., 2019). We additionally include comparisons to algorithms for uninformed exploration to show that BayCRL does a more directed form of exploration and reward shaping. To provide an apples-to-apples comparison, we use the same VICE method for training classifiers, but combine it with novelty-based exploration based on random network distillation (Burda et al., 2018b) for the robotic manipulation tasks, and oracle inverse count bonuses for the maze navigation tasks. Finally, to demonstrate the importance of well-shaped rewards, we compare to running Soft Actor-Critic (Haarnoja et al., 2018), a standard RL algorithm for continuous domains, with two naive reward functions: a sparse reward at the goal, and a heuristically shaped reward which uses L2 distance to the goal state. More details on each algorithm and the hyperparameters used are included in Appendix A.6. 6.2 COMPARISONS WITH PRIOR ALGORITHMS We compare with prior algorithms on the domains described above. 
As we can see in Fig 5, BayCRL is able to very quickly learn how to solve these challenging exploration tasks, often reaching better asymptotic performance than most prior methods, and doing so more efficiently than VICE (Fu et al., 2018b) or DDL (Hartikainen et al., 2019). This suggests that BayCRL is able to provide directed reward shaping and exploration that is substantially better than standard classifier-based methods (e.g., VICE). To isolate whether the benefits purely come from exploration or also from task-aware reward shaping, we compare with methods that only perform uninformed, task-agnostic exploration. On the maze environments, where we can discretize the state space, we compute ground truth count-based bonuses for exploration. For the higher dimensional robotics tasks, we use RND (Burda et al., 2018b). From these comparisons, shown in Fig 5, it is clear that BayCRL significantly outperforms methods that use novelty-seeking exploration, but do not otherwise provide effective reward shaping. In combination with our visualizations in Section 6.4, this suggests that BayCRL is providing useful task-aware reward shaping more effectively than uniformed exploration methods. We also compare BayCRL to a manually heuristically-designed shaped reward function, based on Euclidean distance. As shown in Fig 5, BayCRL generally outperforms simple manual shaping in terms of sample complexity and asymptotic performance, indicating that the learned shaping is non-trivial and adapted to the task. 6.3 ABLATIONS We first evaluate the importance of meta-learning for estimating the NML distribution. In Figure 6, we see that naively estimating the NML distribution by taking a single gradient step and following the same process as evaluating meta-NML, but without any meta-training, results in much worse performance. Second, we analyze the importance of making the BayCRL classifier aware of the task being solved, to understand whether BayCRL is informed by the success examples or simply approximates count-based exploration. To that end, we modify the training procedure so that the dataset D consists of only the on-policy negatives, and add the inferred reward from the Bayesian classifier to the reward obtained by a standard MLE classifier (similarly to the VICE+RND baseline). We see that this performs poorly, showing that the BayCRL classifier is doing more than just performing count-based exploration, and benefits from better reward shaping due to the provided goal examples. Further ablations are available in Appendix A.5. 6.4 ANALYSIS OF BAYCRL BayCRL and Reward Shaping. To better understand how BayCRL provides reward shaping, we visualize the rewards for various slices along the z axis on the Sawyer Pick task, an environment which presents a significant exploration challenge. In Fig 7 we see that the BayCRL rewards clearly correlate with the distance to the object’s goal position, shown as a white star, thus guiding the robot to raise the ball to the desired location even if it has never reached the goal before. In contrast, the MLE classifier has a sharp, poorly-shaped decision boundary. BayCRL and Exploration. Next, to illustrate the connection between BayCRL and exploration, we compare the states visited by BayCRL (which uses a meta-NML classifier) and by VICE (which uses a standard L2-regularized classifier) in Figure 8. We see that BayCRL naturally incentivizes the agent to visit novel states, allowing it to navigate around local minima and reach the true goal. 
In contrast, VICE learns a misleading reward function that prioritizes closeness to the goal in xy space, causing the agent to stay on the wrong side of the wall. Interestingly, despite incentivizing exploration, BayCRL does not simply visit all possible states; at convergence, it has only covered around 70% of the state space. In fact, we see in the scatterplots in Figure 8 that BayCRL prioritizes states that bring it closer to the goal and ignores ones that don’t, thus making use of the goal examples provided to it. This suggests that BayCRL benefits from a combination of novelty-seeking behavior and effective reward shaping, allowing it to choose new states strategically. 7 DISCUSSION In this work, we consider a subclass of reinforcement learning problems where examples of successful outcomes specify the task. We analyze how solutions via standard success classifiers suffer from shortcomings, and training Bayesian classifiers allows for better exploration to solve challenging problems. We discuss how the NML distribution can provide us a way to train such Bayesian classifiers, providing benefits of exploration and reward shaping. To make learning tractable, we propose a novel meta-learning approach to amortize the NML process. While this work has shown the effectiveness of Bayesian classifiers for reward inference for tasks in simulation, it would be interesting to scale this solution to real world problems. Additionally, obtaining a theoretical understanding of how reward shaping interacts with learning dynamics would be illuminating in designing reward schemes. A APPENDIX A.1 GRAPHICAL MODEL FOR CONTROL AS INFERENCE A.2 DETAILED DESCRIPTION OF META-NML We provide a detailed description of the meta-NML algorithm described in Section 5, and the details of the practical algorithm. Given a dataset D = {(x0, y0), (x1, y1), .., (xn, yn)}, the meta-NML procedure proceeds by first constructing k ∗ n tasks from these data points, for a k shot classification problem. We will keep k = 2 for simplicity in this description, in accordance with the setup of binary success classifiers in RL. Each task τi is constructed by augmenting the dataset with a negative label D ∪ (xi, y = 0) or a positive label D ∪ (xi, y = 1). Now that each task consists of solving the maximum likelihood problem for its augmented dataset, we can directly apply standard meta-learning algorithms to this setting. Building off the ideas in MAML (Finn et al., 2017), we can then train a set of model parameters θ such that after a single step of gradient descent it can quickly adapt to the optimal solution for the MLE problem on any of the augmented datasets. This is more formally written as max θ Eτ∼S(τ)[L(τ, θ′)], s.t θ′ = θ − α∇θL(τ, θ) (6) where L represents a standard classification loss function, α is the learning rate, and the distribution of tasks p(τ) is constructed as described above. For a new query point x, these initial parameters can then quickly be adapted to provide the CNML distribution by taking a gradient step on each augmented dataset to obtain the approximately optimal MLE solution, and normalizing these as follows: pmeta-NML(y|x;D) = pθy (y|x)∑ y∈Y pθy (y|x) , θy = θ − α∇θE(xi,yi)∼D∪(x,y)[L(xi, yi, θ)] This algorithm in principle can be optimized using any standard stochastic optimization method such as SGD, as described in Finn et al. (2017), backpropagating through the inner loop gradient update. 
For the specific problem setting that we consider, we have to employ some optimization tricks in order to enable learning:
A.2.1 IMPORTANCE WEIGHTING ON QUERY POINT Since only one datapoint is augmented to the training set at query time for CNML, it can get challenging for stochastic gradient descent to pay attention to this datapoint with increasing dataset sizes. For example, if we train on an augmented dataset of size 2048 by cycling through it in batch sizes of 32, then only 1 in 64 batches would include the query point itself and allow the model to adapt to the proposed label, while the others would lead to noise in the optimization process, potentially worsening the model’s prediction on the query point. In order to make sure the optimization considers the query point, we include the query point and proposed label (xq, y) in every minibatch that is sampled, but downweight the loss computed on that point such that the overall objective remains unbiased. This is simply doing importance weighting, with the query point downweighted by a factor of $\lceil N/(b-1) \rceil$, where b is the desired batch size and N is the total number of points in the original dataset. To see why the optimization objective remains the same, we can consider the overall loss over the dataset. Let fθ be our classifier, L be our loss function, $\mathcal{D}' = \{(x_i, y_i)\}_{i=1}^{N} \cup (x_q, y)$ be our augmented dataset, and Bk be the kth batch seen during training. Using standard SGD training that cycles through batches in the dataset, the overall loss on the augmented dataset would be:
$$\mathcal{L}(\mathcal{D}') = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)$$
If we instead included the downweighted query point in every batch, the overall loss would be:
$$\mathcal{L}(\mathcal{D}') = \sum_{k=1}^{\lceil N/(b-1) \rceil} \left( \sum_{(x_i, y_i) \in B_k} \mathcal{L}(f_\theta(x_i), y_i) + \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y) \right)$$
$$= \sum_{k=1}^{\lceil N/(b-1) \rceil} \sum_{(x_i, y_i) \in B_k} \mathcal{L}(f_\theta(x_i), y_i) + \lceil N/(b-1) \rceil \cdot \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y) = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)$$
which is the same objective as before. This trick has the effect of still optimizing the same max likelihood problem required by CNML, but significantly reducing the variance of the query point predictions as we take additional gradient steps at query time. As a concrete example, consider querying a meta-CNML classifier on the input shown in Figure 11. If we adapt to the augmented dataset without including the query point in every batch (i.e. without importance weighting), we see that the query point loss is significantly more unstable, requiring us to take more gradient steps to converge.
A.2.2 KERNEL WEIGHTED TRAINING LOSS The augmented dataset consists of points from the original dataset D and one augmented point (xq, y). Given that we mostly care about having the proper likelihood on the query point, with an imperfect optimization process, the meta-training can yield solutions that are not very accurately representing true likelihoods on the query point. To counter this, we introduce a kernel weighting into the loss function in Equation 6 during meta-training and subsequently meta-testing. The kernel weighting modifies the training loss function as:
$$\max_\theta \; \mathbb{E}_{\tau \sim S(\tau)}\big[\mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta')\big], \quad \text{s.t.} \;\; \theta' = \theta - \alpha \nabla_\theta \mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta) \qquad (7)$$
where xτ is the query point for task τ and K is a choice of kernel. We typically choose exponential kernels centered around xτ. 
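The batching trick above can be summarized in a short sketch. This is a hedged illustration, not the paper's code: the helper name and the uniform shuffling are assumptions, and only the query-point downweighting by 1/⌈N/(b−1)⌉ is taken from the derivation above.

```python
import math
import random

def batches_with_query_point(D, query_point, proposed_label, batch_size):
    """Yield (batch, query_weight): each batch is b-1 dataset points plus the query point."""
    N = len(D)
    num_batches = math.ceil(N / (batch_size - 1))
    query_weight = 1.0 / num_batches  # weights sum to 1 across batches, keeping the objective unbiased
    indices = list(range(N))
    random.shuffle(indices)
    for k in range(num_batches):
        chunk = [D[i] for i in indices[k * (batch_size - 1):(k + 1) * (batch_size - 1)]]
        # A kernel weight K(x, x_q), as in Equation 7, could additionally scale each point's loss.
        yield chunk + [(query_point, proposed_label)], query_weight
```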
Intuitively, this allows the meta-optimization to mainly consider the datapoints that are copies of the query point in the dataset, or are similar to the query point, and ensures that they have the correct likelihoods, instead of receiving interfering gradient signals from the many other points in the dataset. To make hyperparameter selection intuitive, we designate the strength of the exponential kernel by a parameter λdist, which is the Euclidean distance away from the query point at which the weight becomes 0.1. Formally, the weight of a point x in the loss function for query point xτ is computed as:
$$K(x, x_\tau) = \exp\left\{ -\frac{2.3}{\lambda_{\text{dist}}} \, \lVert x - x_\tau \rVert_2 \right\} \qquad (8)$$
A.2.3 META-TRAINING AT FIXED INTERVALS While in principle meta-NML would retrain with every new datapoint, in practice we retrain meta-NML once every k epochs. (In all of our experiments we set k = 1, but we could optionally increase k if we do not expect the meta-task distribution to change much between epochs.) We warm-start the meta-learner parameters from the previous iteration of meta-learning, so every instance of meta-training only requires a few steps. We find that this periodic training is a reasonable enough approximation, as evidenced by the strong performance of BayCRL in our experimental results in Section 6.
A.3 META-NML VISUALIZATIONS
A.3.1 META-NML WITH ADDITIONAL GRADIENT STEPS Below, we show a more detailed visualization of meta-NML outputs on data from the Zigzag Maze task, and how these outputs change with additional gradient steps. For comparison, we also include the idealized NML rewards, which come from a discrete count-based classifier. Meta-NML is able to resemble the ideal NML rewards fairly well with just 1 gradient step, providing both an approximation of a count-based exploration bonus and better shaping towards the goal due to generalization. By taking additional gradient steps, meta-NML can get arbitrarily close to the true NML outputs, which themselves correspond to inverse counts of 1/(n+2) as explained in Theorem 4.1. While this would give us more accurate NML estimates, in practice we found that taking one gradient step was sufficient to achieve good performance on our RL tasks.
A.3.2 COMPARISON OF REWARD CLASSIFIERS
A.3.3 RUNTIME COMPARISONS Below we provide the runtimes for feedforward inference, naive CNML, and meta-NML on each of our evaluation domains. We list both the runtimes for evaluating a single input, and for completing a full epoch of training during RL. These benchmarks were performed on an NVIDIA Titan X Pascal GPU. Per-input runtimes are averaged across 100 samples, and per-epoch runtimes are averaged across 20 epochs.
A.4 EXPERIMENTAL DETAILS
A.4.1 ENVIRONMENTS Zigzag Maze and Spiral Maze: These two navigation tasks require moving through long corridors and avoiding several local optima in order to reach the goal. For example, on Spiral Maze, the agent must not get stuck on the other side of the inner wall, even though that position would be close in L2 distance to the desired goal. On these tasks, a sparse reward is not informative enough for learning, while ordinary classifier methods get stuck in local optima due to poor shaping near the goal. Both of these environments have a continuous state space consisting of the (x, y) coordinates of the agent, ranging from (−4,−4) to (4, 4) inclusive. The action space is the desired velocity in the x and y directions, each ranging from −1 to 1 inclusive. 
Sawyer 2D Pusher: This task involves using a Sawyer arm, constrained to move only in the xy plane, to push a randomly initialized puck to a fixed location on a table. The state space consists of the (x, y, z) coordinates of the robot end effector and the (x, y) coordinates of the puck. The action space is the desired x and y velocities of the arm. Sawyer Door Opening: In this task, the Sawyer arm is attached to a hook, which it must use to open a door to a desired angle of 45 degrees. The door is randomly initialized each time to be at a starting angle of between 0 and 15 degrees. The state space consists of the (x, y, z) coordinates of the end effector and the door angle (in radians); the action space consists of (x, y, z) velocities. Sawyer 3D Pick and Place: The Sawyer robot must pick up a ball, which is randomly placed somewhere on the table each time, and raise it to a fixed (x, y, z) location high above the table. This represents the biggest exploration challenge out of all the manipulation tasks, as the state space is large and the agent would normally not receive any learning signal unless it happened to pick up the ball and raise it, which is unlikely without careful reward shaping. The state space consists of the (x, y, z) coordinates of the end effector, the (x, y, z) coordinates of the ball, and the tightness of the gripper (a continuous value between 0 and 1). The robot can control its (x, y, z) arm velocity as well as the gripper value. A.4.2 GROUND TRUTH DISTANCE METRICS In addition to the success rate plots in Figure 5, we provide plots of each algorithm’s distance to the goal over time according to environment-specific distance metrics. The distance metrics and success thresholds, which were used to compute the success rates in Figure 5, are listed in the table below. A.5 ADDITIONAL ABLATIONS A.5.1 LEARNING IN A DISCRETE, RANDOMIZED ENVIRONMENT In practice, many continuous RL environments such as the ones we consider in Section 6 have state spaces that are correlated at least roughly with the dynamics. For instance, states that are closer together dynamically are also typically closer in the metric space defined by the states. This correlation does not need to be perfect, but as long as it exists, BayCRL can in principle learn a smoothly shaped reward towards the goal. However, even in the case where states are unstructured and completely lack identity, such as in a discrete gridworld environment, the CNML classifier would still reduce to providing an explorationcentric reward bonus, as indicated by Theorem 4.1, ensuring reasonable worst-case performance. To demonstrate this, we evaluate BayCRL on a variant of the Zigzag Maze task where states are first discretized to a 16 × 16 grid, then "shuffled" so that the xy representation of a state does not correspond to its true coordinates and the states are not correlated dynamically. BayCRL manages to solve the task, while a standard classifier method (VICE) does not. Still, BayCRL is more effective in the original state space where generalization is possible, suggesting that both the exploration and reward shaping abilities of the CNML classifier are crucial to its overall performance. A.5.2 FINDING "HIDDEN" REWARDS NOT INDICATED BY SUCCESS EXAMPLES The intended setup for BayCRL (and classifier-based RL algorithms in general) is to provide a set of success examples to learn from, thus removing the need for a manually specified reward function. 
However, here we instead consider the case where a ground truth reward function exists which we do not fully know, and can only query through interaction with the environment. In this case, because the human expert has limited knowledge, the provided success examples may not cover all regions of the state space with high reward. An additional advantage of BayCRL is that it is still capable of finding these "unspecified" goals because of its built-in exploration behavior, whereas other classifier methods would operate solely based on the goal examples provided. To see this, we evaluate our algorithm on a two-sided variant of the Zigzag Maze with multiple goals, visualized in Figure 17 to the right. The agent starts in the middle and is provided with 5 goal examples on the far left side of the maze; unknown to it, the right side contains 5 sparse reward regions which are actually closer to its initial position. As shown in Figures 18 and 19, BayCRL manages to find the sparse rewards while other methods do not. BayCRL, although initially guided towards the provided goal examples on the left, continues to explore in both directions and eventually finds the "hidden" rewards on the right. Meanwhile, VICE focuses solely on the provided goals, and gets stuck in a local optimum near the bottom left corner.
A.6 HYPERPARAMETER AND IMPLEMENTATION DETAILS We describe the hyperparameter choices and implementation details for our experiments here. We first list the general hyperparameters that were shared across runs, then provide tables of additional hyperparameters we tuned over for each domain and algorithm. Goal Examples: For the classifier-based methods in our experiments (VICE and BayCRL), we provide 150 goal examples for each environment at the start of training. These are used as the pool of positive examples when training the success classifier. DDL Reward: We use the version of DDL proposed in Hartikainen et al. (2019) where we provide the algorithm with the ground truth goal state g, then run SAC with a reward function of r(s) = −dπ(s, g), where dπ is the learned dynamical distance function for the policy at the current iteration of training.
A.6.1 ZIGZAG MAZE HYPERPARAMETERS
A.6.2 SPIRAL MAZE HYPERPARAMETERS
A.6.3 SAWYER PUSH HYPERPARAMETERS
A.6.4 SAWYER PICK-AND-PLACE HYPERPARAMETERS
A.6.5 SAWYER DOOR OPENING HYPERPARAMETERS
A.7 PROOF OF THEOREM 1 CONNECTING NML AND INVERSE COUNTS We provide the proof of Theorem 1 here for completeness. Theorem A.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have a separate, independent parameter for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the successful outcomes. Then the CNML probability pCNML(e = 1|s) is equal to $\frac{G(s)+1}{N(s)+G(s)+2}$. For states that are never observed to be successful, we then recover inverse counts $\frac{1}{N(s)+2}$. Proof. In the fully tabular setting, our MLE estimates for p(O|s) are simply given by finding the best parameter ps for each state. The proof then proceeds by simple calculation. For a state with n = N(s) negative occurrences and g = G(s) positive occurrences, the MLE estimate is simply given by g/(n+g). Now for evaluating CNML, we consider appending another instance for each class. The new parameter after appending a negative example is then g/(n+g+1), which then assigns probability (n+1)/(n+g+1) to the negative class. 
Similarly, after appending a positive example, the new parameter is (g+1)/(n+g+1), so we try to assign probability (g+1)/(n+g+1) to the positive class. Normalizing, we have
$$p_{\text{CNML}}(O = 1|s) = \frac{g+1}{n+g+2}. \qquad (9)$$
When considering states that have only been visited on-policy, and are not included in the set of successful outcomes, the likelihood reduces to
$$p_{\text{CNML}}(O = 1|s) = \frac{1}{n+2}. \qquad (10)$$
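As a sanity check on the closed form above, the following small script (an illustration, not part of the paper) recomputes the two augmented MLE solutions for a state with n negative and g positive occurrences and normalizes them.

```python
def cnml_success_probability(n, g):
    """Tabular CNML p(e=1|s) for a state with n negative and g positive occurrences."""
    # MLE after appending one more negative: parameter g / (n + g + 1),
    # which assigns (n + 1) / (n + g + 1) to the negative class.
    p_neg_branch = (n + 1) / (n + g + 1)
    # MLE after appending one more positive: parameter (g + 1) / (n + g + 1),
    # which assigns (g + 1) / (n + g + 1) to the positive class.
    p_pos_branch = (g + 1) / (n + g + 1)
    # Normalize the two branch likelihoods, as in Equation 9.
    return p_pos_branch / (p_pos_branch + p_neg_branch)

assert abs(cnml_success_probability(5, 0) - 1 / 7) < 1e-12   # inverse counts 1/(n+2)
assert abs(cnml_success_probability(3, 2) - 3 / 7) < 1e-12   # (g+1)/(n+g+2)
```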
1. What is the main contribution of the paper regarding reinforcement learning problems?
2. How does the proposed approach differ from previous works, specifically Fu et al.'s (2018b) event framework?
3. What are the strengths and weaknesses of the proposed CNML classifier compared to standard neural network classifiers?
4. Can you provide further analysis or evidence to support the conclusion that CNML performs better than other methods in terms of uncertainty awareness, exploration, and goal-oriented reward shaping?
5. How does the paper's methodology handle differences in task distribution compared to the data used for training the model parameters in meta-learning?
6. Can you elaborate on the computation complexity analysis of the proposed method, particularly when applying meta-learning to solve the CNML problem?
7. Would it be possible to test the CNML estimation with full NN convergence on the entire augmented dataset every time to demonstrate the trade-off between accuracy and computation complexity when compared with meta-learning?
8. Can you offer insights into why BayCRL may converge slower in some problems, such as Zigzag, Spiral, and Sawyer, and how this relates to the quality of the reward signal?
9. Could you explain how to interpret the result shown in Figure 4, specifically how BayCRL outperforms in terms of sample complexity?
10. Are there any minor errors or typos in the paper, such as duplicated content or inconsistent notation?
Review
Review This manuscript aims to solve reinforcement learning problems where the reward is unknown but a set of successful states is available. Iteratively, it trains a classifier using the provided successful states as positives and on-policy samples as negatives, and uses its predictions as the reward function to learn an RL policy. It may be worth clearly explaining the connection to and improvement over the work of Fu et al. (2018b, Variational inverse control with events: A general framework for data-driven reward definition) in the introduction, since both mainly solve the same problems. Fu's paper first introduced the event framework to generalize inverse RL, which also solves sparse RL problems by iteratively training a classifier to predict the probability of successful states. Compared to previous work, it seems a major difference in this paper is the change of the event classification model, replacing the neural network classifier with a CNML classifier. It could enhance the contribution if a good theoretical analysis were provided, e.g., why the manuscript concludes that CNML performs better than standard neural network classifiers in terms of uncertainty-aware reward estimation, and how the CNML model connects with better exploration and goal-oriented reward shaping. Other comments are written below.
Theorem 4.1 is defined, but it seems it is not used in the following sections. It would be helpful if the following sections could explain whether it is used to develop the algorithm or to explain empirical findings.
It may need more evidence in Section 4.3 to support the conclusion that CNML is doing better at reward shaping. It is not easy to draw a conclusion based on a specific example in Figure 1. Similarly, Section 4.1 also tries to show that CNML gives rewards that can improve exploration and become goal oriented using the same example.
Section 5.1 describes how to use meta-learning to approximately solve the CNML problem so that it can reduce computation complexity. It is an interesting idea, but it may need more analysis. Some questions are listed below. The distribution of tasks usually has different samples, while the CNML problem has similar datasets for its tasks, as each task only adds one sample to the original dataset. Does this difference have an influence on learning the model parameters in meta-learning? It would be interesting to analyze the computation complexity, as it is the main reason for applying meta-learning to solve CNML. In experiments, it may be interesting to test the CNML estimation with full NN convergence on the entire augmented dataset every time, to show the trade-off between accuracy and computation complexity when compared with meta-learning.
Based on the number of epochs, it seems BayCRL converges more slowly in some problems, such as Zigzag, Spiral, and Sawyer. It may be interesting to give some insights, as a good reward may speed up the learning process. Moreover, it may be more accurate to consider the time complexity of each epoch when validating the overall time complexity.
On page 7, it may need an explanation of how BayCRL outperforms in terms of sample complexity as shown in Figure 4.
Some minor comments:
On page 5, "Appendix Appendix A.5" -> "Appendix A.5".
It seems Appendix A.2 duplicates Section 5.1, particularly Eq. 6 and the equations for p_meta-NML and \theta_y.
Is there any reason why line 7 in Algorithm 1 skips meta-learning?
ICLR
Title Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples Abstract Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to classify states as successful or not. We argue that, with appropriate representation and regularization, such a classifier can guide a reinforcement learning algorithm to an effective solution. However, as we will show, this requires the classifier to make uncertainty-aware predictions that are very difficult with standard deep networks. To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood distribution. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions from data, while also being able to guide algorithms towards the specified goal more effectively. We show how using amortized normalized maximum likelihood for reward inference is able to provide effective reward guidance for solving a number of challenging navigation and robotic manipulation tasks which prove difficult for other algorithms. 1 INTRODUCTION While reinforcement learning (RL) has been shown to successfully solve problems with careful reward design (Rajeswaran et al., 2018; OpenAI et al., 2019), RL in its most general form, with no assumptions on the dynamics or reward function, requires solving a challenging uninformed search problem in which rewards are sparsely observed. Techniques which explicitly provide “rewardshaping” (Ng et al., 1999), or modify the reward function to guide learning, can help take some of the burden off of exploration, but shaped rewards can be difficult to obtain without domain knowledge. In this paper, we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective. Instead of requiring a reward function designed for an objective, our method instead assumes a user-provided set of successful outcome examples: states in which the desired task has been accomplished successfully. The algorithm aims to estimate the distribution over these states and maximize the probability of reaching states that are likely under the distribution. Prior work on learning from success examples (Fu et al., 2018b; Zhu et al., 2020) focused primarily on alleviating the need for manual reward design. In our work, we focus on the potential for this mode of task specification to produce more tractable RL problems and solve more challenging classes of tasks. Intuitively, when provided with explicit examples of successful states, the RL algorithm should be able to direct its exploration, rather than simply hope to randomly chance upon high reward states. The main challenge in instantiating this idea into a practical algorithm is performing appropriate uncertainty quantification in estimating whether a given state corresponds to a successful outcome. 
Our approach trains a classifier to distinguish successful states, provided by the user, from those generated by the current policy, analogously to generative adversarial networks (Goodfellow et al., 2014) and previously proposed methods for inverse reinforcement learning (Fu et al., 2018a). In general, such a classifier is not guaranteed to provide a good optimization landscape for learning the policy. We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood (NML) distribution produces better reward guidance for learning. We also connect our approach to count-based exploration methods, showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states, while also discussing how it improves over count-based exploration in the presence of good generalization. We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML, and show how this form of reward inference allows us to solve difficult problems more efficiently, providing experimental results which outperform existing algorithms on a number of navigation and robotic manipulation domains. 2 RELATED WORK A number of techniques have been proposed to improve exploration.These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner (Wiering and Schmidhuber, 1998; Auer et al., 2002; Schaul et al., 2011; Houthooft et al., 2016; Pathak et al., 2017; Tang et al., 2017; Stadie et al., 2015; Bellemare et al., 2016; Burda et al., 2018a; O’Donoghue, 2018) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions (Strens, 2000; Osband et al., 2013; 2016). While these techniques are uninformed about the actual task, we consider a constrained set of problems where examples of successes can allow for more task-directed exploration. In real world problems, designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge (Andrychowicz et al., 2020), access to privileged information about the environment (Levine et al., 2016) and/or a human in the loop providing rewards (Knox and Stone, 2009; Singh et al., 2019b). Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL (Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Fu et al., 2018a). This requires expensive expert demonstrations to be provided to the agent. In contrast, our work has the minimal requirement of simply providing successful outcome states, which can be done cheaply and more intuitively. This subclass of problems is also related to goal conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Zhu et al., 2017; Andrychowicz et al., 2017; Nair et al., 2018; Veeriah et al., 2018; Rauber et al., 2018; Warde-Farley et al., 2018; Colas et al., 2019; Ghosh et al., 2019; Pong et al., 2020) but is more general, since it allows for a more abstract notion of task success. A core idea behind our work is using a Bayesian classifier to learn a suitable reward function. Bayesian inference with expressive models and high dimensional data can often be intractable, requiring assumptions on the form of the posterior (Hoffman et al., 2013; Blundell et al., 2015; Maddox et al., 2019). 
In this work, we build on the concept of normalized maximum likelihood (Rissanen, 1996; Shtar’kov, 1987), or NML, to learn Bayesian classifiers. Although NML is typically considered from the perspective of optimal coding (Grünwald, 2007; Fogel and Feder, 2018), we show how it can be used to learn success classifiers, and discuss its connections to exploration and reward shaping in RL. 3 PRELIMINARIES In this paper, we study a modified reinforcement learning problem, where instead of the standard reward function, the agent is provided with successful outcome examples. This reformulation not only provides a modality for task specification that may be more natural for users to provide in some settings (Fu et al., 2018b; Zhu et al., 2020; Singh et al., 2019a), but, as we will show, can also make learning easier. We also derive a meta-learned variant of the conditional normalized maximum likelihood (CNML) distribution for representing our reward function, in order to make evaluation tractable. We discuss background on successful outcome examples and CNML in this section. 3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES We follow the framework proposed by Fu et al. (2018b) and assume that we are provided with a Markov decision process (MDP) without a reward function, given by M, where M = (S,A, T , γ, µ0), as well as successful outcome examples S+ = {sk+}Kk=1, which is a set of states in which the desired task has been accomplished. This formalism is easiest to describe in terms of the control as inference framework (Levine, 2018). The relevant graphical model in Figure 9 consists of states and actions, as well as binary success variables et which represent the occurrence of a particular event. The agent’s objective is to cause this event to occur (e.g., a robot that is cleaning the floor must cause the “floor is clean” event to occur). Formally, we assume that the states in S+ are sampled from the distribution p(st|et = True) – that is, states where the desired event has taken place. In this work, we focus on efficient methods for solving this reformulation of the RL problem, by utilizing a novel uncertainty quantification method to represent the distribution p(et|st). In practice, prior methods that build on this and similar reformulations of the RL problem (Fu et al., 2018b) derive an algorithm where the reward function in RL is produced by a classifier that estimates p(et = True|st). Following the adversarial inverse reinforcement learning (AIRL) derivation (Fu et al., 2018a; Finn et al., 2016), it is possible to show that the correct source of negative examples for training this classifier is the state distribution of the policy itself, π(s). This insight results in a simple algorithm: at each iteration of the algorithm, the policy is updated to maximize the current reward, given by log p(et = True|st), then samples from the policy are added to the set of negative examples S−, and the classifier is retrained on the original positive set S+ and the updated negative set S−. 3.2 CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD Our method builds on the principle of conditional normalized maximum likelihood (NML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018), which we review briefly. CNML is a method for performing k-way classification, given a model class Θ and a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, and has been shown to provide better calibrated predictions and uncertainty estimates with minimax regret guarantees (Bibas et al., 2019). 
To predict the class of a query point xq, CNML constructs k augmented datasets by adding xq with a different label in each dataset, which we write as D ∪ (xq, y = i), i ∈ (1, 2, ..., k). CNML then defines the class distribution by solving the maximum likelihood estimation problem at query time for each of these augmented datasets to convergence, and normalizing the likelihoods as follows:
$$p_{\text{CNML}}(y = i|x_q) = \frac{p_{\theta_i}(y = i|x_q)}{\sum_{j=1}^{k} p_{\theta_j}(y = j|x_q)}, \qquad \theta_i = \arg\max_{\theta \in \Theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D} \cup (x_q, y=i)}\big[\log p_\theta(y|x)\big] \qquad (1)$$
Intuitively, if xq is close to other datapoints in D, then the model will struggle to assign a high likelihood to labels that differ substantially from other nearby points. However, if xq is far from all datapoints in D, then the different augmented MLE problems can easily classify xq as an arbitrary class, providing us with a likelihood closer to uniform. We refer readers to Grünwald (2007) for an in-depth discussion. A major limitation of CNML is that it requires training an entire neural network to convergence on the entire augmented dataset every time we want to evaluate a test point’s class probabilities. We will address this issue in Section 5.
4 BAYESIAN SUCCESS CLASSIFIERS FOR REWARD INFERENCE Ideally, training a classifier with the policy samples as negative examples as described in Section 3.1 should yield a smooth decision boundary between the well-separated negative and positive examples. For example, Figure 2 depicts a simple 1-D scenario, where the agent starts at the left (s0) and the positive outcomes are at the right (s+) side of the environment. Since the positives are on the right and the negatives are on the left, one might expect a classifier to gradually increase its prediction of a success as we move to the right (Figure 2a), which would provide a dense reward signal for the policy to move to the right. However, this idealized scenario rarely happens in practice. Without suitable regularization, the decision boundary between the positive and negative examples may not be smooth. In fact, the decision boundary of an optimal classifier may take on the form of a sharp boundary anywhere between the positive and negative examples in the early stages of training (Figure 2b). As a result, the classifier might provide little to no reward signal for the policy, since it can assign arbitrarily small probabilities to the states sampled from the policy. We note that this issue is not pathological: our experiments in Section 6 show that this poor reward signal issue happens in practice and can greatly hinder learning. In this section, we will discuss how an appropriate classifier training method can avoid these uninformative rewards.
4.1 REGULARIZED SUCCESS CLASSIFIERS VIA NORMALIZED MAXIMUM LIKELIHOOD
Algorithm 1 RL with CNML-Based Success Classifiers
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4:   Add on-policy examples to S− by executing π.
5:   Sample ntest points from S+ (label 1) and ntest points from S− (label 0) to construct a dataset D
6:   Assign state rewards as r(s) = pCNML(e = 1|s, D)
7:   Train π with RL algorithm
To create effective shaping, we would like our classifier to provide a more informative reward when evaluated at rarely visited states that lie on the path to successful outcomes. 
A more informative reward function is one that assigns higher rewards to the fringe of the states visited by the policy, because this will encourage the policy to explore and move towards the desired states. We can construct such a reward function by imposing the prior that novel states have a non-negligible chance of being a success state. To do so, we train a Bayesian classifier using conditional normalized maximum likelihood (CNML) (Shtar’kov, 1987), as we described in Section 3, which corresponds to imposing a uniform prior on the output class probabilities. To use CNML for reward inference, the procedure is similar to the one described in Section 3. We construct a dataset using the provided successful outcomes as positives and the on-policy samples as negatives. However, the label probabilities for RL are then produced by the CNML procedure described in Equation 1 to obtain the rewards r(s) = pCNML(e = 1|s). To illustrate how this affects reward assignment during learning, we visualize a potential assignment of rewards with a CNML-based classifier on the problem described earlier. When the success classifier is trained with CNML instead of standard maximum likelihood, intermediate unseen states would receive non-zero rewards rather than simply having vanishing likelihoods like in Figure 2b. The didactic illustrations in Fig 2c and Fig 2d show how the rewards obtained via NML might incentivize exploration. In fact, the CNML likelihood corresponds to a form of count-based exploration (as we show below), while also providing more directed shaping towards the goal when generalization exists across states.
4.2 RELATIONSHIP TO COUNT-BASED EXPLORATION In this section we relate the success likelihoods obtained via CNML to commonly used exploration methods based on counts. Formally, we prove that the success classifier trained with CNML is equivalent to a version of count-based exploration with a sparse reward function in the absence of any generalization across states (i.e., a fully tabular setting). Theorem 4.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have an independent parameter for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the set of goal examples. Then the CNML success probability pCNML(e = 1|s) is equal to $\frac{G(s)+1}{N(s)+G(s)+2}$. For states that are not represented in the goal examples, i.e. G(s) = 0, we then recover inverse counts $\frac{1}{N(s)+2}$. Refer to Appendix A.7 for a full proof.
4.3 REWARD SHAPING WITH BAYESIAN SUCCESS CLASSIFIERS While the analysis above suggests that a CNML classifier would give us something akin to a sparse reward plus an exploration bonus, the structure of the problem and the state space actually provides us with more information to guide us towards the goal. In most environments (Brockman et al., 2016; Yu et al., 2019) the state space does not consist of independent and uncorrelated categorical variables, and is instead provided in a representation that relates at least roughly to the dynamics structure in the environment. For instance, states close to the goal dynamically are also typically close to the goal in the metric space defined by the states. Indeed, this observation is the basis of many commonly used heuristic reward shaping methods, such as rewards given by Euclidean distance to target states. 
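To make the reward-assignment procedure of Section 4.1 concrete before discussing how state-space structure sharpens it, the sketch below evaluates a naive CNML success probability for a query state with a linear model class. The use of scikit-learn and the weak-regularization constant are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def naive_cnml_reward(query_state, positives, negatives):
    """r(s) = p_CNML(e=1|s): refit an MLE classifier once per proposed label, then normalize."""
    X = np.vstack([positives, negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    likelihoods = []
    for proposed_label in (0, 1):
        X_aug = np.vstack([X, query_state[None, :]])
        y_aug = np.append(y, proposed_label)
        # A large C approximates an unregularized maximum-likelihood fit over the linear model class.
        clf = LogisticRegression(C=1e6, max_iter=1000).fit(X_aug, y_aug)
        likelihoods.append(clf.predict_proba(query_state[None, :])[0, proposed_label])
    return likelihoods[1] / sum(likelihoods)  # normalized probability of the success label
```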
In this case, the task specification can actually provide more information than simply performing uninformed count-based exploration. Since the uncertainty-aware classifier described in Section 4.1 is built on top of features that are correlated with environment dynamics, and is trained with knowledge of the desired outcomes, it is able to incentivize task-aware directed exploration. As compared to CNML without generalization in Fig 2c, we expect the intermediate rewards to provide more shaping towards the goal. This phenomenon is illustrated intuitively in Fig 2d, and visualized and demonstrated empirically in our experimental analysis in Section 6, where BayCRL is able to significantly outperform methods for task-agnostic exploration. 4.4 OVERVIEW In this section, we introduced the idea of Bayesian classifiers trained via CNML as a means to provide rewards for RL problems specified by examples of successful outcomes. Concretely, a CNML-based scheme has the following advantages: • Natural exploration behavior due to accurate uncertainty estimation in the output success probabilities. This is explained by the connection between CNML and count-based exploration in the discrete case, and benefits from additional generalization in practical environments, as we will see in Section 6. • Better reward shaping by utilizing goal examples to guide the agent more quickly and accurately towards the goal. We have established this benefit intuitively, and will validate it empirically through extensive visualizations and experiments in Section 6. 5 BAYCRL: TRAINING BAYESIAN SUCCESS CLASSIFIERS FOR OUTCOME DRIVEN RL VIA META-LEARNING AND CNML In Section 4, we discussed how Bayesian success classifiers can incentivize exploration and provide reward shaping to guide RL. However, the reward inference technique via CNML described in Section 4.1 is computationally intractable, as it requires optimizing maximum likelihood estimation problems to convergence on every data point we want to query. In this section, we describe a novel approximation that allows us to instantiate this method in practice. 5.1 META-LEARNING FOR CNML We adopt ideas from meta-learning to amortize the cost of obtaining the CNML distribution. As noted in Section 4.1, the computation of the CNML distribution involves repeatedly solving maximum likelihood problems. While computationally daunting, these problems share a significant amount of common structure, which we can exploit to quickly obtain CNML estimates. One set of techniques that are directly applicable is meta-learning for few shot classification. Meta-learning uses a distribution of training problems to explicitly learn models that can quickly adapt to new problems. To apply meta-learning to the CNML problem, we can formulate each of the maximum likelihood problems described in Equation 1 as a separate task for meta-learning, and apply any standard meta-learning technique to obtain a model capable of few-shot adaptation to the MLE problems required for CNML. While any meta-learning algorithm is applicable, we found model agnostic meta-learning (MAML)(Finn et al. (2017)) to be an effective choice of algorithm. In short, MAML tries to learn a model that can quickly adapt to new tasks via a few steps of gradient descent. 
This procedure is illustrated in Fig 10, and can be described as follows: given a dataset D = {(x0, y0), (x1, y1), ..., (xn, yn)}, 2n different tasks τi can be constructed, each corresponding to performing maximum likelihood estimation on the dataset with a certain proposed label for xi: $\max_\theta \mathbb{E}_{(x,y) \sim \mathcal{D} \cup (x_i, y=0)}[\log p(y|x, \theta)]$ or $\max_\theta \mathbb{E}_{(x,y) \sim \mathcal{D} \cup (x_i, y=1)}[\log p(y|x, \theta)]$. Given these constructed tasks S(τ), meta-training proceeds as described in Finn et al. (2017):
$$\max_\theta \; \mathbb{E}_{\tau \sim S(\tau)}\big[\mathcal{L}(\tau, \theta')\big], \quad \text{s.t.} \;\; \theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\tau, \theta). \qquad (2)$$
This training procedure gives us parameters θ that can then be quickly adapted to provide the CNML distribution simply by performing a step of gradient descent. The model can be queried for the CNML distribution by starting from θ and taking one step of gradient descent for the query point augmented dataset, each with a different potential label. These likelihoods are then normalized to provide the CNML distribution as follows:
$$p_{\text{meta-NML}}(y|x;\mathcal{D}) = \frac{p_{\theta_y}(y|x)}{\sum_{y' \in \mathcal{Y}} p_{\theta_{y'}}(y'|x)}, \qquad \theta_y = \theta - \alpha \nabla_\theta \mathbb{E}_{(x_i, y_i) \sim \mathcal{D} \cup (x, y)}\big[\mathcal{L}(x_i, y_i, \theta)\big]. \qquad (3)$$
This algorithm, which we call meta-NML, allows us to obtain normalized likelihood estimates without having to retrain maximum likelihood to convergence at every single query point, since the model can now solve maximum likelihood problems of this form very quickly. A complete detailed description and pseudocode of this algorithm are provided in Appendix A.2.
| | Feedforward | Meta-NML | Naive CNML |
| Single input point | 0.0004s | 0.0090s | 15.19s |
| Epoch of RL | 23.50s | 39.05s | 4hr 13min 34s |
This makes it several orders of magnitude faster than naive CNML, which would normally require multiple passes through the entire dataset on each input point in order to train to convergence.
5.2 APPLYING META-NML TO SUCCESS CLASSIFICATION
Algorithm 2 BayCRL: Bayesian Classifiers for RL
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4:   Collect on-policy examples to add to S− by executing π.
5:   if iteration i mod k == 0 then
6:     Sample ntrain states from S− to create 2·ntrain meta-training tasks
7:     Sample ntest total test points equally from S+ (label 1) and S− (label 0)
8:     Meta-train θR via meta-NML using Equation 2
9:   Assign state rewards via Equation 4
10:  Train π with RL algorithm
We apply the meta-NML algorithm described above to learning Bayesian success classifiers for providing rewards for reinforcement learning, in our proposed algorithm, which we term BayCRL (Bayesian classifiers for reinforcement learning). Similarly to Fu et al. (2018b), we can train our Bayesian classifier by first constructing a dataset D for binary classification. This is done by using the provided examples of successful outcomes as positives, and on-policy examples collected by the policy as negatives, balancing the number of sampled positives and negatives in the dataset. Given this dataset, the Bayesian classifier parameters θR can be trained via meta-NML as described in Equation 2. The classifier can then be used to directly and quickly assign rewards to a state s according to its probabilities r(s) = pmeta-NML(e = 1|s) (via a step of gradient descent, as described in Equation 4), and perform standard reinforcement learning.
$$p_{\text{meta-NML}}(e = 1|s) = \frac{p_{\theta_1}(e = 1|s)}{\sum_{i \in \{0,1\}} p_{\theta_i}(e = i|s)} \qquad (4)$$
$$\theta_i = \theta_R - \alpha \nabla_\theta \mathbb{E}_{(s_j, e_j) \sim \mathcal{D} \cup (s, e=i)}\big[\mathcal{L}(e_j, s_j, \theta)\big], \quad \text{for } i \in \{0, 1\} \qquad (5)$$
An overview of this algorithm is provided in Algorithm 2, and full details are in Appendix A.2. 
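As a concrete illustration of the query-time computation in Equations 4 and 5, the following sketch uses a logistic-regression model: starting from meta-learned parameters, one gradient step is taken on the dataset augmented with each proposed label, and the two resulting likelihoods are normalized. The model choice and learning rate are assumptions for the example, not the configuration used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_nml_reward(theta_meta, X, y, query_state, lr=0.1):
    """One inner gradient step per proposed label from the meta-learned init, then normalize."""
    likelihoods = []
    for proposed_label in (0, 1):
        X_aug = np.vstack([X, query_state[None, :]])
        y_aug = np.append(y, proposed_label)
        # Single gradient step on the augmented dataset's log-likelihood (Equation 5).
        preds = sigmoid(X_aug @ theta_meta)
        grad = X_aug.T @ (preds - y_aug) / len(y_aug)
        theta_adapted = theta_meta - lr * grad
        p_success = sigmoid(query_state @ theta_adapted)
        likelihoods.append(p_success if proposed_label == 1 else 1.0 - p_success)
    return likelihoods[1] / (likelihoods[0] + likelihoods[1])  # Equation 4
```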
The rewards start off at an uninformative value of 0.5 for all unvisited states at the beginning, and close to 1 for successful outcomes. As training progresses, more states are visited, added to the buffer and BayCRL starts to assign them progressively lower reward as they get visited more and more, thereby encouraging visiting of under-visited states. At convergence, all the non successful states will have a reward of close to 0 and states at the goal will have a reward of 0.5, since the numbers of positive and negative labels for successful outcomes will be balanced as described above. 6 EXPERIMENTAL EVALUATION In our experimental evaluation we aim to answer the following questions: (1) Do the learning dynamics of prior classifier-based reward learning methods provide informative rewards for RL? (2) Does using BayCRL help address the exploration challenge when solving RL problems specified by successful outcomes? (3) Does using BayCRL help provide better reward shaping than simply performing naïvely uninformed exploration? To evaluate these questions, we evaluate our proposed algorithm BayCRL with the following setup. Further details and videos can be found at https://sites.google.com/view/baycrl/home 6.1 EXPERIMENTAL SETUP We start off by understanding the algorithm behavior by evaluating it on maze navigation problems, which require avoiding several local optima before truly reaching the goal. Then, to evaluate our method in more complex domains, we consider three robotic manipulation tasks that were previously covered in Singh et al. (2019a) with a Sawyer robot arm: door opening, tabletop object pushing, and 3D object picking. As we show in our results, exploration in these environments is challenging and using naively chosen reward shaping often does not solve the problem at hand. More details on each environment and their associated challenges are available in Appendix A.4.1. We compare with a number of prior algorithms and ablations. To provide a comparison with a standard previous method which uses success classifiers trained with an IRL-based adversarial method, we include the VICE algorithm (Fu et al., 2018b). Note that this algorithm is quite related to BayCRL, but it uses a standard maximum likelihood classifier rather than a Bayesian classifier trained with CNML and meta-learning. We also include a comparison with DDL, a recently proposed technique for learning dynamical distances (Hartikainen et al., 2019). We additionally include comparisons to algorithms for uninformed exploration to show that BayCRL does a more directed form of exploration and reward shaping. To provide an apples-to-apples comparison, we use the same VICE method for training classifiers, but combine it with novelty-based exploration based on random network distillation (Burda et al., 2018b) for the robotic manipulation tasks, and oracle inverse count bonuses for the maze navigation tasks. Finally, to demonstrate the importance of well-shaped rewards, we compare to running Soft Actor-Critic (Haarnoja et al., 2018), a standard RL algorithm for continuous domains, with two naive reward functions: a sparse reward at the goal, and a heuristically shaped reward which uses L2 distance to the goal state. More details on each algorithm and the hyperparameters used are included in Appendix A.6. 6.2 COMPARISONS WITH PRIOR ALGORITHMS We compare with prior algorithms on the domains described above. 
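For reference, the two naive reward functions used as baselines above could be written as in the following sketch; the success threshold is an assumed value for illustration rather than the one used in the experiments.

```python
import numpy as np

def sparse_reward(state, goal, threshold=0.05):
    # Reward only once the agent is within a small radius of the goal.
    return float(np.linalg.norm(state - goal) < threshold)

def l2_shaped_reward(state, goal):
    # Heuristic shaping: negative Euclidean distance to the goal state.
    return -float(np.linalg.norm(state - goal))
```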
As we can see in Fig 5, BayCRL is able to very quickly learn how to solve these challenging exploration tasks, often reaching better asymptotic performance than most prior methods, and doing so more efficiently than VICE (Fu et al., 2018b) or DDL (Hartikainen et al., 2019). This suggests that BayCRL is able to provide directed reward shaping and exploration that is substantially better than standard classifier-based methods (e.g., VICE). To isolate whether the benefits purely come from exploration or also from task-aware reward shaping, we compare with methods that only perform uninformed, task-agnostic exploration. On the maze environments, where we can discretize the state space, we compute ground truth count-based bonuses for exploration. For the higher dimensional robotics tasks, we use RND (Burda et al., 2018b). From these comparisons, shown in Fig 5, it is clear that BayCRL significantly outperforms methods that use novelty-seeking exploration, but do not otherwise provide effective reward shaping. In combination with our visualizations in Section 6.4, this suggests that BayCRL is providing useful task-aware reward shaping more effectively than uniformed exploration methods. We also compare BayCRL to a manually heuristically-designed shaped reward function, based on Euclidean distance. As shown in Fig 5, BayCRL generally outperforms simple manual shaping in terms of sample complexity and asymptotic performance, indicating that the learned shaping is non-trivial and adapted to the task. 6.3 ABLATIONS We first evaluate the importance of meta-learning for estimating the NML distribution. In Figure 6, we see that naively estimating the NML distribution by taking a single gradient step and following the same process as evaluating meta-NML, but without any meta-training, results in much worse performance. Second, we analyze the importance of making the BayCRL classifier aware of the task being solved, to understand whether BayCRL is informed by the success examples or simply approximates count-based exploration. To that end, we modify the training procedure so that the dataset D consists of only the on-policy negatives, and add the inferred reward from the Bayesian classifier to the reward obtained by a standard MLE classifier (similarly to the VICE+RND baseline). We see that this performs poorly, showing that the BayCRL classifier is doing more than just performing count-based exploration, and benefits from better reward shaping due to the provided goal examples. Further ablations are available in Appendix A.5. 6.4 ANALYSIS OF BAYCRL BayCRL and Reward Shaping. To better understand how BayCRL provides reward shaping, we visualize the rewards for various slices along the z axis on the Sawyer Pick task, an environment which presents a significant exploration challenge. In Fig 7 we see that the BayCRL rewards clearly correlate with the distance to the object’s goal position, shown as a white star, thus guiding the robot to raise the ball to the desired location even if it has never reached the goal before. In contrast, the MLE classifier has a sharp, poorly-shaped decision boundary. BayCRL and Exploration. Next, to illustrate the connection between BayCRL and exploration, we compare the states visited by BayCRL (which uses a meta-NML classifier) and by VICE (which uses a standard L2-regularized classifier) in Figure 8. We see that BayCRL naturally incentivizes the agent to visit novel states, allowing it to navigate around local minima and reach the true goal. 
In contrast, VICE learns a misleading reward function that prioritizes closeness to the goal in xy space, causing the agent to stay on the wrong side of the wall. Interestingly, despite incentivizing exploration, BayCRL does not simply visit all possible states; at convergence, it has only covered around 70% of the state space. In fact, we see in the scatterplots in Figure 8 that BayCRL prioritizes states that bring it closer to the goal and ignores ones that don’t, thus making use of the goal examples provided to it. This suggests that BayCRL benefits from a combination of novelty-seeking behavior and effective reward shaping, allowing it to choose new states strategically. 7 DISCUSSION In this work, we consider a subclass of reinforcement learning problems where examples of successful outcomes specify the task. We analyze how solutions via standard success classifiers suffer from shortcomings, and training Bayesian classifiers allows for better exploration to solve challenging problems. We discuss how the NML distribution can provide us a way to train such Bayesian classifiers, providing benefits of exploration and reward shaping. To make learning tractable, we propose a novel meta-learning approach to amortize the NML process. While this work has shown the effectiveness of Bayesian classifiers for reward inference for tasks in simulation, it would be interesting to scale this solution to real world problems. Additionally, obtaining a theoretical understanding of how reward shaping interacts with learning dynamics would be illuminating in designing reward schemes. A APPENDIX A.1 GRAPHICAL MODEL FOR CONTROL AS INFERENCE A.2 DETAILED DESCRIPTION OF META-NML We provide a detailed description of the meta-NML algorithm described in Section 5, and the details of the practical algorithm. Given a dataset D = {(x0, y0), (x1, y1), .., (xn, yn)}, the meta-NML procedure proceeds by first constructing k ∗ n tasks from these data points, for a k shot classification problem. We will keep k = 2 for simplicity in this description, in accordance with the setup of binary success classifiers in RL. Each task τi is constructed by augmenting the dataset with a negative label D ∪ (xi, y = 0) or a positive label D ∪ (xi, y = 1). Now that each task consists of solving the maximum likelihood problem for its augmented dataset, we can directly apply standard meta-learning algorithms to this setting. Building off the ideas in MAML (Finn et al., 2017), we can then train a set of model parameters θ such that after a single step of gradient descent it can quickly adapt to the optimal solution for the MLE problem on any of the augmented datasets. This is more formally written as max θ Eτ∼S(τ)[L(τ, θ′)], s.t θ′ = θ − α∇θL(τ, θ) (6) where L represents a standard classification loss function, α is the learning rate, and the distribution of tasks p(τ) is constructed as described above. For a new query point x, these initial parameters can then quickly be adapted to provide the CNML distribution by taking a gradient step on each augmented dataset to obtain the approximately optimal MLE solution, and normalizing these as follows: pmeta-NML(y|x;D) = pθy (y|x)∑ y∈Y pθy (y|x) , θy = θ − α∇θE(xi,yi)∼D∪(x,y)[L(xi, yi, θ)] This algorithm in principle can be optimized using any standard stochastic optimization method such as SGD, as described in Finn et al. (2017), backpropagating through the inner loop gradient update. 
For the specific problem setting that we consider, we have to employ some optimization tricks in order to enable learning: A.2.1 IMPORTANCE WEIGHTING ON QUERY POINT Since only one datapoint is augmented to the training set at query time for CNML, it can get challenging for stochastic gradient descent to pay attention to this datapoint with increasing dataset sizes. For example, if we train on an augmented dataset of size 2048 by cycling through it in batch sizes of 32, then only 1 in 64 batches would include the query point itself and allow the model to adapt to the proposed label, while the others would lead to noise in the optimization process, potentially worsening the model’s prediction on the query point. In order to make sure the optimization considers the query point, we include the query point and proposed label (xq, y) in every minibatch that is sampled, but downweight the loss computed on that point such that the overall objective remains unbiased. This is simply doing importance weighting, with the query point downweighted by a factor of d b−1N e where b is the desired batch size and N is the total number of points in the original dataset. To see why the optimization objective remains the same, we can consider the overall loss over the dataset. Let fθ be our classifier, L be our loss function, D′ = {(xi, yi)}Ni=1 ∪ (xq, y) be our augmented dataset, and Bk be the kth batch seen during training. Using standard SGD training that cycles through batches in the dataset, the overall loss on the augmented dataset would be: L(D′) = ( N∑ i=0 L(fθ(xi), yi) ) + L(fθ(xq), y) If we instead included the downweighted query point in every batch, the overall loss would be: L(D′) = d b−1N e∑ k=0 ∑ (xi,yi)∈Bk ( L(fθ(xi), yi) + 1 d b−1N e L(fθ(xq), y) ) = d b−1N e∑ k=0 ∑ (xi,yi)∈Bk L(fθ(xi), yi) + db− 1 N e 1 d b−1N e L(fθ(xq), y) = ( N∑ i=0 L(fθ(xi), yi) ) + L(fθ(xq), y) which is the same objective as before. This trick has the effect of still optimizing the same max likelihood problem required by CNML, but significantly reducing the variance of the query point predictions as we take additional gradient steps at query time. As a concrete example, consider querying a meta-CNML classifier on the input shown in Figure 11. If we adapt to the augmented dataset without including the query point in every batch (i.e. without importance weighting), we see that the query point loss is significantly more unstable, requiring us to take more gradient steps to converge. A.2.2 KERNEL WEIGHTED TRAINING LOSS The augmented dataset consists of points from the original datasetD and one augmented point (xq, y). Given that we mostly care about having the proper likelihood on the query point, with an imperfect optimization process, the meta-training can yield solutions that are not very accurately representing true likelihoods on the query point. To counter this, we introduce a kernel weighting into the loss function in Equation 6 during meta-training and subsequently meta-testing. The kernel weighting modifies the training loss function as: max θ Eτ∼S(τ)[E(x,y)∼τK(x, xτ )L(x, y, θ′)], s.t θ′ = θ−α∇θE(x,y)∼τK(x, xτ )L(x, y, θ) (7) where xτ is the query point for task τ and K is a choice of kernel. We typically choose exponential kernels centered around xτ . 
Intuitively, this allows the meta-optimization to mainly consider the datapoints that are copies of the query point in the dataset, or are similar to the query point, and ensures that they have the correct likelihoods, instead of receiving interfering gradient signals from the many other points in the dataset. To make hyperparameter selection intuitive, we designate the strength of the exponential kernel by a parameter λdist, which is the Euclidean distance away from the query point at which the weight becomes 0.1. Formally, the weight of a point x in the loss function for query point xτ is computed as: K(x, xτ ) = exp {− 2.3 λdist ||x− xτ ||2} (8) A.2.3 META-TRAINING AT FIXED INTERVALS While in principle meta-NML would retrain with every new datapoint, in practice we retrain metaNML once every k epochs. (In all of our experiments we set k = 1, but we could optionally increase k if we do not expect the meta-task distribution to change much between epochs.) We warm-start the meta-learner parameters from the previous iteration of meta-learning, so every instance of meta-training only requires a few steps. We find that this periodic training is a reasonable enough approximation, as evidenced by the strong performance of BayCRL in our experimental results in Section 6. A.3 META-NML VISUALIZATIONS A.3.1 META-NML WITH ADDITIONAL GRADIENT STEPS Below, we show a more detailed visualization of meta-NML outputs on data from the Zigzag Maze task, and how these outputs change with additional gradient steps. For comparison, we also include the idealized NML rewards, which come from a discrete count-based classifier. Meta-NML is able to resemble the ideal NML rewards fairly well with just 1 gradient step, providing both an approximation of a count-based exploration bonus and better shaping towards the goal due to generalization. By taking additional gradient steps, meta-NML can get arbitrarily close to the true NML outputs, which themselves correspond to inverse counts of 1n+2 as explained in Theorem 4.1. While this would give us more accurate NML estimates, in practice we found that taking one gradient step was sufficient to achieve good performance on our RL tasks. A.3.2 COMPARISON OF REWARD CLASSIFIERS A.3.3 RUNTIME COMPARISONS Below provide the runtimes for feedforward inference, naive CNML, and meta-NML on each of our evaluation domains. We list both the runtimes for evaluating a single input, and for completing a full epoch of training during RL. These benchmarks were performed on an NVIDIA Titan X Pascal GPU. Per-input runtimes are averaged across 100 samples, and per-epoch runtimes are averaged across 20 epochs. A.4 EXPERIMENTAL DETAILS A.4.1 ENVIRONMENTS Zigzag Maze and Spiral Maze: These two navigation tasks require moving through long corridors and avoiding several local optima in order to reach the goal. For example, on Spiral Maze, the agent must not get stuck on the other side of the inner wall, even though that position would be close in L2 distance to the desired goal. On these tasks, a sparse reward is not informative enough for learning, while ordinary classifier methods get stuck in local optima due to poor shaping near the goal. Both of these environments have a continuous state space consisting of the (x, y) coordinates of the agent, ranging from (−4,−4) to (4, 4) inclusive. The action space is the desired velocity in the x and y directions, each ranging from −1 to 1 inclusive. 
Sawyer 2D Pusher: This task involves using a Sawyer arm, constrained to move only in the xy plane, to push a randomly initialized puck to a fixed location on a table. The state space consists of the (x, y, z) coordinates of the robot end effector and the (x, y) coordinates of the puck. The action space is the desired x and y velocities of the arm.

Sawyer Door Opening: In this task, the Sawyer arm is attached to a hook, which it must use to open a door to a desired angle of 45 degrees. The door is randomly initialized each time to be at a starting angle between 0 and 15 degrees. The state space consists of the (x, y, z) coordinates of the end effector and the door angle (in radians); the action space consists of (x, y, z) velocities.

Sawyer 3D Pick and Place: The Sawyer robot must pick up a ball, which is randomly placed somewhere on the table each time, and raise it to a fixed (x, y, z) location high above the table. This represents the biggest exploration challenge out of all the manipulation tasks, as the state space is large and the agent would normally not receive any learning signal unless it happened to pick up the ball and raise it, which is unlikely without careful reward shaping. The state space consists of the (x, y, z) coordinates of the end effector, the (x, y, z) coordinates of the ball, and the tightness of the gripper (a continuous value between 0 and 1). The robot can control its (x, y, z) arm velocity as well as the gripper value.

A.4.2 GROUND TRUTH DISTANCE METRICS

In addition to the success rate plots in Figure 5, we provide plots of each algorithm's distance to the goal over time according to environment-specific distance metrics. The distance metrics and success thresholds, which were used to compute the success rates in Figure 5, are listed in the table below.

A.5 ADDITIONAL ABLATIONS

A.5.1 LEARNING IN A DISCRETE, RANDOMIZED ENVIRONMENT

In practice, many continuous RL environments such as the ones we consider in Section 6 have state spaces that are correlated at least roughly with the dynamics. For instance, states that are closer together dynamically are also typically closer in the metric space defined by the states. This correlation does not need to be perfect, but as long as it exists, BayCRL can in principle learn a smoothly shaped reward towards the goal.

However, even in the case where states are unstructured and completely lack identity, such as in a discrete gridworld environment, the CNML classifier would still reduce to providing an exploration-centric reward bonus, as indicated by Theorem 4.1, ensuring reasonable worst-case performance. To demonstrate this, we evaluate BayCRL on a variant of the Zigzag Maze task where states are first discretized to a 16 × 16 grid, then "shuffled" so that the xy representation of a state does not correspond to its true coordinates and the states are not correlated dynamically. BayCRL manages to solve the task, while a standard classifier method (VICE) does not. Still, BayCRL is more effective in the original state space where generalization is possible, suggesting that both the exploration and reward shaping abilities of the CNML classifier are crucial to its overall performance.

A.5.2 FINDING "HIDDEN" REWARDS NOT INDICATED BY SUCCESS EXAMPLES

The intended setup for BayCRL (and classifier-based RL algorithms in general) is to provide a set of success examples to learn from, thus removing the need for a manually specified reward function.
However, here we instead consider the case where a ground truth reward function exists which we do not fully know, and can only query through interaction with the environment. In this case, because the human expert has limited knowledge, the provided success examples may not cover all regions of the state space with high reward. An additional advantage of BayCRL is that it is still capable of finding these "unspecified" goals because of its built-in exploration behavior, whereas other classifier methods would operate solely based on the goal examples provided.

To see this, we evaluate our algorithm on a two-sided variant of the Zigzag Maze with multiple goals, visualized in Figure 17 to the right. The agent starts in the middle and is provided with 5 goal examples on the far left side of the maze; unknown to it, the right side contains 5 sparse reward regions which are actually closer to its initial position. As shown in Figures 18 and 19, BayCRL manages to find the sparse rewards while other methods do not. BayCRL, although initially guided towards the provided goal examples on the left, continues to explore in both directions and eventually finds the "hidden" rewards on the right. Meanwhile, VICE focuses solely on the provided goals, and gets stuck in a local optimum near the bottom left corner.

A.6 HYPERPARAMETER AND IMPLEMENTATION DETAILS

We describe the hyperparameter choices and implementation details for our experiments here. We first list the general hyperparameters that were shared across runs, then provide tables of additional hyperparameters we tuned over for each domain and algorithm.

Goal Examples: For the classifier-based methods in our experiments (VICE and BayCRL), we provide 150 goal examples for each environment at the start of training. These are used as the pool of positive examples when training the success classifier.

DDL Reward: We use the version of DDL proposed in Hartikainen et al. (2019) where we provide the algorithm with the ground truth goal state $g$, then run SAC with a reward function of $r(s) = -d_\pi(s, g)$, where $d_\pi$ is the learned dynamical distance function for the policy at the current iteration of training.

A.6.1 ZIGZAG MAZE HYPERPARAMETERS

A.6.2 SPIRAL MAZE HYPERPARAMETERS

A.6.3 SAWYER PUSH HYPERPARAMETERS

A.6.4 SAWYER PICK-AND-PLACE HYPERPARAMETERS

A.6.5 SAWYER DOOR OPENING HYPERPARAMETERS

A.7 PROOF OF THEOREM 1 CONNECTING NML AND INVERSE COUNTS

We provide the proof of Theorem 1 here for completeness.

Theorem A.1. Suppose we are estimating success probabilities $p(e = 1|s)$ in the tabular setting, where we have a separate parameter for each state. Let $N(s)$ denote the number of times state $s$ has been visited by the policy, and let $G(s)$ be the number of occurrences of state $s$ in the successful outcomes. Then the CNML probability $p_{\text{CNML}}(e = 1|s)$ is equal to $\frac{G(s)+1}{N(s)+G(s)+2}$. For states that are never observed to be successful, we then recover inverse counts $\frac{1}{N(s)+2}$.

Proof. In the fully tabular setting, our MLE estimates for $p(e = 1|s)$ are simply given by finding the best parameter $p_s$ for each state. The proof then proceeds by simple calculation. For a state with $n = N(s)$ negative occurrences and $g = G(s)$ positive occurrences, the MLE estimate is simply given by $\frac{g}{n+g}$. Now for evaluating CNML, we consider appending another instance for each class. The new parameter after appending a negative example is then $\frac{g}{n+g+1}$, which assigns probability $\frac{n+1}{n+g+1}$ to the negative class.
Similarly, after appending a positive example, the new parameter is $\frac{g+1}{n+g+1}$, so we assign probability $\frac{g+1}{n+g+1}$ to the positive class. Normalizing, we have

$$p_{\text{CNML}}(e = 1|s) = \frac{g+1}{n+g+2}. \quad (9)$$

When considering states that have only been visited on-policy, and are not included in the set of successful outcomes, the likelihood reduces to

$$p_{\text{CNML}}(e = 1|s) = \frac{1}{n+2}. \quad (10)$$
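To make the counting argument concrete, the following snippet (a minimal sketch of our own, not taken from the paper's released code) brute-forces the two augmented MLE problems for a single tabular state and checks the result against the closed forms in Equations 9 and 10.

```python
def tabular_cnml_success_prob(n_negative, g_positive):
    """Brute-force CNML for a single state with a Bernoulli success parameter.

    n_negative: number of on-policy (negative) visits N(s)
    g_positive: number of occurrences among the success examples G(s)
    """
    # MLE of the success parameter after appending a hypothetical positive label,
    # evaluated on that positive label
    p_pos = (g_positive + 1) / (n_negative + g_positive + 1)
    # MLE after appending a hypothetical negative label, evaluated on the negative label
    p_neg = (n_negative + 1) / (n_negative + g_positive + 1)
    # Normalize the two augmented-MLE likelihoods (Equation 1 in the tabular case)
    return p_pos / (p_pos + p_neg)

# Closed form from Theorem 4.1 / Equation 9: (G(s) + 1) / (N(s) + G(s) + 2)
for n, g in [(0, 0), (5, 0), (5, 2), (100, 0)]:
    assert abs(tabular_cnml_success_prob(n, g) - (g + 1) / (n + g + 2)) < 1e-12

# With G(s) = 0 this is the inverse count 1 / (N(s) + 2) of Equation 10.
print(tabular_cnml_success_prob(5, 0))  # 1/7 ≈ 0.1429
```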
1. What is the focus of the paper regarding reinforcement learning and inverse RL? 2. What is the novel approach proposed by the authors in addressing the problem? 3. What are the strengths and weaknesses of the paper, particularly in the motivation, methodology, and experimental results? 4. How does the reviewer assess the originality and quality of the paper's content? 5. Are there any concerns or suggestions for improving the paper, such as elaborating on certain aspects, providing more explanations, or conducting additional experiments?
Review
Review Summary

This paper addresses a reinforcement learning problem where the reward function is learned through a classifier that decides whether states are successful or not based on previous examples (i.e. RL after inverse RL). The authors show that this requires uncertainty-aware predictions, which are difficult with neural networks. An algorithm, BayCRL, is proposed that uses MAML to meta-learn the conditional normalized maximum likelihood, i.e. the "maximum likelihood distribution". Connections between the proposed algorithm and exploration methods are discussed before using the algorithm to solve various robotics tasks.

Decision

Although I liked this paper overall, I am rating it tentatively as marginal below the acceptance threshold. The paper is very well written and addresses a relatively clear problem (inverse RL with a classifier) with an interesting method (meta-learning CNML). I have some issues with unclear statements in the motivation and method that should be addressed. While the experiments provide some insight, I think the conclusions the authors draw from them are far stronger than the results imply.

Originality

I am not very familiar with CNML, but this paper seems very original. In particular, the application of meta-learning to conditional normalized maximum likelihood seems novel, as well as its application to inverse RL.

Quality and Clarity

The paper is well-written; most statements are clear and easy to follow.

Strengths

The approach of meta-learning CNML is interesting, and if anything deserves further analysis outside of inverse RL / RL. Although I have some issues with the motivation (see below), I think the authors do a good job of explaining their rationale for using CNML. In particular, quantifying neural network uncertainty through posterior analysis is quite difficult. The experiments seem comprehensive. However, I am not very familiar with the environment suite used. Due to the lack of work in this area, there are not many possible baselines, and so the VICE baseline seems like the best choice. The baseline is further kept fair by adding exploration heuristics. I especially like Section 6.2 that analyzes BayCRL on the zigzag maze task, which I assume is a tabular environment. In addition, there is a simple ablation but I would prefer more work on this.

Weaknesses

In Section 4, some motivating statements are unclear to me (see Detailed Comments). A single data-point being added to the dataset may not change the distribution much, and this crucial point is only addressed (in an ad-hoc way) in the appendix. It would be good to see an ablation study on this, or perhaps a plot of the average difference between different query points. Perhaps Figure 6 may indirectly explain this, but it should be explicitly addressed. In Section 6, many statements about BayCRL seem stronger than the results imply. Many confidence intervals overlap, and the results themselves are hard to parse with so many lines in each plot. Perhaps some of these baselines which are not competitive can be excluded, or perhaps different linestyles should be used.

Detailed comments

Early mentions of exploration (i.e. in the abstract) seem out of place. While you elaborate on the connection to exploration methods, it does not seem like the main point of this paper is to address exploration. It seems to me that you are tackling a novel RL problem where the task is specified through goal states. Specifically, you learn a classifier in place of a reward function, and this is exploited to shape an otherwise sparse reward.
Section 4.1, "To create effective shaping, we need to impose a prior on our classifier so that it provides a more informative reward when evaluated at rarely vis- ited states that lie on the path to successful outcomes." Why is a prior strictly necessary for reward shaping? Unless you mean prior in a very general sense, not a Bayesian prior, I don't see why a prior is strictly necessary. Section 4.1, "[CNML].. is essentially imposing a uniform prior over the space of possible outcomes". This is not obvious to me, and perhaps further explanation is needed. Section 4.2, Theorem 4.1: Perhaps I misunderstand but if G ( s ) > 0 , shouldn't p ( e = 1 | s ) = 1 ? Further, why is that when the agent visit a successful state, i.e. N ( s ) increases, p ( e = 1 | s ) decreases after each visit. Section 5.1, "This algorithm, which we call meta-NML, allows us to obtain normalized likelihood estimates without having to retrain maximum likelihood to convergence at every single query point, since the model can now solve maximum likelihood problems of this form very quickly. " Is it correct to say that meta-NML does not need to retrain to convergence at every query point? The second part of the sentence elaborates that metatraining allows you to solve the problem very quickly, but it seems that you still need to solve it at every query point. Section 6.2, Figure 4: I don't think its fair to say that BayCLR performs substantially better. The confidence intervals overlap in all but Spiral Maze and Sawyer 3d Pick-and-Place. Other subtleties are not addressed, such as why RND/count-bonus actually hurt VICE in sawyer 2d push. Other statements such as "significantly more efficiently" need explanation as well. Section 6.3, Figure 5: for reproducibility, you should include exactly how many gradient steps are used in the model without meta-learning. Section 6.4, Figure 6: I'm unfamiliar with the environment being used, so some additional details explaining what z means would be helpful. "Furthermore, meta-NML is able to reasonably approximate the idealized NML rewards with just one gradient step…" How is this shown in Figure 6? I don't see anything showing the idealized NML rewards. Minor Comments Section 4, Line 3: missing space: "ples.For example," Post Rebuttal After reading the comments by the other reviewers, I have decided to keep my score at a 5. The authors reply, and the updated manuscript, helped my understanding of the paper. I was considering raising my score, however, the reviewers were nearly unanimous in their confusion regarding the framework or application of CNML. For future iterations of the paper, I suggest that the authors describe CNML, event-based control and their connection more explicitly. If the main contribution is using CNML as the classifier in event-based control, then it would also help to conduct experiments on meta-learning CNML in a supervised learning setting to further elucidate its effectiveness in the reinforcement learning application. I think your paper is very interesting, and I hope that the authors are able to use this feedback to improve their paper.
ICLR
Title Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples Abstract Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to classify states as successful or not. We argue that, with appropriate representation and regularization, such a classifier can guide a reinforcement learning algorithm to an effective solution. However, as we will show, this requires the classifier to make uncertainty-aware predictions that are very difficult with standard deep networks. To address this, we propose a novel mechanism for obtaining calibrated uncertainty based on an amortized technique for computing the normalized maximum likelihood distribution. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions from data, while also being able to guide algorithms towards the specified goal more effectively. We show how using amortized normalized maximum likelihood for reward inference is able to provide effective reward guidance for solving a number of challenging navigation and robotic manipulation tasks which prove difficult for other algorithms. 1 INTRODUCTION While reinforcement learning (RL) has been shown to successfully solve problems with careful reward design (Rajeswaran et al., 2018; OpenAI et al., 2019), RL in its most general form, with no assumptions on the dynamics or reward function, requires solving a challenging uninformed search problem in which rewards are sparsely observed. Techniques which explicitly provide “rewardshaping” (Ng et al., 1999), or modify the reward function to guide learning, can help take some of the burden off of exploration, but shaped rewards can be difficult to obtain without domain knowledge. In this paper, we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective. Instead of requiring a reward function designed for an objective, our method instead assumes a user-provided set of successful outcome examples: states in which the desired task has been accomplished successfully. The algorithm aims to estimate the distribution over these states and maximize the probability of reaching states that are likely under the distribution. Prior work on learning from success examples (Fu et al., 2018b; Zhu et al., 2020) focused primarily on alleviating the need for manual reward design. In our work, we focus on the potential for this mode of task specification to produce more tractable RL problems and solve more challenging classes of tasks. Intuitively, when provided with explicit examples of successful states, the RL algorithm should be able to direct its exploration, rather than simply hope to randomly chance upon high reward states. The main challenge in instantiating this idea into a practical algorithm is performing appropriate uncertainty quantification in estimating whether a given state corresponds to a successful outcome. 
Our approach trains a classifier to distinguish successful states, provided by the user, from those generated by the current policy, analogously to generative adversarial networks (Goodfellow et al., 2014) and previously proposed methods for inverse reinforcement learning (Fu et al., 2018a). In general, such a classifier is not guaranteed to provide a good optimization landscape for learning the policy. We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood (NML) distribution produces better reward guidance for learning. We also connect our approach to count-based exploration methods, showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states, while also discussing how it improves over count-based exploration in the presence of good generalization. We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML, and show how this form of reward inference allows us to solve difficult problems more efficiently, providing experimental results which outperform existing algorithms on a number of navigation and robotic manipulation domains. 2 RELATED WORK A number of techniques have been proposed to improve exploration.These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner (Wiering and Schmidhuber, 1998; Auer et al., 2002; Schaul et al., 2011; Houthooft et al., 2016; Pathak et al., 2017; Tang et al., 2017; Stadie et al., 2015; Bellemare et al., 2016; Burda et al., 2018a; O’Donoghue, 2018) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions (Strens, 2000; Osband et al., 2013; 2016). While these techniques are uninformed about the actual task, we consider a constrained set of problems where examples of successes can allow for more task-directed exploration. In real world problems, designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge (Andrychowicz et al., 2020), access to privileged information about the environment (Levine et al., 2016) and/or a human in the loop providing rewards (Knox and Stone, 2009; Singh et al., 2019b). Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL (Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Fu et al., 2018a). This requires expensive expert demonstrations to be provided to the agent. In contrast, our work has the minimal requirement of simply providing successful outcome states, which can be done cheaply and more intuitively. This subclass of problems is also related to goal conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Zhu et al., 2017; Andrychowicz et al., 2017; Nair et al., 2018; Veeriah et al., 2018; Rauber et al., 2018; Warde-Farley et al., 2018; Colas et al., 2019; Ghosh et al., 2019; Pong et al., 2020) but is more general, since it allows for a more abstract notion of task success. A core idea behind our work is using a Bayesian classifier to learn a suitable reward function. Bayesian inference with expressive models and high dimensional data can often be intractable, requiring assumptions on the form of the posterior (Hoffman et al., 2013; Blundell et al., 2015; Maddox et al., 2019). 
In this work, we build on the concept of normalized maximum likelihood (Rissanen, 1996; Shtar'kov, 1987), or NML, to learn Bayesian classifiers. Although NML is typically considered from the perspective of optimal coding (Grünwald, 2007; Fogel and Feder, 2018), we show how it can be used to learn success classifiers, and discuss its connections to exploration and reward shaping in RL.

3 PRELIMINARIES

In this paper, we study a modified reinforcement learning problem, where instead of the standard reward function, the agent is provided with successful outcome examples. This reformulation not only provides a modality for task specification that may be more natural for users to provide in some settings (Fu et al., 2018b; Zhu et al., 2020; Singh et al., 2019a), but, as we will show, can also make learning easier. We also derive a meta-learned variant of the conditional normalized maximum likelihood (CNML) distribution for representing our reward function, in order to make evaluation tractable. We discuss background on successful outcome examples and CNML in this section.

3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES

We follow the framework proposed by Fu et al. (2018b) and assume that we are provided with a Markov decision process (MDP) without a reward function, given by $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{T}, \gamma, \mu_0)$, as well as successful outcome examples $S_+ = \{s_k^+\}_{k=1}^{K}$, which is a set of states in which the desired task has been accomplished. This formalism is easiest to describe in terms of the control as inference framework (Levine, 2018). The relevant graphical model in Figure 9 consists of states and actions, as well as binary success variables $e_t$ which represent the occurrence of a particular event. The agent's objective is to cause this event to occur (e.g., a robot that is cleaning the floor must cause the "floor is clean" event to occur). Formally, we assume that the states in $S_+$ are sampled from the distribution $p(s_t|e_t = \text{True})$ – that is, states where the desired event has taken place. In this work, we focus on efficient methods for solving this reformulation of the RL problem, by utilizing a novel uncertainty quantification method to represent the distribution $p(e_t|s_t)$. In practice, prior methods that build on this and similar reformulations of the RL problem (Fu et al., 2018b) derive an algorithm where the reward function in RL is produced by a classifier that estimates $p(e_t = \text{True}|s_t)$. Following the adversarial inverse reinforcement learning (AIRL) derivation (Fu et al., 2018a; Finn et al., 2016), it is possible to show that the correct source of negative examples for training this classifier is the state distribution of the policy itself, $\pi(s)$. This insight results in a simple algorithm: at each iteration of the algorithm, the policy is updated to maximize the current reward, given by $\log p(e_t = \text{True}|s_t)$, then samples from the policy are added to the set of negative examples $S_-$, and the classifier is retrained on the original positive set $S_+$ and the updated negative set $S_-$.

3.2 CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD

Our method builds on the principle of conditional normalized maximum likelihood (CNML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018), which we review briefly. CNML is a method for performing k-way classification, given a model class $\Theta$ and a dataset $D = \{(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)\}$, and has been shown to provide better calibrated predictions and uncertainty estimates with minimax regret guarantees (Bibas et al., 2019).
To predict the class of a query point $x_q$, CNML constructs k augmented datasets by adding $x_q$ with a different label in each dataset, which we write as $D \cup (x_q, y = i)$, $i \in (1, 2, ..., k)$. CNML then defines the class distribution by solving the maximum likelihood estimation problem at query time for each of these augmented datasets to convergence, and normalizing the likelihoods as follows:

$$p_{\text{CNML}}(y = i|x_q) = \frac{p_{\theta_i}(y = i|x_q)}{\sum_{j=1}^{k} p_{\theta_j}(y = j|x_q)}, \qquad \theta_i = \arg\max_{\theta \in \Theta} \; \mathbb{E}_{(x,y) \sim D \cup (x_q, y=i)}[\log p_\theta(y|x)] \quad (1)$$

Intuitively, if $x_q$ is close to other datapoints in $D$, then the model will struggle to assign a high likelihood to labels that differ substantially from other nearby points. However, if $x_q$ is far from all datapoints in $D$, then the different augmented MLE problems can easily classify $x_q$ as an arbitrary class, providing us with a likelihood closer to uniform. We refer readers to Grünwald (2007) for an in-depth discussion. A major limitation of CNML is that it requires training an entire neural network to convergence on the entire augmented dataset every time we want to evaluate a test point's class probabilities. We will address this issue in Section 5.

4 BAYESIAN SUCCESS CLASSIFIERS FOR REWARD INFERENCE

Ideally, training a classifier with the policy samples as negative examples as described in Section 3.1 should yield a smooth decision boundary between the well-separated negative and positive examples. For example, Figure 2 depicts a simple 1-D scenario, where the agent starts at the left (s0) and the positive outcomes are at the right (s+) side of the environment. Since the positives are on the right and the negatives are on the left, one might expect a classifier to gradually increase its prediction of a success as we move to the right (Figure 2a), which would provide a dense reward signal for the policy to move to the right. However, this idealized scenario rarely happens in practice. Without suitable regularization, the decision boundary between the positive and negative examples may not be smooth. In fact, the decision boundary of an optimal classifier may take on the form of a sharp boundary anywhere between the positive and negative examples in the early stages of training (Figure 2b). As a result, the classifier might provide little to no reward signal for the policy, since it can assign arbitrarily small probabilities to the states sampled from the policy. We note that this issue is not pathological: our experiments in Section 6 show that this poor reward signal issue happens in practice and can greatly hinder learning. In this section, we will discuss how an appropriate classifier training method can avoid these uninformative rewards.

4.1 REGULARIZED SUCCESS CLASSIFIERS VIA NORMALIZED MAXIMUM LIKELIHOOD

Algorithm 1 RL with CNML-Based Success Classifiers
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4:   Add on-policy examples to S− by executing π.
5:   Sample ntest points from S+ (label 1) and ntest points from S− (label 0) to construct a dataset D
6:   Assign state rewards as r(s) = pCNML(e = 1|s, D)
7:   Train π with RL algorithm

To create effective shaping, we would like our classifier to provide a more informative reward when evaluated at rarely visited states that lie on the path to successful outcomes.
A more informative reward function is one that assigns higher rewards to the fringe of the states visited by the policy, because this will encourage the policy to explore and move towards the desired states. We can construct such a reward function by imposing the prior that novel states have a non-negligible chance of being a success state. To do so, we train a Bayesian classifier using conditional normalized maximum likelihood (CNML) (Shtar'kov, 1987), as we described in Section 3, which corresponds to imposing a uniform prior on the output class probabilities.

To use CNML for reward inference, the procedure is similar to the one described in Section 3. We construct a dataset using the provided successful outcomes as positives and the on-policy samples as negatives. However, the label probabilities for RL are then produced by the CNML procedure described in Equation 1 to obtain the rewards $r(s) = p_{\text{CNML}}(e = 1|s)$. To illustrate how this affects reward assignment during learning, we visualize a potential assignment of rewards with a CNML-based classifier on the problem described earlier. When the success classifier is trained with CNML instead of standard maximum likelihood, intermediate unseen states would receive non-zero rewards rather than simply having vanishing likelihoods like in Figure 2b. The didactic illustrations in Fig 2c and Fig 2d show how the rewards obtained via NML might incentivize exploration. In fact, the CNML likelihood corresponds to a form of count-based exploration (as we show below), while also providing more directed shaping towards the goal when generalization exists across states.

4.2 RELATIONSHIP TO COUNT-BASED EXPLORATION

In this section we relate the success likelihoods obtained via CNML to commonly used exploration methods based on counts. Formally, we prove that the success classifier trained with CNML is equivalent to a version of count-based exploration with a sparse reward function in the absence of any generalization across states (i.e., a fully tabular setting).

Theorem 4.1. Suppose we are estimating success probabilities $p(e = 1|s)$ in the tabular setting, where we have an independent parameter for each state. Let $N(s)$ denote the number of times state $s$ has been visited by the policy, and let $G(s)$ be the number of occurrences of state $s$ in the set of goal examples. Then the CNML success probability $p_{\text{CNML}}(e = 1|s)$ is equal to $\frac{G(s)+1}{N(s)+G(s)+2}$. For states that are not represented in the goal examples, i.e. $G(s) = 0$, we then recover inverse counts $\frac{1}{N(s)+2}$.

Refer to Appendix A.7 for a full proof.

4.3 REWARD SHAPING WITH BAYESIAN SUCCESS CLASSIFIERS

While the analysis above suggests that a CNML classifier would give us something akin to a sparse reward plus an exploration bonus, the structure of the problem and the state space actually provide us with more information to guide us towards the goal. In most environments (Brockman et al., 2016; Yu et al., 2019) the state space does not consist of independent and uncorrelated categorical variables, and is instead provided in a representation that relates at least roughly to the dynamics structure in the environment. For instance, states close to the goal dynamically are also typically close to the goal in the metric space defined by the states. Indeed, this observation is the basis of many commonly used heuristic reward shaping methods, such as rewards given by Euclidean distance to target states.
In this case, the task specification can actually provide more information than simply performing uninformed count-based exploration. Since the uncertainty-aware classifier described in Section 4.1 is built on top of features that are correlated with environment dynamics, and is trained with knowledge of the desired outcomes, it is able to incentivize task-aware directed exploration. As compared to CNML without generalization in Fig 2c, we expect the intermediate rewards to provide more shaping towards the goal. This phenomenon is illustrated intuitively in Fig 2d, and visualized and demonstrated empirically in our experimental analysis in Section 6, where BayCRL is able to significantly outperform methods for task-agnostic exploration. 4.4 OVERVIEW In this section, we introduced the idea of Bayesian classifiers trained via CNML as a means to provide rewards for RL problems specified by examples of successful outcomes. Concretely, a CNML-based scheme has the following advantages: • Natural exploration behavior due to accurate uncertainty estimation in the output success probabilities. This is explained by the connection between CNML and count-based exploration in the discrete case, and benefits from additional generalization in practical environments, as we will see in Section 6. • Better reward shaping by utilizing goal examples to guide the agent more quickly and accurately towards the goal. We have established this benefit intuitively, and will validate it empirically through extensive visualizations and experiments in Section 6. 5 BAYCRL: TRAINING BAYESIAN SUCCESS CLASSIFIERS FOR OUTCOME DRIVEN RL VIA META-LEARNING AND CNML In Section 4, we discussed how Bayesian success classifiers can incentivize exploration and provide reward shaping to guide RL. However, the reward inference technique via CNML described in Section 4.1 is computationally intractable, as it requires optimizing maximum likelihood estimation problems to convergence on every data point we want to query. In this section, we describe a novel approximation that allows us to instantiate this method in practice. 5.1 META-LEARNING FOR CNML We adopt ideas from meta-learning to amortize the cost of obtaining the CNML distribution. As noted in Section 4.1, the computation of the CNML distribution involves repeatedly solving maximum likelihood problems. While computationally daunting, these problems share a significant amount of common structure, which we can exploit to quickly obtain CNML estimates. One set of techniques that are directly applicable is meta-learning for few shot classification. Meta-learning uses a distribution of training problems to explicitly learn models that can quickly adapt to new problems. To apply meta-learning to the CNML problem, we can formulate each of the maximum likelihood problems described in Equation 1 as a separate task for meta-learning, and apply any standard meta-learning technique to obtain a model capable of few-shot adaptation to the MLE problems required for CNML. While any meta-learning algorithm is applicable, we found model agnostic meta-learning (MAML)(Finn et al. (2017)) to be an effective choice of algorithm. In short, MAML tries to learn a model that can quickly adapt to new tasks via a few steps of gradient descent. 
This procedure is illustrated in Fig 10, and can be described as follows: given a dataset $D = \{(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)\}$, 2n different tasks $\tau_i$ can be constructed, each corresponding to performing maximum likelihood estimation on the dataset with a certain proposed label for $x_i$: $\max_\theta \mathbb{E}_{(x,y) \sim D \cup (x_i, y=0)}[\log p(y|x, \theta)]$ or $\max_\theta \mathbb{E}_{(x,y) \sim D \cup (x_i, y=1)}[\log p(y|x, \theta)]$. Given these constructed tasks $S(\tau)$, meta-training proceeds as described in Finn et al. (2017):

$$\max_\theta \; \mathbb{E}_{\tau \sim S(\tau)}[\mathcal{L}(\tau, \theta')], \quad \text{s.t. } \theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\tau, \theta). \quad (2)$$

This training procedure gives us parameters θ that can then be quickly adapted to provide the CNML distribution simply by performing a step of gradient descent. The model can be queried for the CNML distribution by starting from θ and taking one step of gradient descent on the query-point-augmented dataset, once for each potential label. These likelihoods are then normalized to provide the CNML distribution as follows:

$$p_{\text{meta-NML}}(y|x; D) = \frac{p_{\theta_y}(y|x)}{\sum_{y' \in \mathcal{Y}} p_{\theta_{y'}}(y'|x)}, \qquad \theta_y = \theta - \alpha \nabla_\theta \mathbb{E}_{(x_i, y_i) \sim D \cup (x, y)}[\mathcal{L}(x_i, y_i, \theta)]. \quad (3)$$

This algorithm, which we call meta-NML, allows us to obtain normalized likelihood estimates without having to retrain maximum likelihood to convergence at every single query point, since the model can now solve maximum likelihood problems of this form very quickly. A complete detailed description and pseudocode of this algorithm are provided in Appendix A.2.

Runtime comparison:
                       Feedforward    Meta-NML    Naive CNML
Single input point     0.0004s        0.0090s     15.19s
Epoch of RL            23.50s         39.05s      4hr 13min 34s

This makes it several orders of magnitude faster than naive CNML, which would normally require multiple passes through the entire dataset on each input point in order to train to convergence.

5.2 APPLYING META-NML TO SUCCESS CLASSIFICATION

Algorithm 2 BayCRL: Bayesian Classifiers for RL
1: User provides success examples S+
2: Initialize policy π, replay buffer S−, and reward classifier parameters θR
3: for iteration i = 1, 2, ... do
4:   Collect on-policy examples to add to S− by executing π.
5:   if iteration i mod k == 0 then
6:     Sample ntrain states from S− to create 2ntrain meta-training tasks
7:     Sample ntest total test points equally from S+ (label 1) and S− (label 0)
8:     Meta-train θR via meta-NML using Equation 2
9:   Assign state rewards via Equation 4
10:  Train π with RL algorithm

We apply the meta-NML algorithm described above to learning Bayesian success classifiers for providing rewards for reinforcement learning, in our proposed algorithm, which we term BayCRL (Bayesian classifiers for reinforcement learning). Similarly to Fu et al. (2018b), we can train our Bayesian classifier by first constructing a dataset D for binary classification. This is done by using the provided examples of successful outcomes as positives, and on-policy examples collected by the policy as negatives, balancing the number of sampled positives and negatives in the dataset. Given this dataset, the Bayesian classifier parameters $\theta_R$ can be trained via meta-NML as described in Equation 2. The classifier can then be used to directly and quickly assign rewards to a state $s$ according to its probabilities $r(s) = p_{\text{meta-NML}}(e = 1|s)$ (via a step of gradient descent, as described in Equation 4), and perform standard reinforcement learning.

$$p_{\text{meta-NML}}(e = 1|s) = \frac{p_{\theta_1}(e = 1|s)}{\sum_{i \in \{0,1\}} p_{\theta_i}(e = i|s)} \quad (4)$$

$$\theta_i = \theta_R - \alpha \nabla_\theta \mathbb{E}_{(s_j, e_j) \sim D \cup (s, e=i)}[\mathcal{L}(e_j, s_j, \theta)], \quad \text{for } i \in \{0, 1\} \quad (5)$$

An overview of this algorithm is provided in Algorithm 2, and full details are in Appendix A.2.
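As a rough illustration of the query step in Equations 4–5, the sketch below adapts a meta-learned initialization with a single manual gradient step per proposed label and normalizes the resulting likelihoods. It assumes a functional-style PyTorch model (`model_fn(params, inputs)` returning binary logits, with `meta_params` requiring gradients) and omits the importance-weighting and kernel-weighting tricks of Appendix A.2; all names are illustrative rather than taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def meta_nml_reward(meta_params, model_fn, dataset_x, dataset_y, s, alpha=0.1):
    """One-gradient-step approximation of Equations 4-5.

    meta_params: list of meta-learned tensors (theta_R), each with requires_grad=True
    model_fn:    function mapping (params, inputs) -> logits of shape [batch, 2]
    dataset_x:   float tensor of states in D, shape [N, state_dim]
    dataset_y:   long tensor of binary labels in D, shape [N]
    s:           query state, float tensor of shape [state_dim]
    """
    likelihoods = []
    for proposed_label in (0, 1):
        # Augment D with the query state and the proposed label e = i
        x_aug = torch.cat([dataset_x, s.unsqueeze(0)], dim=0)
        y_aug = torch.cat([dataset_y, torch.tensor([proposed_label])], dim=0)
        # Inner-loop adaptation: a single gradient step from the meta-initialization
        loss = F.cross_entropy(model_fn(meta_params, x_aug), y_aug)
        grads = torch.autograd.grad(loss, meta_params)
        adapted = [p - alpha * g for p, g in zip(meta_params, grads)]
        # Likelihood of the proposed label under the adapted parameters
        probs = F.softmax(model_fn(adapted, s.unsqueeze(0)), dim=-1)
        likelihoods.append(probs[0, proposed_label])
    # Normalize the two adapted likelihoods (Equation 4); the e = 1 entry is the reward
    return (likelihoods[1] / (likelihoods[0] + likelihoods[1])).item()
```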
The rewards start off at an uninformative value of 0.5 for all unvisited states at the beginning, and close to 1 for successful outcomes. As training progresses, more states are visited, added to the buffer and BayCRL starts to assign them progressively lower reward as they get visited more and more, thereby encouraging visiting of under-visited states. At convergence, all the non successful states will have a reward of close to 0 and states at the goal will have a reward of 0.5, since the numbers of positive and negative labels for successful outcomes will be balanced as described above. 6 EXPERIMENTAL EVALUATION In our experimental evaluation we aim to answer the following questions: (1) Do the learning dynamics of prior classifier-based reward learning methods provide informative rewards for RL? (2) Does using BayCRL help address the exploration challenge when solving RL problems specified by successful outcomes? (3) Does using BayCRL help provide better reward shaping than simply performing naïvely uninformed exploration? To evaluate these questions, we evaluate our proposed algorithm BayCRL with the following setup. Further details and videos can be found at https://sites.google.com/view/baycrl/home 6.1 EXPERIMENTAL SETUP We start off by understanding the algorithm behavior by evaluating it on maze navigation problems, which require avoiding several local optima before truly reaching the goal. Then, to evaluate our method in more complex domains, we consider three robotic manipulation tasks that were previously covered in Singh et al. (2019a) with a Sawyer robot arm: door opening, tabletop object pushing, and 3D object picking. As we show in our results, exploration in these environments is challenging and using naively chosen reward shaping often does not solve the problem at hand. More details on each environment and their associated challenges are available in Appendix A.4.1. We compare with a number of prior algorithms and ablations. To provide a comparison with a standard previous method which uses success classifiers trained with an IRL-based adversarial method, we include the VICE algorithm (Fu et al., 2018b). Note that this algorithm is quite related to BayCRL, but it uses a standard maximum likelihood classifier rather than a Bayesian classifier trained with CNML and meta-learning. We also include a comparison with DDL, a recently proposed technique for learning dynamical distances (Hartikainen et al., 2019). We additionally include comparisons to algorithms for uninformed exploration to show that BayCRL does a more directed form of exploration and reward shaping. To provide an apples-to-apples comparison, we use the same VICE method for training classifiers, but combine it with novelty-based exploration based on random network distillation (Burda et al., 2018b) for the robotic manipulation tasks, and oracle inverse count bonuses for the maze navigation tasks. Finally, to demonstrate the importance of well-shaped rewards, we compare to running Soft Actor-Critic (Haarnoja et al., 2018), a standard RL algorithm for continuous domains, with two naive reward functions: a sparse reward at the goal, and a heuristically shaped reward which uses L2 distance to the goal state. More details on each algorithm and the hyperparameters used are included in Appendix A.6. 6.2 COMPARISONS WITH PRIOR ALGORITHMS We compare with prior algorithms on the domains described above. 
As we can see in Fig 5, BayCRL is able to very quickly learn how to solve these challenging exploration tasks, often reaching better asymptotic performance than most prior methods, and doing so more efficiently than VICE (Fu et al., 2018b) or DDL (Hartikainen et al., 2019). This suggests that BayCRL is able to provide directed reward shaping and exploration that is substantially better than standard classifier-based methods (e.g., VICE). To isolate whether the benefits purely come from exploration or also from task-aware reward shaping, we compare with methods that only perform uninformed, task-agnostic exploration. On the maze environments, where we can discretize the state space, we compute ground truth count-based bonuses for exploration. For the higher dimensional robotics tasks, we use RND (Burda et al., 2018b). From these comparisons, shown in Fig 5, it is clear that BayCRL significantly outperforms methods that use novelty-seeking exploration but do not otherwise provide effective reward shaping. In combination with our visualizations in Section 6.4, this suggests that BayCRL is providing useful task-aware reward shaping more effectively than uninformed exploration methods. We also compare BayCRL to a manually designed, heuristically shaped reward function based on Euclidean distance. As shown in Fig 5, BayCRL generally outperforms simple manual shaping in terms of sample complexity and asymptotic performance, indicating that the learned shaping is non-trivial and adapted to the task.

6.3 ABLATIONS

We first evaluate the importance of meta-learning for estimating the NML distribution. In Figure 6, we see that naively estimating the NML distribution by taking a single gradient step and following the same process as evaluating meta-NML, but without any meta-training, results in much worse performance. Second, we analyze the importance of making the BayCRL classifier aware of the task being solved, to understand whether BayCRL is informed by the success examples or simply approximates count-based exploration. To that end, we modify the training procedure so that the dataset D consists of only the on-policy negatives, and add the inferred reward from the Bayesian classifier to the reward obtained by a standard MLE classifier (similarly to the VICE+RND baseline). We see that this performs poorly, showing that the BayCRL classifier is doing more than just performing count-based exploration, and benefits from better reward shaping due to the provided goal examples. Further ablations are available in Appendix A.5.

6.4 ANALYSIS OF BAYCRL

BayCRL and Reward Shaping. To better understand how BayCRL provides reward shaping, we visualize the rewards for various slices along the z axis on the Sawyer Pick task, an environment which presents a significant exploration challenge. In Fig 7 we see that the BayCRL rewards clearly correlate with the distance to the object's goal position, shown as a white star, thus guiding the robot to raise the ball to the desired location even if it has never reached the goal before. In contrast, the MLE classifier has a sharp, poorly-shaped decision boundary.

BayCRL and Exploration. Next, to illustrate the connection between BayCRL and exploration, we compare the states visited by BayCRL (which uses a meta-NML classifier) and by VICE (which uses a standard L2-regularized classifier) in Figure 8. We see that BayCRL naturally incentivizes the agent to visit novel states, allowing it to navigate around local minima and reach the true goal.
In contrast, VICE learns a misleading reward function that prioritizes closeness to the goal in xy space, causing the agent to stay on the wrong side of the wall. Interestingly, despite incentivizing exploration, BayCRL does not simply visit all possible states; at convergence, it has only covered around 70% of the state space. In fact, we see in the scatterplots in Figure 8 that BayCRL prioritizes states that bring it closer to the goal and ignores ones that don't, thus making use of the goal examples provided to it. This suggests that BayCRL benefits from a combination of novelty-seeking behavior and effective reward shaping, allowing it to choose new states strategically.

7 DISCUSSION

In this work, we consider a subclass of reinforcement learning problems where examples of successful outcomes specify the task. We analyze how solutions via standard success classifiers suffer from shortcomings, and how training Bayesian classifiers allows for better exploration to solve challenging problems. We discuss how the NML distribution can provide us with a way to train such Bayesian classifiers, providing the benefits of exploration and reward shaping. To make learning tractable, we propose a novel meta-learning approach to amortize the NML process. While this work has shown the effectiveness of Bayesian classifiers for reward inference for tasks in simulation, it would be interesting to scale this solution to real world problems. Additionally, obtaining a theoretical understanding of how reward shaping interacts with learning dynamics would be illuminating in designing reward schemes.

A APPENDIX

A.1 GRAPHICAL MODEL FOR CONTROL AS INFERENCE

A.2 DETAILED DESCRIPTION OF META-NML

We provide a detailed description of the meta-NML algorithm described in Section 5, and the details of the practical algorithm. Given a dataset $D = \{(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)\}$, the meta-NML procedure proceeds by first constructing $k \cdot n$ tasks from these data points, for a k-way classification problem. We will keep k = 2 for simplicity in this description, in accordance with the setup of binary success classifiers in RL. Each task $\tau_i$ is constructed by augmenting the dataset with a negative label $D \cup (x_i, y = 0)$ or a positive label $D \cup (x_i, y = 1)$. Now that each task consists of solving the maximum likelihood problem for its augmented dataset, we can directly apply standard meta-learning algorithms to this setting. Building off the ideas in MAML (Finn et al., 2017), we can then train a set of model parameters θ such that, after a single step of gradient descent, they can quickly adapt to the optimal solution for the MLE problem on any of the augmented datasets. This is more formally written as

$$\max_\theta \; \mathbb{E}_{\tau \sim S(\tau)}[\mathcal{L}(\tau, \theta')], \quad \text{s.t. } \theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\tau, \theta) \quad (6)$$

where $\mathcal{L}$ represents a standard classification loss function, α is the learning rate, and the distribution of tasks $p(\tau)$ is constructed as described above. For a new query point $x$, these initial parameters can then quickly be adapted to provide the CNML distribution by taking a gradient step on each augmented dataset to obtain the approximately optimal MLE solution, and normalizing these as follows:

$$p_{\text{meta-NML}}(y|x; D) = \frac{p_{\theta_y}(y|x)}{\sum_{y' \in \mathcal{Y}} p_{\theta_{y'}}(y'|x)}, \qquad \theta_y = \theta - \alpha \nabla_\theta \mathbb{E}_{(x_i, y_i) \sim D \cup (x, y)}[\mathcal{L}(x_i, y_i, \theta)]$$

This algorithm can in principle be optimized using any standard stochastic optimization method such as SGD, as described in Finn et al. (2017), backpropagating through the inner loop gradient update.
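For concreteness, the following sketch (our own illustration with hypothetical function names, not taken from the paper's code) shows how the 2n meta-training tasks described above could be constructed from a binary-labeled dataset; each task pairs the augmented dataset with the query point used for the inner-loop update in Equation 6.

```python
import random

def make_meta_nml_tasks(dataset):
    """Construct the 2n CNML meta-training tasks described above.

    dataset: list of (x, y) pairs with binary labels y in {0, 1}.
    Each task is the original dataset augmented with one point relabeled as 0 or 1;
    that point also serves as the task's query point for the inner-loop update.
    """
    tasks = []
    for x_i, _ in dataset:
        for proposed_label in (0, 1):
            augmented = list(dataset) + [(x_i, proposed_label)]
            tasks.append({"train_set": augmented, "query_point": (x_i, proposed_label)})
    return tasks

def sample_task_batch(tasks, meta_batch_size=16):
    """Sample a batch of tasks for one MAML-style outer update (Equation 6)."""
    return random.sample(tasks, min(meta_batch_size, len(tasks)))
```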
For the specific problem setting that we consider, we have to employ some optimization tricks in order to enable learning:

A.2.1 IMPORTANCE WEIGHTING ON QUERY POINT

Since only one datapoint is augmented to the training set at query time for CNML, it can be challenging for stochastic gradient descent to pay attention to this datapoint as dataset sizes increase. For example, if we train on an augmented dataset of size 2048 by cycling through it in batch sizes of 32, then only 1 in 64 batches would include the query point itself and allow the model to adapt to the proposed label, while the others would add noise to the optimization process, potentially worsening the model's prediction on the query point. In order to make sure the optimization considers the query point, we include the query point and proposed label $(x_q, y)$ in every minibatch that is sampled, but downweight the loss computed on that point such that the overall objective remains unbiased. This is simply importance weighting, with the query point downweighted by a factor of $\lceil \frac{N}{b-1} \rceil$, where $b$ is the desired batch size and $N$ is the total number of points in the original dataset.

To see why the optimization objective remains the same, we can consider the overall loss over the dataset. Let $f_\theta$ be our classifier, $\mathcal{L}$ be our loss function, $D' = \{(x_i, y_i)\}_{i=1}^{N} \cup (x_q, y)$ be our augmented dataset, and $B_k$ be the $k$-th batch seen during training. Using standard SGD training that cycles through batches in the dataset, the overall loss on the augmented dataset would be:

$$\mathcal{L}(D') = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)$$

If we instead included the downweighted query point in every batch, the overall loss would be:

$$\mathcal{L}(D') = \sum_{k=1}^{\lceil N/(b-1) \rceil} \left[ \sum_{(x_i, y_i) \in B_k} \mathcal{L}(f_\theta(x_i), y_i) + \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y) \right] = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \left\lceil \tfrac{N}{b-1} \right\rceil \cdot \frac{1}{\lceil N/(b-1) \rceil} \mathcal{L}(f_\theta(x_q), y) = \left( \sum_{i=1}^{N} \mathcal{L}(f_\theta(x_i), y_i) \right) + \mathcal{L}(f_\theta(x_q), y)$$

which is the same objective as before. This trick has the effect of still optimizing the same maximum likelihood problem required by CNML, but significantly reducing the variance of the query point predictions as we take additional gradient steps at query time. As a concrete example, consider querying a meta-CNML classifier on the input shown in Figure 11. If we adapt to the augmented dataset without including the query point in every batch (i.e. without importance weighting), the query point loss is significantly more unstable, requiring us to take more gradient steps to converge.

A.2.2 KERNEL WEIGHTED TRAINING LOSS

The augmented dataset consists of points from the original dataset $D$ and one augmented point $(x_q, y)$. Given that we mostly care about having the proper likelihood on the query point, and that the optimization process is imperfect, meta-training can yield solutions that do not accurately represent the true likelihoods on the query point. To counter this, we introduce a kernel weighting into the loss function in Equation 6 during meta-training and subsequently meta-testing. The kernel weighting modifies the training loss function as:

$$\max_\theta \; \mathbb{E}_{\tau \sim S(\tau)}\big[\mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta')\big], \quad \text{s.t. } \theta' = \theta - \alpha \nabla_\theta \mathbb{E}_{(x,y) \sim \tau} K(x, x_\tau) \mathcal{L}(x, y, \theta) \quad (7)$$

where $x_\tau$ is the query point for task $\tau$ and $K$ is a choice of kernel. We typically choose exponential kernels centered around $x_\tau$.
Intuitively, this allows the meta-optimization to mainly consider the datapoints that are copies of the query point in the dataset, or are similar to the query point, and ensures that they have the correct likelihoods, instead of receiving interfering gradient signals from the many other points in the dataset. To make hyperparameter selection intuitive, we designate the strength of the exponential kernel by a parameter $\lambda_{\text{dist}}$, which is the Euclidean distance away from the query point at which the weight becomes 0.1. Formally, the weight of a point $x$ in the loss function for query point $x_\tau$ is computed as:

$$K(x, x_\tau) = \exp\left\{ -\frac{2.3}{\lambda_{\text{dist}}} \, \|x - x_\tau\|_2 \right\} \quad (8)$$

A.2.3 META-TRAINING AT FIXED INTERVALS

While in principle meta-NML would retrain with every new datapoint, in practice we retrain meta-NML once every k epochs. (In all of our experiments we set k = 1, but we could optionally increase k if we do not expect the meta-task distribution to change much between epochs.) We warm-start the meta-learner parameters from the previous iteration of meta-learning, so every instance of meta-training only requires a few steps. We find that this periodic training is a reasonable approximation, as evidenced by the strong performance of BayCRL in our experimental results in Section 6.

A.3 META-NML VISUALIZATIONS

A.3.1 META-NML WITH ADDITIONAL GRADIENT STEPS

Below, we show a more detailed visualization of meta-NML outputs on data from the Zigzag Maze task, and how these outputs change with additional gradient steps. For comparison, we also include the idealized NML rewards, which come from a discrete count-based classifier. Meta-NML is able to resemble the ideal NML rewards fairly well with just 1 gradient step, providing both an approximation of a count-based exploration bonus and better shaping towards the goal due to generalization. By taking additional gradient steps, meta-NML can get arbitrarily close to the true NML outputs, which themselves correspond to inverse counts of $\frac{1}{n+2}$ as explained in Theorem 4.1. While this would give us more accurate NML estimates, in practice we found that taking one gradient step was sufficient to achieve good performance on our RL tasks.

A.3.2 COMPARISON OF REWARD CLASSIFIERS

A.3.3 RUNTIME COMPARISONS

Below we provide the runtimes for feedforward inference, naive CNML, and meta-NML on each of our evaluation domains. We list both the runtimes for evaluating a single input, and for completing a full epoch of training during RL. These benchmarks were performed on an NVIDIA Titan X Pascal GPU. Per-input runtimes are averaged across 100 samples, and per-epoch runtimes are averaged across 20 epochs.

A.4 EXPERIMENTAL DETAILS

A.4.1 ENVIRONMENTS

Zigzag Maze and Spiral Maze: These two navigation tasks require moving through long corridors and avoiding several local optima in order to reach the goal. For example, on Spiral Maze, the agent must not get stuck on the other side of the inner wall, even though that position would be close in L2 distance to the desired goal. On these tasks, a sparse reward is not informative enough for learning, while ordinary classifier methods get stuck in local optima due to poor shaping near the goal. Both of these environments have a continuous state space consisting of the (x, y) coordinates of the agent, ranging from (−4, −4) to (4, 4) inclusive. The action space is the desired velocity in the x and y directions, each ranging from −1 to 1 inclusive.
Sawyer 2D Pusher: This task involves using a Sawyer arm, constrained to move only in the xy plane, to push a randomly initialized puck to a fixed location on a table. The state space consists of the (x, y, z) coordinates of the robot end effector and the (x, y) coordinates of the puck. The action space is the desired x and y velocities of the arm.

Sawyer Door Opening: In this task, the Sawyer arm is attached to a hook, which it must use to open a door to a desired angle of 45 degrees. The door is randomly initialized each time to be at a starting angle between 0 and 15 degrees. The state space consists of the (x, y, z) coordinates of the end effector and the door angle (in radians); the action space consists of (x, y, z) velocities.

Sawyer 3D Pick and Place: The Sawyer robot must pick up a ball, which is randomly placed somewhere on the table each time, and raise it to a fixed (x, y, z) location high above the table. This represents the biggest exploration challenge out of all the manipulation tasks, as the state space is large and the agent would normally not receive any learning signal unless it happened to pick up the ball and raise it, which is unlikely without careful reward shaping. The state space consists of the (x, y, z) coordinates of the end effector, the (x, y, z) coordinates of the ball, and the tightness of the gripper (a continuous value between 0 and 1). The robot can control its (x, y, z) arm velocity as well as the gripper value.

A.4.2 GROUND TRUTH DISTANCE METRICS

In addition to the success rate plots in Figure 5, we provide plots of each algorithm's distance to the goal over time according to environment-specific distance metrics. The distance metrics and success thresholds, which were used to compute the success rates in Figure 5, are listed in the table below.

A.5 ADDITIONAL ABLATIONS

A.5.1 LEARNING IN A DISCRETE, RANDOMIZED ENVIRONMENT

In practice, many continuous RL environments such as the ones we consider in Section 6 have state spaces that are correlated at least roughly with the dynamics. For instance, states that are closer together dynamically are also typically closer in the metric space defined by the states. This correlation does not need to be perfect, but as long as it exists, BayCRL can in principle learn a smoothly shaped reward towards the goal.

However, even in the case where states are unstructured and completely lack identity, such as in a discrete gridworld environment, the CNML classifier would still reduce to providing an exploration-centric reward bonus, as indicated by Theorem 4.1, ensuring reasonable worst-case performance. To demonstrate this, we evaluate BayCRL on a variant of the Zigzag Maze task where states are first discretized to a 16 × 16 grid, then "shuffled" so that the xy representation of a state does not correspond to its true coordinates and the states are not correlated dynamically. BayCRL manages to solve the task, while a standard classifier method (VICE) does not. Still, BayCRL is more effective in the original state space where generalization is possible, suggesting that both the exploration and reward shaping abilities of the CNML classifier are crucial to its overall performance.

A.5.2 FINDING "HIDDEN" REWARDS NOT INDICATED BY SUCCESS EXAMPLES

The intended setup for BayCRL (and classifier-based RL algorithms in general) is to provide a set of success examples to learn from, thus removing the need for a manually specified reward function.
However, here we instead consider the case where a ground truth reward function exists which we do not fully know, and can only query through interaction with the environment. In this case, because the human expert has limited knowledge, the provided success examples may not cover all regions of the state space with high reward. An additional advantage of BayCRL is that it is still capable of finding these "unspecified" goals because of its built-in exploration behavior, whereas other classifier methods would operate solely based on the goal examples provided. To see this, we evaluate our algorithm on a two-sided variant of the Zigzag Maze with multiple goals, visualized in Figure 17 to the right. The agent starts in the middle and is provided with 5 goal examples on the far left side of the maze; unknown to it, the right side contains 5 sparse reward regions which are actually closer to its initial position. As shown in Figures 18 and 19, BayCRL manages to find the sparse rewards while other methods do not. BayCRL, although initially guided towards the provided goal examples on the left, continues to explore in both directions and eventually finds the "hidden" rewards on the right. Meanwhile, VICE focuses solely on the provided goals, and gets stuck in a local optimum near the bottom left corner. A.6 HYPERPARAMETER AND IMPLEMENTATION DETAILS We describe the hyperparameter choices and implementation details for our experiments here. We first list the general hyperparameters that were shared across runs, then provide tables of additional hyperparameters we tuned over for each domain and algorithm. Goal Examples: For the classifier-based methods in our experiments (VICE and BayCRL), we provide 150 goal examples for each environment at the start of training. These are used as the pool of positive examples when training the success classifier. DDL Reward: We use the version of DDL proposed in Hartikainen et al. (2019) where we provide the algorithm with the ground truth goal state g, then run SAC with a reward function of r(s) = −dπ(s, g), where dπ is the learned dynamical distance function for the policy at the current iteration of training. A.6.1 ZIGZAG MAZE HYPERPARAMETERS A.6.2 SPIRAL MAZE HYPERPARAMETERS A.6.3 SAWYER PUSH HYPERPARAMETERS A.6.4 SAWYER PICK-AND-PLACE HYPERPARAMETERS A.6.5 SAWYER DOOR OPENING HYPERPARAMETERS A.7 PROOF OF THEOREM 1 CONNECTING NML AND INVERSE COUNTS We provide the proof of Theorem 1 here for completeness. Theorem A.1. Suppose we are estimating success probabilities p(e = 1|s) in the tabular setting, where we have a separate parameter independently for each state. Let N(s) denote the number of times state s has been visited by the policy, and let G(s) be the number of occurrences of state s in the successful outcomes. Then the CNML probability pCNML(e = 1|s) is equal to (G(s) + 1)/(N(s) + G(s) + 2). For states that are never observed to be successful, we then recover inverse counts 1/(N(s) + 2). Proof. In the fully tabular setting, our MLE estimates for p(O|s) are simply given by finding the best parameter ps for each state. The proof then proceeds by simple calculation. For a state with n = N(s) negative occurrences and g = G(s) positive occurrences, the MLE estimate is simply given by g/(n + g). Now for evaluating CNML, we consider appending another instance for each class. The new parameter after appending a negative example is then g/(n + g + 1), which then assigns probability (n + 1)/(n + g + 1) to the negative class. 
Similarly, after appending a positive example, the new parameter is (g + 1)/(n + g + 1), so we try to assign probability (g + 1)/(n + g + 1) to the positive class. Normalizing, we have pCNML(O = 1|s) = (g + 1)/(n + g + 2). (9) For states that have only been visited on-policy and are not included in the set of successful outcomes, the likelihood reduces to pCNML(O = 1|s) = 1/(n + 2). (10)
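The tabular quantities in Theorem A.1 are easy to check numerically. The following sketch is our illustration, not the paper's code; it computes the CNML success probability from visit and success counts and confirms the inverse-count special case.

def cnml_success_prob(n_visits, n_successes):
    # Tabular CNML probability p_CNML(e = 1|s) from Theorem A.1.
    # n_visits:    N(s), number of negative (on-policy) occurrences of state s
    # n_successes: G(s), number of occurrences of s among the success examples
    n, g = n_visits, n_successes
    p_pos = (g + 1) / (n + g + 1)   # MLE after appending a hypothetical positive example
    p_neg = (n + 1) / (n + g + 1)   # probability the negative-augmented MLE assigns to the negative class
    return p_pos / (p_pos + p_neg)  # normalize; equals (g + 1) / (n + g + 2)

# A state never seen among the successes reduces to the inverse count 1 / (n + 2).
assert abs(cnml_success_prob(3, 0) - 1 / 5) < 1e-12
print(cnml_success_prob(10, 2))     # (2 + 1) / (10 + 2 + 2) = 0.2142...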
1. What is the main contribution of the paper regarding solving RL problems? 2. What are the strengths and weaknesses of the proposed approach, particularly in its use of CNML and meta-learning? 3. Do you have any questions or concerns regarding the experimental results and comparisons with other methods? 4. How does the reviewer assess the novelty and generalization abilities of the proposed approach? 5. Are there any suggestions or recommendations for future work or improvements to the current method?
Review
This paper studies how to solve RL problems with a set of success states instead of a standard reward function. The central idea is to first train a Bayesian classifier from both the input success examples and the on-policy samples using the conditional normalized maximum likelihood (CNML) and then use the learned classifier as a reward function to guide exploration. It is proved that in the tabular case, the success classifier trained with CNML is equivalent to a version of count-based exploration, and it is claimed that with function approximation, the classifier attains non-negligible generalization. Empirically, it is claimed that this approach outperforms existing algorithms on a number of navigation and robotic manipulation domains. The novelty of this work lies in the use of CNML to train a more regularized success classifier and the use of meta-learning to implement CNML in practice. There are several concerns I have: Can the authors indicate the following information in the experiment: Which algorithms are provided with success examples as prior knowledge in all testing domains? The descriptions in Sec. 6.1 and 6.2 are a little confusing. In Figure 4, I didn’t see the lines for VICE+count-bonus. The claim that BayCRL outperforms uninformed, task-agnostic exploration (VICE+count-bonus and VICE+RND?) is not surprising since the former has prior knowledge. The authors claimed that the proposed approach can achieve both effective reward-shaping and exploration. I agree with the first point by comparing it with other IRL methods. But how is the latter true? I think this needs to be further demonstrated in a different setting such as the one in the next point. All the tested domains have only one success state. Thus, the example set is informationally complete. I wonder about the robustness of the algorithm if the example set does not contain all success states while a reward function (which is surely super sparse) is provided. For example, if there are 10 ground-truth success states and only 5 are provided in the example set, and the uncovered states are quite remote from the provided ones (but a reward function is available, i.e., if we reach these hidden success states, a positive reward will be received), then how would this degrade the performance of the proposed approach compared with other methods? I think this would also test the generalization/exploration ability of the algorithm from another perspective. I vote for weak reject since both CNML and MAML, including the reformulation of the problem, follow prior works, which somewhat limits the novelty of this paper to applying known algorithms to a defined problem. But I am open to adjusting the score if the rebuttal can address my concerns.
ICLR
Title On Feature Diversity in Energy-based Models Abstract Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches. An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration. In this paper, we focus on the diversity of the produced feature set. We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs. We derive generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we show that indeed reducing redundancy of the feature set can consistently decrease the gap between the true and empirical expectation of the energy and boosts the performance of the model. 1 INTRODUCTION The energy-based learning paradigm was first proposed by Zhu & Mumford (1998); LeCun et al. (2006) as an alternative to probabilistic graphical models (Koller & Friedman, 2009). As their name suggests, energy-based models (EBMs) map each input ‘configuration’ to a single scalar, called the ‘energy’. In the learning phase, the parameters of the model are optimized by associating the desired configurations with small energy values and the undesired ones with higher energy values (Kumar et al., 2019; Song & Ermon, 2019; Yang et al., 2016). In the inference phase, given an incomplete input configuration, the energy surface is explored to find the remaining variables which yield the lowest energy. EBMs encapsulate solutions to several supervised approaches (LeCun et al., 2006; Fang & Liu, 2016) and unsupervised learning problems (Deng et al., 2020; Bakhtin et al., 2021; Zhao et al., 2020; Xu et al., 2022) and provide a common theoretical framework for many learning models, including traditional discriminative (Zhai et al., 2016; Li et al., 2020) and generative (Zhu & Mumford, 1998; Xie et al., 2017b; Zhao et al., 2017; Che et al., 2020; Khalifa et al., 2021) approaches. Formally, let us denote the energy function by E(h,x,y), where h = GW (x) represents the model with parameters W to be optimized during training and x,y are sets of variables. Figure 1 illustrates how classification, regression, and implicit regression can be expressed as EBMs. In Figure 1 (a), a regression scenario is presented. The input x, e.g., an image, is transformed using an inner model GW (x) and its distance, to the second input y is computed yielding the energy function. A valid energy function in this case can be the L1 or the L2 distance. In the binary classification case (Figure 1 (b)), the energy can be defined as E(h,x,y) = −yGW (x) . In the implicit regression case (Figure 1 (c)), we have two inner models and the energy can be defined as the L2 distance between their outputs E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. In the inference phase, given an input x, the label y∗ can be obtained by solving the following optimization problem: y∗ = argmin y E(h,x,y). (1) An EBM typically relies on an inner model, i.e., GW (x), to generate the desired energy landscape (LeCun et al., 2006). Depending on the problem at hand, this function can be constructed as a linear projection, a kernel method, or a neural network and its parameters are optimized in a data-driven manner in the training phase. 
Formally, GW (x) can be written as GW (x) = D∑ i wiϕi(x), (2) where {ϕ1(·), · · · , ϕD(·)} is the feature set, which can be hand-crafted, separately trained from unlabeled data (Zhang & LeCun, 2017), or modeled by a neural network and optimized in the training phase of the EBM model (Xie et al., 2016; Yu et al., 2020; Xie et al., 2021). In the rest of the paper, we assume that the inner models GW defined in the energy-based learning system (Figure 1) are obtained as a weighted sum of different features as expressed in equation 2. In (Zhang, 2013), it was shown that simply minimizing the empirical energy over the training data does not theoretically guarantee the minimization of the expected value of the true energy. Thus, developing and motivating novel regularization techniques is required (Zhang & LeCun, 2017). We argue that the quality of the feature set {ϕ1(·), · · · , ϕD(·)} plays a critical role in the overall performance of the global model. In this work, we extend the theoretical analysis of (Zhang, 2013) and focus on the ‘diversity’ of this set and its effect on the generalization ability of the EBM models. Intuitively, it is clear that a less correlated set of intermediate representations is richer and thus able to capture more complex patterns in the input. Thus, it is important to avoid redundant features for achieving a better performance. However, a theoretical analysis is missing. We start by quantifying the diversity of a set of feature functions. To this end, we introduce ϑ− τ -diversity: Definition 1 ((ϑ− τ )-diversity). A set of feature functions, {ϕ1(·), · · · , ϕD(·)} is called ϑ-diverse, if there exists a constant ϑ ∈ R, such that for every input x we have 1 2 D∑ i ̸=j (ϕi(x)− ϕj(x))2 ≥ ϑ2 (3) with a high probability τ . Intuitively, if two feature maps ϕi(·) and ϕj(·) are non-redundant, they have different outputs for the same input with a high probability. However, if, for example, the features are extracted using a neural network with a ReLU activation function, there is a high probability that some of the features associated with the input will be zero. Thus, defining a lower bound for the pair-wise diversity directly is impractical. Therefore, we quantify diversity as the lower-bound over the sum of the pair-wise distances of the feature maps as expressed in equation 3 and ϑ measures the diversity of a set. In machine learning context, diversity has been explored in ensemble learning (Li et al., 2012; Yu et al., 2011; Li et al., 2017), sampling (Derezinski et al., 2019; Bıyık et al., 2019), ranking (Wu et al., 2019; Qin & Zhu, 2013), pruning (Singh et al., 2020; Lee et al., 2020), and neural networks (Xie et al., 2015; Shen et al., 2021). In Xie et al. (2015; 2017a), it was shown theoretically and experimentally that avoiding redundancy over the weights of a neural network using the mutual angles as a diversity measure improves the generalization ability of the model. In this work, we explore a new line of research, where diversity is defined over the feature maps directly, using the (ϑ− τ )-diversity, in the context of energy-based learning. In (Zhao et al., 2017), a similar idea was empirically explored. A “repelling regularizer” was proposed to force non-redundant or orthogonal feature representations. Moreover, the idea of learning while avoiding redundancy has been used recently in the context of semi-supervised learning (Zbontar et al., 2021; Bardes et al., 2021). 
Reducing redundancy by minimizing the cross-correlation of features learned using a Siamese network (Zbontar et al., 2021) was empirically shown to improve the generalization ability, yet a theoretical analysis to prove this has so far been lacking. In this paper, we close the gap between empirical experience and theory. We theoretically study the generalization ability of EBMs in different learning contexts, i.e., regression, classification, implicit regression, and we derive new generalization bounds using the (ϑ−τ )-diversity providing theoretical guarantees that avoiding redundancy indeed improves the generalization ability of the model. The contributions of this paper can be summarized as follows: • We explore a new line of research, where diversity is defined over the features representing the input data and not over the model’s parameters. To this end, we introduce (ϑ − τ )- diversity as a quantification of the diversity of a given feature set. • We extend the theoretical analysis (Zhang, 2013) and study the effect of avoiding redundancy of a feature set on the generalization of EBMs (Lemmas 3 to 7 and Theorem 1 to 5). • We derive bounds for the expectation of the true energy in different learning contexts, i.e., regression, classification, and implicit regression, using different energy functions. Our analysis consistently shows that avoiding redundancy by increasing the diversity of the feature set can boost the performance of an EBM. 2 PAC-LEARNING OF EBMS WITH (ϑ− τ )-DIVERSITY In this section, we derive a qualitative justification for (ϑ−τ )-diversity using probably approximately correct (PAC) learning (Valiant, 1984; Mohri et al., 2018; Li et al., 2019). The PAC-based theory for standard EBMs has been established in (Zhang, 2013). First, we start by defining Rademacher complexity: Definition 2. (Bartlett & Mendelson, 2002; Mohri et al., 2018) For a given dataset with m samples S = {xi, yi}mi=1 from a distribution D and for a model space F : X → R with a single dimensional output, the Empirical Rademacher complexity R̂m(F) of the set F is defined as follows: R̂m(F) = Eσ [ sup f∈F 1 m m∑ i=1 σif(xi) ] , (4) where the Rademacher variables σ = {σ1, · · · , σm} are independent uniform random variables in {−1, 1}. The Rademacher complexity Rm(F) is defined as the expectation of the Empirical Rademacher complexity over training set, i.e., Rm(F) = ES∼Dm [R̂m(F)]. Based on this quantity, (Bartlett & Mendelson, 2002), several learning guarantees for EBMs have been shown (Zhang, 2013). We recall the following two lemmas related to the estimation error and the Rademacher complexity. In Lemma 2, we present the principal PAC-learning bound for energy functions with finite outputs. Lemma 1. (Wolf, 2018) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. We have Rm(A) ≤ LgRm(F). (5) Lemma 2. (Zhang, 2013) For a well-defined energy function E(h,x,y) over hypothesis class H, input set X and output set Y (LeCun et al., 2006), the following holds for all h in H with a probability of at least 1− δ E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 2Rm(E) +M √ log(2/δ) 2m , (6) where E is the energy function class defined as E = {E(h,x,y)|h ∈ H}, Rm(E) is its Rademacher complexity, and M is the upper bound of E . Lemma 2 provides a generalization bound for EBMs with well-defined (non-negative) and bounded energy. 
The expected energy is bounded using the sum of three terms: The first term is the empirical expectation of energy over the training data, the second term depends on the Rademacher complexity of the energy class, and the third term involves the number of the training data m and the upperbound of the energy function M . This shows that merely minimizing the empirical expectation of energy, i.e., the first term, may not yield a good approximation of the true expectation. In (Zhang & LeCun, 2017), it has been shown that regularization using unlabeled data reduces the second and third terms leading to better generalization. In this work, we express these two terms using the (ϑ− τ )-diversity and show that employing a diversity strategy may also decrease the gap between the true and empirical expectation of the energy. In Section 2.1, we consider the special case of regression and derive two bounds for two energy functions based on L1 and L2 distances. In Section 2.2, we derive a bound for the binary classification task using as energy function E(h,x,y) = −yGW (x) (LeCun et al., 2006). In Section 2.3, we consider the case of implicit regression, which encapsulates different learning problems such as metric learning, generative models, and denoising (LeCun et al., 2006). For this case, we use the L2 distance between the inner models as the energy function. In the rest of the paper, we denote the generalization gap, E(x,y)∼D[E(h,x,y)]− 1m ∑ (x,y)∈S E(h,x,y) by ∆D,SE. All the proofs are presented in the supplementary material. 2.1 REGRESSION TASK Regression can be formulated as an energy-based learning problem (Figure 1 (a)) using the inner model h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x). We assume that the feature set is positive and well-defined over the input domain X , i.e., ∀x ∈ X : ||Φ(x)||2 ≤ A, the hypothesis class can be defined as follows: H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, the output set Y ⊂ R is bounded, i.e., y < B, and the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ . The two valid energy functions which can be used for regression are E2(h,x,y) = 12 ||GW (x)−y|| 2 2 and E1(h,x,y) = ||GW (x)−y||1 (LeCun et al., 2006). We study these two cases separately and we show theoretically that for both energy functions avoiding redundancy improves generalization of the EBM model. ENERGY FUNCTION: E2 In this subsection, we present our theoretical analysis on the effect of diversity on the generalization ability of an EBM defined with the energy function E2(h,x,y) = 12 ||GW (x) − y|| 2 2. We start by the following two Lemmas 3 and 4. Lemma 3. With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2). (7) Lemma 4. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (8) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h, x, y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. Lemmas 3 and 4 bound the supremum of the output of the inner model and the energy function as a function of ϑ, respectively. As it can been seen, both terms are decreasing with respect to diversity. Next, we bound the Rademacher complexity of the energy class, i.e., Rm(E). Lemma 5. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F). (9) Lemma 5 expresses the bound of the Rademacher complexity of the energy class using the diversity constant and the Rademacher complexity of the features. 
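The quantity that Definition 1 lower-bounds, ½ ∑_{i≠j} (ϕi(x) − ϕj(x))², is cheap to evaluate on a batch of feature vectors, which also suggests a simple empirical estimate of ϑ² for a chosen τ. The sketch below is our illustration only; the surrogate features and the quantile-based estimate are assumptions, not part of the paper.

import numpy as np

def pairwise_feature_diversity(features):
    # 0.5 * sum_{i != j} (phi_i(x) - phi_j(x))^2 for each row of `features` (shape: batch x D),
    # using sum_{i != j} (a_i - a_j)^2 = 2 * (D * sum_i a_i^2 - (sum_i a_i)^2).
    d = features.shape[1]
    return d * (features ** 2).sum(axis=1) - features.sum(axis=1) ** 2

def estimate_theta_squared(features, tau=0.95):
    # Empirical theta^2 that holds with probability ~tau over the sampled inputs.
    return np.quantile(pairwise_feature_diversity(features), 1.0 - tau)

rng = np.random.default_rng(0)
phi = np.abs(rng.normal(size=(1000, 16)))   # stand-in for a non-negative feature set
print(estimate_theta_squared(phi, tau=0.95))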
Having expressed the different terms of Lemma 2 using diversity, we now present our main result for an energy-basel model trained defined using E2. The main result is presented in Theorem 1. Theorem 1. For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (10) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Theorem 1 express the special case of Lemma 2 using the (ϑ − τ )-diversity of the feature set {ϕ1(·), · · · , ϕD(·)}. As it can been seen, the bound of the generalization error is inversely proportional to ϑ2. This theoretically shows that reducing redundancy, i.e., increasing ϑ, reduces the gap between the true and the empirical energies and improves the generalization performance of the EBMs. ENERGY FUNCTION: E1 In this subsection, we consider the second case of regression using the energy function E1(h,x,y) = ||GW (x) − y||1. Similar to the previous case, we start by deriving bounds for the energy function and the Rademacher complexity of the class using diversity in Lemmas 6 and 7. Lemma 6. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (11) Lemma 7. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F). (12) Next, we derive the main result of the generalization of the EBMs defined using the energy function E1. The main finding is presented in Theorem 2. Theorem 2. For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (13) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Similar to Theorem 1, in Theorem 2, we consistently find that the bound of the true expectation of the energy is a decreasing function with respect to ϑ. This proves that for the regression task reducing redundancy can improve the generalization performance of the energy-based model. 2.2 BINARY CLASSIFIER Here, we consider the problem of binary classification, as illustrated in Figure 1 (b). Using the same assumption as in regression for the inner model, i.e., h(x) = GW (x) = ∑D i=1 wiϕi(x) = wTΦ(x), energy function of E(h,x,y) = −yGW (x) (LeCun et al., 2006), and the (ϑ−τ )-diversity of the feature set, we express Lemma 2 for this specific configuration in Theorem 3. Theorem 3. For the energy function E(h,x,y) = −yGW (x), over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m . (14) Similar to the regression task, we note that the upper-bound of the true expectation is a decreasing function with respect to the diversity term. 
Thus, a less redundant feature set, i.e., higher ϑ, has a lower upper-bound for the true energy. 2.3 IMPLICIT REGRESSION In this section, we consider the problem of implicit regression. This is a general formulation of a different set of problems such as metric learning, where the goal is to learn a distance function between two domains, image denoising, object detection as illustrated in (LeCun et al., 2006), or semi-supervised learning (Zbontar et al., 2021). This form of EBM (Figure 1 (c)) has two inner models, G1W (·) and G2W (·), which can be equal or different according to the problem at hand. Here, we consider the general case, where the two models correspond to two different combinations of different features, i.e., G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) and G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y). Thus, we have a different (ϑ− τ )-diversity term for each set. The final result is presented in Theorem 4. Theorem 4. For the energy function E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {h(1)(x) = G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), h(2)(x) = G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x : ||Φ(1)(x)||2 ≤ A(1), ∀y : ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H: ∆D,SE ≤ 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) +(J1 + J2) √ log(2/δ) 2m , (15) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . The upper-bound of the energy model depends on the diversity variable of both feature sets. Moreover, we note that the bound for the implicit regression decreases proportionally to ϑ2, as opposed to the classification case for example, where the bound is proportional to ϑ. Thus, we can conclude that reducing redundancy improves the generalization of EBM in the implicit regression context. 2.4 GENERAL DISCUSSION We note that the theory developed in our paper (Theorems 1 to 4) is agnostic to the loss function (LeCun et al., 2006) or the optimization strategy used (Kumar et al., 2019; Song & Ermon, 2019; Yu et al., 2020; Xu et al., 2022). We show that reducing the redundancy of the features consistently decreases the upper-bound of the true expectation of the energy and, thus, can boost the generalization performance of the energy-based model. It also should be noted that A, i.e., the upper bound of the features and ϑ are connected. But our findings can be interpreted as follows: given two models with the same value of A (maximum L2norm of the features), the model with higher diversity ϑ has a lower generalization bound and is likely to generalize better. We note that our analysis is independent of how the features are obtained, e.g., handcrafted or optimized. In fact, in the recent state-of-the-art EBMs (Khalifa et al., 2021; Bakhtin et al., 2021; Yu et al., 2020), the features are typically parameterized using a deep learning model and optimized during training. Our contribution is twofold. First, we provide theoretical guarantees that reducing redundancy in the feature space can indeed improve the generalization of the EBM. 
This can pave the way toward providing theoretical guarantees for works on self-supervised learning using redundancy reduction (Zbontar et al., 2021; Bardes et al., 2021; Zhao et al., 2017). Second, our theory can be used to motivate novel redundancy reduction strategies, for example, in the form of regularization, to avoid learning redundant features. Such strategies can improve the performance of the model and improve generalization. 3 SIMPLE REGULARIZATION ALGORITHM In general, theoretical generalization bounds can be too loose to have direct practical implications (Zhang et al., 2017; Neyshabur et al., 2017). However, they typically suggest a regularizer to promote some desired aspects of the hypothesis class (Xie et al., 2015; Li et al., 2019; Kawaguchi et al., 2017). Accordingly, inspired by the theoretical analysis in Section 2, we propose a straightforward strategy to avoid learning redundant features by regularizing the model during training using a term inversely proportional to the (ϑ − τ)-diversity of the features. Given an EBM model with a learnable feature set {ϕ1(·), · · · , ϕD(·)} and a training set S, we propose to augment the original training loss L as follows: Laug = L − β ∑_{x∈S} ∑_{i≠j} (ϕi(x) − ϕj(x))², (16) where β is a hyper-parameter controlling the contribution of the second term in the total loss. The additional term penalizes the similarities between the distinct features, ensuring that the model learns a diverse and non-redundant mapping of the data. As a result, this can improve the general performance of our model. 3.1 TOY EXAMPLE We first test our regularization strategy on toy data. We use an EBM model to learn the distribution of a 2-D Swiss roll, illustrated in Figure 2 (a). For the EBM, we use a fully connected neural network composed of two intermediate layers with 1000 units and ReLU activations. We train the models using Stochastic Gradient Langevin Dynamics (SGLD) sampling and the contrastive divergence-like algorithm proposed in (Du & Mordatch, 2019). The total objective of the standard EBM is expressed as follows: L = (1/N) ∑_n ( α (E(x_n^+)² + E(x_n^-)²) + E(x_n^+) − E(x_n^-) ), (17) where x_n^+ denote positive samples and x_n^- negative samples. We augment this loss using equation 16, i.e., the features are the latent representations obtained at the last intermediate layer. The distributions learned using the standard and the proposed approach are illustrated using kernel density estimation (Terrell & Scott, 1992) in Figure 2. As can be seen, avoiding redundancy boosts the performance of the EBM model. Indeed, by comparing the two learned distributions, the EBM trained with our approach led to a better approximation of the ground-truth distribution and was able to better capture the tail of the distribution, as opposed to the original EBM. 3.2 IMAGE GENERATION EXAMPLE Recently, there has been high interest in using EBMs to solve image/text generation tasks (Du & Mordatch, 2019; Du et al., 2021; Khalifa et al., 2021; Deng et al., 2020). In this subsection, we validate the proposed regularizer on the simple example of MNIST digit image generation, as in (Du & Mordatch, 2019). For the EBM model, we use a simple CNN model composed of four convolutional layers followed by a linear layer. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics Markov chain Monte Carlo (MCMC) and a sampling buffer to accelerate training. The full details are available in the supplementary material. 
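To make the training objective concrete before turning to the learned features, here is a minimal sketch of how the penalty of equation 16 can be attached to a contrastive-divergence-style loss such as equation 17. This is our illustration rather than the authors' code: the names are ours, the penalty is computed over a mini-batch rather than the full training set, and we apply it to the features of the positive samples, which is one plausible reading of the setup.

import torch

def diversity_penalty(phi):
    # sum over the batch of sum_{i != j} (phi_i(x) - phi_j(x))^2,
    # where phi has shape (batch, D) and holds the last intermediate-layer features.
    d = phi.shape[1]
    return (2.0 * (d * (phi ** 2).sum(dim=1) - phi.sum(dim=1) ** 2)).sum()

def augmented_cd_loss(energy_pos, energy_neg, phi_pos, beta, alpha=1.0):
    # Equation 17 augmented with the diversity term of equation 16.
    cd = (alpha * (energy_pos ** 2 + energy_neg ** 2) + energy_pos - energy_neg).mean()
    return cd - beta * diversity_penalty(phi_pos)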
In this example, the features, i.e., the latent representation obtained at the last intermediate layer, are learned in an end-to-end way. We evaluate the performance of our approach by augmenting the contrastive divergence loss using equation 16 to penalize the feature redundancy. We quantitatively evaluate image quality of EBMs with ‘Fréchet Inception Distance’ (FID) score (Heusel et al., 2017) and the negative log-likelihood (NLL) loss in Table 1 for different values of β. We note that we obtain consistently better FID and NLL scores by penalizing the similarity of the learned features. The best performance is achieved by β = 1e−13, which yields more than 10%, in terms of FID, improvement compared to the original EBM model. To gain insights into the visual performance of our approach, we plot a few intermediate samples of the MCMC sampling (Langevin Dynamics). The results obtained by the EBM with β = 1e−13 are presented in Figure 3. Initiating from random noise, MCMC obtains reasonable figures after only 64 steps. The digits get clearer and more realistic over the iterations. More results are presented in the supplementary material. 3.3 CONTINUAL LEARNING EXAMPLE In this subsection, we validate the proposed regularizer on the Continual Learning (CL) problem. CL tackles the problem of catastrophic forgetting in deep learning models (Parisi et al., 2019; Li & Hoiem, 2017; Shibata et al., 2021). Its main goal is to solve several tasks sequentially without forgetting knowledge learned from the past. So, a continual learner is expected to learn a new task, crucially, without forgetting previous tasks. Recently, an EBM-based CL approach was proposed in (Li et al., 2020) and led to superior results compared to standard approaches. We use the same models and the same experimental protocol used in (Li et al., 2020). However, here we focus only on the class-incremental learning task using CIFAR10 and CIFAR100. We evaluate the performance of our proposed regularizer using both the boundary-aware and boundary-agnostic settings. As defined in (Li et al., 2020), the boundary-aware refers to the situation where the sequence of the tasks has explicit separation between them which is known to the model. The boundary agnostic case refers to the situation where the data distributions gradually changes without a notion of task boundaries. Similar to Section 3.2, we consider as ’features’ the representation obtained by the last intermediate layer. The proposed regularizer is applied on top of this representation. In Table 2, we report the performance of the EBM trained using the original loss and using the loss augmented with our additional term for different values of β. As shown in Table 2, penalizing feature similarity and promoting the diversity of the feature set boosts the performance of the EBM model and consistently leads to a superior accuracy for both datasets. In Figure 4, we display the accumulated classification accuracy, averaged over tasks, on the test set. Along the five tasks, our approach maintains higher classification accuracy than the standard EBM for both the boundary-aware and boundary-agnostic settings. 4 CONCLUSION Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative systems. An EBM is typically formed of one (or many) inner models which learn a combination of different features to generate an energy mapping for each input configuration. 
In this paper, we introduced a feature diversity concept, i.e., (ϑ − τ )-diversity, and we used it to extend the PAC theory of EBMs. We derived different generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we consistently found that reducing the redundancy of the feature set can improve the generalization error of energy-based approaches. We also note that our theory is independent of the loss function or the training strategy used to optimize the parameters of the EBM. This provides theoretical guarantees on learning via feature redundancy reduction. Our preliminary experimental results confirm that this is indeed a promising research direction and can motivate developing other approaches to promoting the diversity of the feature set. Future direction include more extensive experimental evaluation of different feature redundancy reduction approaches. A PROOF OF LEMMA 3 Lemma With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2), (18) where A = supx ||ϕ(x)||2. Proof. h2(x) = ( D∑ i=1 wiϕi(x) )2 ≤ ( D∑ i=1 ||w||∞ϕi(x) )2 = ||w||2∞ ( D∑ i=1 ϕi(x) )2 = ||w||2∞ (∑ i,j ϕi(x)ϕj(x) ) = ||w||2∞ ∑ i ϕi(x) 2 + ∑ i ̸=j ϕi(x)ϕj(x) (19) We have ||Φ(x)||2 ≤ A. For the first term in equation 19, we have ∑ m ϕm(x) 2 ≤ A2. By using the identity ϕm(x)ϕn(x) = 12 ( ϕm(x) 2 + ϕn(x) 2 − (ϕm(x)− ϕn(x))2 ) , the second term can be rewritten as∑ m ̸=n ϕm(x)ϕn(x) = 1 2 ∑ m̸=n ( ϕm(x) 2 + ϕn(x) 2 − ( ϕm(x)− ϕn(x) )2) . (20) In addition, we have with a probability τ , 12 ∑ m ̸=n(ϕm(x)− ϕn(x))2 ≥ ϑ2. Thus, we have with a probability at least τ :∑ m̸=n ϕm(x)ϕn(x) ≤ 1 2 (2(D − 1)A2 − 2ϑ2) = (D − 1)A2 − ϑ2. (21) By putting everything back to equation 19, we have with a probability τ , G2W (x) ≤ ||w||2∞ ( A2 + (D − 1)A2 − ϑ2 ) = ||w||2∞(DA2 − ϑ2). (22) Thus, with a probability τ , sup x,W |h(x)| ≤ √ sup x,W G2W (x) ≤ ||w||∞ √ DA2 − ϑ2. (23) B PROOF OF LEMMA 4 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (24) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h,x,y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. C PROOF OF LEMMA 5 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F) (25) Proof. Using the decomposition property of the Rademacher complexity (if ϕ is a L-Lipschitz function, then Rm(ϕ(A)) ≤ LRm(A)) and given that 12 ||. − y|| 2 is K-Lipschitz with a constant K = supx,y,h||h(x) − y|| ≤ (||w||∞ √ DA2 − ϑ2 + B), we have Rm(E) ≤ KRm(H) = (||w||∞ √ DA2 − ϑ2 + B)Rm(H), where H = {GW (x) = ∑D i=1 wiϕi(x) }. We also know that ||w||1 ≤ D||w||∞. Next, similar to the proof of Theorem 2.10 in (Wolf, 2018), we note that ∑D i=1 wiϕi(x) ∈ (D||w||∞)conv(F + −(F)) := G, where conv denotes the convex hull and F is the set of ϕ functions. Thus, Rm(H) ≤ Rm(G) = D||w||∞Rm(conv(F + (−F)) = D||w||∞Rm(F + (−F)) = 2D||w||∞Rm(F). 
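The bound in equation 23 can be sanity-checked numerically: sample a weight vector and a batch of non-negative feature vectors (as assumed in Section 2.1), take A and ϑ² as the empirical maximum norm and minimum pairwise diversity over the sample, and verify that |h(x)| never exceeds ||w||∞ √(DA² − ϑ²). The snippet below is a check we add for illustration, not part of the paper.

import numpy as np

rng = np.random.default_rng(0)
D, n = 16, 2000
w = rng.normal(size=D)
phi = np.abs(rng.normal(size=(n, D)))                      # non-negative features

A = np.linalg.norm(phi, axis=1).max()                      # sup ||Phi(x)||_2 over the sample
div = D * (phi ** 2).sum(axis=1) - phi.sum(axis=1) ** 2    # 0.5 * sum_{i != j} (phi_i - phi_j)^2
theta_sq = div.min()                                       # empirical theta^2 (tau = 1 on this sample)

h = phi @ w
bound = np.abs(w).max() * np.sqrt(D * A ** 2 - theta_sq)
assert np.all(np.abs(h) <= bound + 1e-9)
print(np.abs(h).max(), "<=", bound)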
D PROOF OF THEOREM 1 Theorem For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (26) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 4 and Lemma 5. E PROOF OF LEMMA 6 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (27) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). F PROOF OF LEMMA 7 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (28) Proof. |.| is 1-Lipschitz, Thus Rm(E) ≤ Rm(H). G PROOF OF THEOREM 2 Theorem For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (29) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 6 and Lemma 7. H PROOF OF THEOREM 3 Lemma 8. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ ||w||∞ √ DA2 − ϑ2. (30) Proof. We have sup−yGW (x) ≤ sup |GW (x)| ≤ ||w||∞ √ DA2 − ϑ2. Lemma 9. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (31) Proof. We note that for y ∈ {−1, 1}, σ and −yσ follow the same distribution. Thus, we have Rm(E) = Rm(H). Next, we note that Rm(H) ≤ 2D||w||∞Rm(F). Theorem 3 For a well-defined energy function E(h,x,y) (LeCun et al., 2006), over hypothesis class H, input set X and output set Y , if it has upper-bound M, then with a probability of at least 1− δ, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m , (32) Proof. We replace the variables in Lemma 1 using Lemma 8 and Lemma 9. I PROOF OF THEOREM 4 Lemma 10. With a probability of at least τ1τ2, we have sup x,y,h |E(h,x,y)| ≤ ( J1 + J2 ) (33) Proof. We have ||G(1)W (x) − G (2) W (y)||22 ≤ 2(||G (1) W (x)||22 + ||G (2) W (y)||22). Similar to Theorem 1, we have sup ||G(1)W (x)||22 ≤ ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) = J1 and sup ||G(2)W (y)||22 ≤ ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) = J2. We also have E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. Lemma 11. With a probability of at least τ1τ2, we have Rm(E) ≤ 4( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) (34) Proof. Let f be the square function, i.e., f(x) = 12x 2 and E0 = {G(1)W (x) − G (2) W (y) | x ∈ X , y ∈ Y}. We have E = f(E0 + (−E0)). f is Lipschitz over the input space, with a constant L bounded by supx,W G (1) W (x) + supy,W G (2) W (y) ≤ √ J1 + √ J2. Thus, we have Rm(E) ≤ ( √ J1 + √ J2)Rm(E0 + (−E0)) ≤ 2( √ J1 + √ J2)Rm(E0). Next, we note that Rm(E0) = Rm(H1 + (−H2)) = Rm(H1) + Rm(H2). Using same as technique as in Lemma 4, we have Rm(H1) ≤ 2D(1)||w(1)||∞Rm(F1) and Rm(H2) ≤ 2D(2)||w(2)||∞Rm(F2). 
Theorem 4 For the energy function E(h,x,y) = 12 ||G (1) W (x) − G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), G (2) W (y) =∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x ||Φ(1)(x)||2 ≤ A(1), ∀y ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) + ( J1 + J2 )√ log(2/δ) 2m , (35) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . Proof. We replace the variables in Lemma 1 using Lemma 10 and Lemma 11. J IMAGE GENERATION EXAMPLE SETTINGS AND ADDITIONAL RESULTS For the EBM model, we used a simple CNN model composed of four convolutional layers followed by a linear layer. The full CNN model is presented in Table 3. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics MCMC and a sampling buffer to accelerate training. All models were trained for 60 epochs using Adam optimizer with learning rate lr = 1e − 4 and a batch size of 128. In addition to the results presented in the paper, Figure 5 presents additional qualitative results. For the first two examples (top ones), the model is able to converge to a realistic image within reasonable amount of iterations. For the last two examples (in the bottom), we present failure cases of our approach. For these two tests, the generated image still improves over iterations. However, the model failed to converge to a clear realistic MNIST image after 256 steps.
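The Langevin-dynamics MCMC used throughout Appendix J follows the generic recipe of (Du & Mordatch, 2019): repeatedly step against the energy gradient and inject Gaussian noise, starting from random noise or from images drawn out of the sampling buffer. The sketch below is our paraphrase of that generic procedure, not the authors' implementation; the step size, noise scale, and gradient clipping are illustrative choices.

import torch

def langevin_sample(energy_fn, x_init, n_steps=64, step_size=10.0, noise_std=0.005):
    # Generic Langevin-dynamics sampler for an energy-based model.
    # energy_fn maps a batch of images to a batch of scalar energies;
    # x_init holds the starting points (noise or buffer samples).
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_fn(x).sum()
        grad, = torch.autograd.grad(energy, x)
        grad = grad.clamp(-0.03, 0.03)                        # common stabilization trick
        x = x - step_size * grad + noise_std * torch.randn_like(x)
        x = x.clamp(0.0, 1.0).detach().requires_grad_(True)   # keep pixel values in range
    return x.detach()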
1. What is the focus and contribution of the paper regarding energy-based models? 2. What are the strengths and weaknesses of the proposed method for feature diversity measurement? 3. Do you have any concerns or suggestions regarding the experimental setup or results? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any important references missing in the paper that should be considered?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper has provided an analysis method to evaluate the feature representation of energy-based models based on feature diversity. The authors extend the probably approximately correct (PAC) theory in the view of redundancy reduction on the performance of energy-based models. Strengths And Weaknesses This paper has provided a way to identify the quantification of feature diversity by the proposed measurement method named “θ-diversity”. The authors extend this idea in PAC learning and show that reducing the redundancy of the feature set can improve the generalization of EBMs. Please refer to the following questions about my concerns. Weakness: For the definition 1, which is the major contribution of this paper, the definition also relies on a probability value τ. Therefore, the method should at least be called θ- τ (like epsilon-delta) to avoid confusion. How to define high probability in definition 1? Is it related to the distribution of x. Should this be considered in the defined boundary? What will happen if there are only a few samples that contain similar contents, where the boundary will be close to zero in most feature functions? How do we treat the effect of the distribution of data (x) on the boundary value? In formula 16, the regularization term is not just about the boundary but an integration over the entire dataset and feature set. Even the experiment could sufficiently show improvement in performance, I can’t see its relationship with the proposed diversity measurement function. In the image generation experiment (3.2), the authors only apply the proposed method on a simple dataset MINIST. There are lots of works based on EBM for image/text generation on a large dataset. The author should at least show some evaluation results to prove the effectiveness of the proposed method. In table 1, it looks like the selection of beta could affect the performance. The author should also conduct an experiment on the effectiveness of the proposed method in terms of beta. In 3.3, what are boundary settings? Writing: It would be better for the authors to reorganize the paper structure in section 1&2. The introduction of energy models applied in classification, regression, or implicit regression tasks should be put in the second part as related materials while a brief introduction about PAC learning should be put in the 1st section since it has a close relationship with the contribution. Missing important references: The description of the advance of EBMs in the first paragraph of the paper is incomplete and lacks important pioneering works. The authors randomly cite EBM works without mentioning those important ones. For example, the first paper that proposes to train a generative EBMs parameterized by a modern deep neural network and learned it by Langevin based MLE is in (Xie. ICML 2016) [1]. The first shallow EBM using Langevin for data generation can date back to 1998 in [2]. After the era of deep learning, the paradigm in [1] has been applied to videos [3], 3D voxels [4], point clouds [5] and scenes [6]. Supervised approaches might include supervised conditional learning [7], saliency prediction [8], and trajectory prediction [9]. New generative EBM frameworks also include coarse-to-fine EBM [12], CoopNet [10], CoopFlow [11], VAEBM [13]. Without knowing the history and the advance of EBMs, the contribution of the paper might be questionable. [1] A Theory of Generative ConvNet. ICML 2016. [2] Grade: Gibbs reaction and diffusion equations. ICCV 1998. 
[3] Synthesizing Dynamic Pattern by Spatial-Temporal Generative ConvNet. CVPR 2017. [4] Learning Descriptor Networks for 3D Shape Synthesis and Analysis. CVPR 2018. [5] Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification. CVPR 2021. [6] Patchwise Generative ConvNet: Training Energy-Based Models from a Single Natural Image for Internal Learning. CVPR 2021. [7] Cooperative Training of Fast Thinking Initializer and Slow Thinking Solver for Conditional Learning. TPAMI 2021. [8] Energy-Based Generative Cooperative Saliency Prediction. AAAI 2022 [9] Energy-Based Continuous Inverse Optimal Control. TNNLS 2022 [10] Cooperative Training of Descriptor and Generator Networks. PAMI 2018 [11] A Tale of Two Flows: Cooperative Learning of Langevin Flow and Normalizing Flow Toward Energy-Based Model. ICLR 2022. [12] Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling. ICLR 2021. [13] VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models. ICLR 2021. Clarity, Quality, Novelty And Reproducibility Clarity and Quality: The materials in section 1&2 need to be re-organized. Novelty: Moderately. Reproducibility: Should be easy to reproduce.
ICLR
Title On Feature Diversity in Energy-based Models Abstract Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches. An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration. In this paper, we focus on the diversity of the produced feature set. We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs. We derive generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we show that indeed reducing redundancy of the feature set can consistently decrease the gap between the true and empirical expectation of the energy and boosts the performance of the model. 1 INTRODUCTION The energy-based learning paradigm was first proposed by Zhu & Mumford (1998); LeCun et al. (2006) as an alternative to probabilistic graphical models (Koller & Friedman, 2009). As their name suggests, energy-based models (EBMs) map each input ‘configuration’ to a single scalar, called the ‘energy’. In the learning phase, the parameters of the model are optimized by associating the desired configurations with small energy values and the undesired ones with higher energy values (Kumar et al., 2019; Song & Ermon, 2019; Yang et al., 2016). In the inference phase, given an incomplete input configuration, the energy surface is explored to find the remaining variables which yield the lowest energy. EBMs encapsulate solutions to several supervised approaches (LeCun et al., 2006; Fang & Liu, 2016) and unsupervised learning problems (Deng et al., 2020; Bakhtin et al., 2021; Zhao et al., 2020; Xu et al., 2022) and provide a common theoretical framework for many learning models, including traditional discriminative (Zhai et al., 2016; Li et al., 2020) and generative (Zhu & Mumford, 1998; Xie et al., 2017b; Zhao et al., 2017; Che et al., 2020; Khalifa et al., 2021) approaches. Formally, let us denote the energy function by E(h,x,y), where h = GW (x) represents the model with parameters W to be optimized during training and x,y are sets of variables. Figure 1 illustrates how classification, regression, and implicit regression can be expressed as EBMs. In Figure 1 (a), a regression scenario is presented. The input x, e.g., an image, is transformed using an inner model GW (x) and its distance, to the second input y is computed yielding the energy function. A valid energy function in this case can be the L1 or the L2 distance. In the binary classification case (Figure 1 (b)), the energy can be defined as E(h,x,y) = −yGW (x) . In the implicit regression case (Figure 1 (c)), we have two inner models and the energy can be defined as the L2 distance between their outputs E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. In the inference phase, given an input x, the label y∗ can be obtained by solving the following optimization problem: y∗ = argmin y E(h,x,y). (1) An EBM typically relies on an inner model, i.e., GW (x), to generate the desired energy landscape (LeCun et al., 2006). Depending on the problem at hand, this function can be constructed as a linear projection, a kernel method, or a neural network and its parameters are optimized in a data-driven manner in the training phase. 
Formally, GW (x) can be written as GW (x) = D∑ i wiϕi(x), (2) where {ϕ1(·), · · · , ϕD(·)} is the feature set, which can be hand-crafted, separately trained from unlabeled data (Zhang & LeCun, 2017), or modeled by a neural network and optimized in the training phase of the EBM model (Xie et al., 2016; Yu et al., 2020; Xie et al., 2021). In the rest of the paper, we assume that the inner models GW defined in the energy-based learning system (Figure 1) are obtained as a weighted sum of different features as expressed in equation 2. In (Zhang, 2013), it was shown that simply minimizing the empirical energy over the training data does not theoretically guarantee the minimization of the expected value of the true energy. Thus, developing and motivating novel regularization techniques is required (Zhang & LeCun, 2017). We argue that the quality of the feature set {ϕ1(·), · · · , ϕD(·)} plays a critical role in the overall performance of the global model. In this work, we extend the theoretical analysis of (Zhang, 2013) and focus on the ‘diversity’ of this set and its effect on the generalization ability of the EBM models. Intuitively, it is clear that a less correlated set of intermediate representations is richer and thus able to capture more complex patterns in the input. Thus, it is important to avoid redundant features for achieving a better performance. However, a theoretical analysis is missing. We start by quantifying the diversity of a set of feature functions. To this end, we introduce ϑ− τ -diversity: Definition 1 ((ϑ− τ )-diversity). A set of feature functions, {ϕ1(·), · · · , ϕD(·)} is called ϑ-diverse, if there exists a constant ϑ ∈ R, such that for every input x we have 1 2 D∑ i ̸=j (ϕi(x)− ϕj(x))2 ≥ ϑ2 (3) with a high probability τ . Intuitively, if two feature maps ϕi(·) and ϕj(·) are non-redundant, they have different outputs for the same input with a high probability. However, if, for example, the features are extracted using a neural network with a ReLU activation function, there is a high probability that some of the features associated with the input will be zero. Thus, defining a lower bound for the pair-wise diversity directly is impractical. Therefore, we quantify diversity as the lower-bound over the sum of the pair-wise distances of the feature maps as expressed in equation 3 and ϑ measures the diversity of a set. In machine learning context, diversity has been explored in ensemble learning (Li et al., 2012; Yu et al., 2011; Li et al., 2017), sampling (Derezinski et al., 2019; Bıyık et al., 2019), ranking (Wu et al., 2019; Qin & Zhu, 2013), pruning (Singh et al., 2020; Lee et al., 2020), and neural networks (Xie et al., 2015; Shen et al., 2021). In Xie et al. (2015; 2017a), it was shown theoretically and experimentally that avoiding redundancy over the weights of a neural network using the mutual angles as a diversity measure improves the generalization ability of the model. In this work, we explore a new line of research, where diversity is defined over the feature maps directly, using the (ϑ− τ )-diversity, in the context of energy-based learning. In (Zhao et al., 2017), a similar idea was empirically explored. A “repelling regularizer” was proposed to force non-redundant or orthogonal feature representations. Moreover, the idea of learning while avoiding redundancy has been used recently in the context of semi-supervised learning (Zbontar et al., 2021; Bardes et al., 2021). 
Reducing redundancy by minimizing the cross-correlation of features learned using a Siamese network (Zbontar et al., 2021) was empirically shown to improve the generalization ability, yet a theoretical analysis to prove this has so far been lacking. In this paper, we close the gap between empirical experience and theory. We theoretically study the generalization ability of EBMs in different learning contexts, i.e., regression, classification, implicit regression, and we derive new generalization bounds using the (ϑ−τ )-diversity providing theoretical guarantees that avoiding redundancy indeed improves the generalization ability of the model. The contributions of this paper can be summarized as follows: • We explore a new line of research, where diversity is defined over the features representing the input data and not over the model’s parameters. To this end, we introduce (ϑ − τ )- diversity as a quantification of the diversity of a given feature set. • We extend the theoretical analysis (Zhang, 2013) and study the effect of avoiding redundancy of a feature set on the generalization of EBMs (Lemmas 3 to 7 and Theorem 1 to 5). • We derive bounds for the expectation of the true energy in different learning contexts, i.e., regression, classification, and implicit regression, using different energy functions. Our analysis consistently shows that avoiding redundancy by increasing the diversity of the feature set can boost the performance of an EBM. 2 PAC-LEARNING OF EBMS WITH (ϑ− τ )-DIVERSITY In this section, we derive a qualitative justification for (ϑ−τ )-diversity using probably approximately correct (PAC) learning (Valiant, 1984; Mohri et al., 2018; Li et al., 2019). The PAC-based theory for standard EBMs has been established in (Zhang, 2013). First, we start by defining Rademacher complexity: Definition 2. (Bartlett & Mendelson, 2002; Mohri et al., 2018) For a given dataset with m samples S = {xi, yi}mi=1 from a distribution D and for a model space F : X → R with a single dimensional output, the Empirical Rademacher complexity R̂m(F) of the set F is defined as follows: R̂m(F) = Eσ [ sup f∈F 1 m m∑ i=1 σif(xi) ] , (4) where the Rademacher variables σ = {σ1, · · · , σm} are independent uniform random variables in {−1, 1}. The Rademacher complexity Rm(F) is defined as the expectation of the Empirical Rademacher complexity over training set, i.e., Rm(F) = ES∼Dm [R̂m(F)]. Based on this quantity, (Bartlett & Mendelson, 2002), several learning guarantees for EBMs have been shown (Zhang, 2013). We recall the following two lemmas related to the estimation error and the Rademacher complexity. In Lemma 2, we present the principal PAC-learning bound for energy functions with finite outputs. Lemma 1. (Wolf, 2018) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. We have Rm(A) ≤ LgRm(F). (5) Lemma 2. (Zhang, 2013) For a well-defined energy function E(h,x,y) over hypothesis class H, input set X and output set Y (LeCun et al., 2006), the following holds for all h in H with a probability of at least 1− δ E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 2Rm(E) +M √ log(2/δ) 2m , (6) where E is the energy function class defined as E = {E(h,x,y)|h ∈ H}, Rm(E) is its Rademacher complexity, and M is the upper bound of E . Lemma 2 provides a generalization bound for EBMs with well-defined (non-negative) and bounded energy. 
The expected energy is bounded by the sum of three terms: the first term is the empirical expectation of the energy over the training data, the second term depends on the Rademacher complexity of the energy class, and the third term involves the number of training samples m and the upper bound M of the energy function. This shows that merely minimizing the empirical expectation of the energy, i.e., the first term, may not yield a good approximation of the true expectation. In (Zhang & LeCun, 2017), it has been shown that regularization using unlabeled data reduces the second and third terms, leading to better generalization. In this work, we express these two terms using (ϑ−τ)-diversity and show that employing a diversity strategy may also decrease the gap between the true and empirical expectations of the energy.

In Section 2.1, we consider the special case of regression and derive two bounds for two energy functions based on the L1 and L2 distances. In Section 2.2, we derive a bound for the binary classification task using the energy function E(h,x,y) = −y G_W(x) (LeCun et al., 2006). In Section 2.3, we consider the case of implicit regression, which encapsulates different learning problems such as metric learning, generative models, and denoising (LeCun et al., 2006). For this case, we use the L2 distance between the inner models as the energy function. In the rest of the paper, we denote the generalization gap \mathbb{E}_{(x,y)\sim D}[E(h,x,y)] - \frac{1}{m}\sum_{(x,y)\in S} E(h,x,y) by \Delta_{D,S}E. All proofs are presented in the supplementary material.

2.1 REGRESSION TASK

Regression can be formulated as an energy-based learning problem (Figure 1 (a)) using the inner model h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x). We assume that the feature set is positive and well-defined over the input domain X, i.e., ∀x ∈ X : ||Φ(x)||_2 ≤ A, that the hypothesis class is defined as H = {h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x) | Φ ∈ F, ∀x : ||Φ(x)||_2 ≤ A}, that the output set Y ⊂ R is bounded, i.e., y < B, and that the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ. The two valid energy functions which can be used for regression are E_2(h,x,y) = \frac{1}{2}||G_W(x) - y||_2^2 and E_1(h,x,y) = ||G_W(x) - y||_1 (LeCun et al., 2006). We study these two cases separately and show theoretically that for both energy functions avoiding redundancy improves the generalization of the EBM model.

ENERGY FUNCTION: E2

In this subsection, we present our theoretical analysis of the effect of diversity on the generalization ability of an EBM defined with the energy function E_2(h,x,y) = \frac{1}{2}||G_W(x) - y||_2^2. We start with the following two lemmas.

Lemma 3. With a probability of at least τ, we have

\sup_{x,W} |h(x)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2}. (7)

Lemma 4. With a probability of at least τ, we have

\sup_{x,y,h} |E(h,x,y)| \leq \frac{1}{2}\big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big)^2. (8)

Proof. We have \sup_{x,y,h} |h(x) - y| \leq \sup_{x,y,h}(|h(x)| + |y|) = ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B. Thus \sup_{x,y,h} |E(h,x,y)| \leq \frac{1}{2}( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B )^2.

Lemmas 3 and 4 bound the supremum of the output of the inner model and of the energy function, respectively, as functions of ϑ. As can be seen, both bounds decrease as the diversity increases. Next, we bound the Rademacher complexity of the energy class, i.e., R_m(\mathcal{E}).

Lemma 5. With a probability of at least τ, we have

R_m(\mathcal{E}) \leq 2D||w||_\infty \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) R_m(\mathcal{F}). (9)

Lemma 5 bounds the Rademacher complexity of the energy class using the diversity constant and the Rademacher complexity of the features.
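As a sanity check on Lemma 3, the bound can be verified numerically: for non-negative features (as assumed above), taking A as the largest feature norm and ϑ² as the smallest pairwise-distance sum over a sample makes the inequality hold on that sample. The sketch below is ours; the random non-negative features and weights are placeholders chosen only for illustration:

import numpy as np

rng = np.random.default_rng(0)
D, n = 8, 500
phi = np.abs(rng.standard_normal((n, D)))           # non-negative features phi_i(x) for n inputs
w = rng.uniform(-1.0, 1.0, size=D)                   # inner-model weights

A = np.linalg.norm(phi, axis=1).max()                # A = max_x ||Phi(x)||_2 over the sample
pairwise = ((phi[:, :, None] - phi[:, None, :]) ** 2).sum(axis=(1, 2))
theta_sq = 0.5 * pairwise.min()                      # smallest diversity term over the sample

lhs = np.abs(phi @ w).max()                          # sup_x |h(x)| on the sample
rhs = np.abs(w).max() * np.sqrt(D * A**2 - theta_sq)
print(lhs <= rhs + 1e-9)                             # Lemma 3 holds: prints True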
Having expressed the different terms of Lemma 2 using diversity, we now present our main result for an energy-based model defined using E2, stated in Theorem 1.

Theorem 1. For the energy function E(h,x,y) = \frac{1}{2}||G_W(x) - y||_2^2, over the input set X ⊂ R^N, hypothesis class H = {h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x) | Φ ∈ F, ∀x : ||Φ(x)||_2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ, then with a probability of at least (1−δ)τ, the following holds for all h in H:

\Delta_{D,S}E \leq 4D||w||_\infty \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) R_m(\mathcal{F}) + \frac{1}{2}\big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big)^2 \sqrt{\frac{\log(2/\delta)}{2m}}, (10)

where B is the upper bound of Y, i.e., y ≤ B, ∀y ∈ Y.

Theorem 1 expresses the special case of Lemma 2 using the (ϑ−τ)-diversity of the feature set {ϕ1(·), · · · , ϕD(·)}. As can be seen, the bound on the generalization gap decreases as ϑ2 increases. This shows theoretically that reducing redundancy, i.e., increasing ϑ, reduces the gap between the true and the empirical energies and improves the generalization performance of the EBM.

ENERGY FUNCTION: E1

In this subsection, we consider the second case of regression, using the energy function E_1(h,x,y) = ||G_W(x) - y||_1. Similar to the previous case, we start by deriving bounds for the energy function and the Rademacher complexity of the class using diversity, in Lemmas 6 and 7.

Lemma 6. With a probability of at least τ, we have

\sup_{x,y,h} |E(h,x,y)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B. (11)

Lemma 7. With a probability of at least τ, we have

R_m(\mathcal{E}) \leq 2D||w||_\infty R_m(\mathcal{F}). (12)

Next, we derive the main generalization result for EBMs defined using the energy function E1, presented in Theorem 2.

Theorem 2. For the energy function E(h,x,y) = ||G_W(x) - y||_1, over the input set X ⊂ R^N, hypothesis class H = {h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x) | Φ ∈ F, ∀x : ||Φ(x)||_2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ, then with a probability of at least (1−δ)τ, the following holds for all h in H:

\Delta_{D,S}E \leq 4D||w||_\infty R_m(\mathcal{F}) + \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) \sqrt{\frac{\log(2/\delta)}{2m}}, (13)

where B is the upper bound of Y, i.e., y ≤ B, ∀y ∈ Y.

As in Theorem 1, we find in Theorem 2 that the bound on the true expectation of the energy is a decreasing function of ϑ. This shows that, for the regression task, reducing redundancy can improve the generalization performance of the energy-based model.

2.2 BINARY CLASSIFIER

Here, we consider the problem of binary classification, as illustrated in Figure 1 (b). Using the same assumption as in regression for the inner model, i.e., h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x), the energy function E(h,x,y) = −y G_W(x) (LeCun et al., 2006), and the (ϑ−τ)-diversity of the feature set, we express Lemma 2 for this specific configuration in Theorem 3.

Theorem 3. For the energy function E(h,x,y) = −y G_W(x), over the input set X ⊂ R^N, hypothesis class H = {h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x) | Φ ∈ F, ∀x : ||Φ(x)||_2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ, then with a probability of at least (1−δ)τ, the following holds for all h in H:

\Delta_{D,S}E \leq 4D||w||_\infty R_m(\mathcal{F}) + ||w||_\infty \sqrt{DA^2 - \vartheta^2} \sqrt{\frac{\log(2/\delta)}{2m}}. (14)

As for the regression task, the upper bound on the true expectation is a decreasing function of the diversity term. Thus, a less redundant feature set, i.e., one with higher ϑ, has a lower upper bound on the true energy.
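To visualize the monotone dependence discussed after Theorems 1 to 3, one can simply evaluate the right-hand sides of equations 10, 13 and 14 while sweeping ϑ. In the sketch below (ours), the constants D, A, B, ||w||∞, R_m(F), m and δ are arbitrary placeholder values chosen only to illustrate the trend; note that for Theorems 2 and 3 only the confidence term shrinks with ϑ, since their Rademacher term does not involve the diversity constant:

import numpy as np

D, A, B, w_inf, Rm_F, m, delta = 16, 1.0, 1.0, 0.5, 0.05, 1000, 0.05
conf = np.sqrt(np.log(2 / delta) / (2 * m))

for theta in [0.0, 1.0, 2.0, 3.0]:
    root = np.sqrt(D * A**2 - theta**2)
    thm1 = 4 * D * w_inf * (w_inf * root + B) * Rm_F + 0.5 * (w_inf * root + B) ** 2 * conf  # eq. 10
    thm2 = 4 * D * w_inf * Rm_F + (w_inf * root + B) * conf                                   # eq. 13
    thm3 = 4 * D * w_inf * Rm_F + w_inf * root * conf                                         # eq. 14
    print(f"theta={theta:.1f}  T1={thm1:.3f}  T2={thm2:.3f}  T3={thm3:.3f}")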
2.3 IMPLICIT REGRESSION

In this section, we consider the problem of implicit regression. This is a general formulation of a set of different problems, such as metric learning, where the goal is to learn a distance function between two domains, image denoising, and object detection, as illustrated in (LeCun et al., 2006), or semi-supervised learning (Zbontar et al., 2021). This form of EBM (Figure 1 (c)) has two inner models, G^{(1)}_W(·) and G^{(2)}_W(·), which can be equal or different depending on the problem at hand. Here, we consider the general case, where the two models correspond to two different combinations of different features, i.e., G^{(1)}_W(x) = \sum_{i=1}^{D^{(1)}} w^{(1)}_i \phi^{(1)}_i(x) and G^{(2)}_W(y) = \sum_{i=1}^{D^{(2)}} w^{(2)}_i \phi^{(2)}_i(y). Thus, we have a separate (ϑ−τ)-diversity term for each set. The final result is presented in Theorem 4.

Theorem 4. For the energy function E(h,x,y) = \frac{1}{2}||G^{(1)}_W(x) - G^{(2)}_W(y)||_2^2, over the input set X ⊂ R^N, hypothesis class H = {h^{(1)}(x) = G^{(1)}_W(x) = \sum_{i=1}^{D^{(1)}} w^{(1)}_i \phi^{(1)}_i(x) = w^{(1)T}\Phi^{(1)}(x), h^{(2)}(y) = G^{(2)}_W(y) = \sum_{i=1}^{D^{(2)}} w^{(2)}_i \phi^{(2)}_i(y) = w^{(2)T}\Phi^{(2)}(y) | Φ^{(1)} ∈ F_1, Φ^{(2)} ∈ F_2, ∀x : ||Φ^{(1)}(x)||_2 ≤ A^{(1)}, ∀y : ||Φ^{(2)}(y)||_2 ≤ A^{(2)}}, and output set Y ⊂ R^N, if the feature set {ϕ^{(1)}_1(·), · · · , ϕ^{(1)}_{D^{(1)}}(·)} is ϑ^{(1)}-diverse with a probability τ_1 and the feature set {ϕ^{(2)}_1(·), · · · , ϕ^{(2)}_{D^{(2)}}(·)} is ϑ^{(2)}-diverse with a probability τ_2, then with a probability of at least (1−δ)τ_1τ_2, the following holds for all h in H:

\Delta_{D,S}E \leq 8(\sqrt{J_1} + \sqrt{J_2}) \big( D^{(1)}||w^{(1)}||_\infty R_m(\mathcal{F}_1) + D^{(2)}||w^{(2)}||_\infty R_m(\mathcal{F}_2) \big) + (J_1 + J_2) \sqrt{\frac{\log(2/\delta)}{2m}}, (15)

where J_1 = ||w^{(1)}||_\infty^2 \big( D^{(1)} A^{(1)2} - \vartheta^{(1)2} \big) and J_2 = ||w^{(2)}||_\infty^2 \big( D^{(2)} A^{(2)2} - \vartheta^{(2)2} \big).

The upper bound on the energy model depends on the diversity variables of both feature sets. Moreover, we note that the bound for implicit regression decreases proportionally to ϑ2, as opposed to, for example, the classification case, where the bound is proportional to ϑ. Thus, we can conclude that reducing redundancy improves the generalization of EBMs in the implicit regression context.

2.4 GENERAL DISCUSSION

We note that the theory developed in our paper (Theorems 1 to 4) is agnostic to the loss function (LeCun et al., 2006) and to the optimization strategy used (Kumar et al., 2019; Song & Ermon, 2019; Yu et al., 2020; Xu et al., 2022). We show that reducing the redundancy of the features consistently decreases the upper bound on the true expectation of the energy and can thus boost the generalization performance of the energy-based model. It should also be noted that A, i.e., the upper bound of the features, and ϑ are connected. Our findings can be interpreted as follows: given two models with the same value of A (the maximum L2 norm of the features), the model with higher diversity ϑ has a lower generalization bound and is likely to generalize better. We note that our analysis is independent of how the features are obtained, e.g., handcrafted or optimized. In fact, in recent state-of-the-art EBMs (Khalifa et al., 2021; Bakhtin et al., 2021; Yu et al., 2020), the features are typically parameterized using a deep learning model and optimized during training. Our contribution is twofold. First, we provide theoretical guarantees that reducing redundancy in the feature space can indeed improve the generalization of the EBM.
This can pave the way toward providing theoretical guarantees for works on self-supervised learning that use redundancy reduction (Zbontar et al., 2021; Bardes et al., 2021; Zhao et al., 2017). Second, our theory can be used to motivate novel redundancy reduction strategies, for example in the form of regularization, to avoid learning redundant features. Such strategies can improve the performance of the model and its generalization.

3 SIMPLE REGULARIZATION ALGORITHM

In general, theoretical generalization bounds can be too loose to have direct practical implications (Zhang et al., 2017; Neyshabur et al., 2017). However, they typically suggest a regularizer that promotes some desired aspect of the hypothesis class (Xie et al., 2015; Li et al., 2019; Kawaguchi et al., 2017). Accordingly, inspired by the theoretical analysis in Section 2, we propose a straightforward strategy to avoid learning redundant features: we regularize the model during training by adding a term that decreases as the (ϑ−τ)-diversity of the features increases. Given an EBM model with a learnable feature set {ϕ1(·), · · · , ϕD(·)} and a training set S, we propose to augment the original training loss L as follows:

L_{aug} = L - \beta \sum_{x \in S} \sum_{i \neq j}^{D} \big(\phi_i(x) - \phi_j(x)\big)^2, (16)

where β is a hyper-parameter controlling the contribution of the second term to the total loss. The additional term penalizes similarities between distinct features, encouraging the model to learn a diverse and non-redundant mapping of the data. As a result, this can improve the overall performance of the model.

3.1 TOY EXAMPLE

We first test our regularization strategy on toy data. We use an EBM model to learn the distribution of the 2-D Swiss roll illustrated in Figure 2 (a). For the EBM, we use a fully connected neural network composed of two intermediate layers with 1000 units and ReLU activations. We train the models using Stochastic Gradient Langevin Dynamics (SGLD) sampling and the contrastive-divergence-like algorithm proposed in (Du & Mordatch, 2019). The total objective of the standard EBM is expressed as follows:

L = \frac{1}{N} \sum_n \Big( \alpha \big( E(x_n^+)^2 + E(x_n^-)^2 \big) + E(x_n^+) - E(x_n^-) \Big), (17)

where x_n^+ denotes positive samples and x_n^- negative samples. We augment this loss using equation 16, where the features are the latent representations obtained at the last intermediate layer. The distributions learned with the standard and the proposed approach are illustrated using kernel density estimation (Terrell & Scott, 1992) in Figure 2. As can be seen, avoiding redundancy boosts the performance of the EBM model. Indeed, comparing the two learned distributions, the EBM trained with our approach yields a better approximation of the ground-truth distribution and is able to better capture the tail of the distribution, as opposed to the original EBM.

3.2 IMAGE GENERATION EXAMPLE

Recently, there has been great interest in using EBMs to solve image/text generation tasks (Du & Mordatch, 2019; Du et al., 2021; Khalifa et al., 2021; Deng et al., 2020). In this subsection, we validate the proposed regularizer on the simple example of MNIST digit image generation, as in (Du & Mordatch, 2019). For the EBM model, we use a simple CNN composed of four convolutional layers followed by a linear layer. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics Markov chain Monte Carlo (MCMC) and a sampling buffer to accelerate training. The full details are available in the supplementary material.
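For readers who want to reproduce the regularized training used in Sections 3.1 and 3.2, equations 16 and 17 translate into only a few lines of code. The following PyTorch-style sketch is ours and assumes the EBM exposes both its scalar energies and the last-layer features; the function names and the value of β are placeholders taken from the range we search over, not a prescribed implementation:

import torch

def diversity_penalty(phi):
    # phi: (batch, D) tensor of last-layer features; returns the sum over the batch of
    # sum_{i != j} (phi_i(x) - phi_j(x))^2, i.e. the term subtracted in equation 16.
    diffs = phi.unsqueeze(2) - phi.unsqueeze(1)       # (batch, D, D)
    return (diffs ** 2).sum()

def augmented_cd_loss(e_pos, e_neg, phi_pos, alpha=1.0, beta=1e-13):
    # Contrastive-divergence-style objective of equation 17, with the diversity term of
    # equation 16 subtracted so that increasing feature diversity lowers the total loss.
    cd = (alpha * (e_pos ** 2 + e_neg ** 2) + e_pos - e_neg).mean()
    return cd - beta * diversity_penalty(phi_pos)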
In this example, the features, i.e., the latent representations obtained at the last intermediate layer, are learned in an end-to-end way. We evaluate our approach by augmenting the contrastive divergence loss with equation 16 to penalize feature redundancy. We quantitatively evaluate the image quality of the EBMs using the Fréchet Inception Distance (FID) score (Heusel et al., 2017) and the negative log-likelihood (NLL) loss, reported in Table 1 for different values of β. We obtain consistently better FID and NLL scores by penalizing the similarity of the learned features. The best performance is achieved with β = 1e−13, which yields an improvement of more than 10% in FID compared to the original EBM model. To gain insight into the visual performance of our approach, we plot a few intermediate samples of the MCMC sampling (Langevin dynamics). The results obtained by the EBM with β = 1e−13 are presented in Figure 3. Starting from random noise, the MCMC obtains reasonable figures after only 64 steps, and the digits become clearer and more realistic over the iterations. More results are presented in the supplementary material.

3.3 CONTINUAL LEARNING EXAMPLE

In this subsection, we validate the proposed regularizer on the Continual Learning (CL) problem. CL tackles the problem of catastrophic forgetting in deep learning models (Parisi et al., 2019; Li & Hoiem, 2017; Shibata et al., 2021). Its main goal is to solve several tasks sequentially without forgetting knowledge learned in the past; a continual learner is expected to learn a new task, crucially, without forgetting previous tasks. Recently, an EBM-based CL approach was proposed in (Li et al., 2020) and led to superior results compared to standard approaches. We use the same models and the same experimental protocol as in (Li et al., 2020). However, here we focus only on the class-incremental learning task, using CIFAR10 and CIFAR100. We evaluate the performance of our proposed regularizer in both the boundary-aware and the boundary-agnostic settings. As defined in (Li et al., 2020), the boundary-aware setting refers to the situation where the sequence of tasks has explicit separations between tasks that are known to the model, whereas the boundary-agnostic setting refers to the situation where the data distribution changes gradually without a notion of task boundaries. As in Section 3.2, we consider as 'features' the representation obtained at the last intermediate layer, and the proposed regularizer is applied on top of this representation.

In Table 2, we report the performance of the EBM trained with the original loss and with the loss augmented by our additional term for different values of β. As shown in Table 2, penalizing feature similarity and promoting the diversity of the feature set boosts the performance of the EBM model and consistently leads to superior accuracy on both datasets. In Figure 4, we display the accumulated classification accuracy, averaged over tasks, on the test set. Across the five tasks, our approach maintains higher classification accuracy than the standard EBM in both the boundary-aware and boundary-agnostic settings.

4 CONCLUSION

Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative systems. An EBM is typically formed of one (or several) inner models which learn a combination of different features to generate an energy mapping for each input configuration.
In this paper, we introduced a feature diversity concept, i.e., (ϑ−τ)-diversity, and used it to extend the PAC theory of EBMs. We derived different generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions, and we consistently found that reducing the redundancy of the feature set can reduce the generalization error of energy-based approaches. We also note that our theory is independent of the loss function or the training strategy used to optimize the parameters of the EBM. This provides theoretical support for learning via feature redundancy reduction. Our preliminary experimental results confirm that this is indeed a promising research direction and can motivate the development of other approaches to promoting the diversity of the feature set. Future directions include a more extensive experimental evaluation of different feature redundancy reduction approaches.

A PROOF OF LEMMA 3

Lemma. With a probability of at least τ, we have

\sup_{x,W} |h(x)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2}, (18)

where A = \sup_x ||\Phi(x)||_2.

Proof.

h^2(x) = \Big( \sum_{i=1}^{D} w_i \phi_i(x) \Big)^2 \leq \Big( \sum_{i=1}^{D} ||w||_\infty \phi_i(x) \Big)^2 = ||w||_\infty^2 \Big( \sum_{i=1}^{D} \phi_i(x) \Big)^2 = ||w||_\infty^2 \sum_{i,j} \phi_i(x)\phi_j(x) = ||w||_\infty^2 \Big( \sum_i \phi_i(x)^2 + \sum_{i \neq j} \phi_i(x)\phi_j(x) \Big). (19)

We have ||Φ(x)||_2 ≤ A. For the first term in equation 19, we have \sum_m \phi_m(x)^2 \leq A^2. Using the identity \phi_m(x)\phi_n(x) = \frac{1}{2}\big( \phi_m(x)^2 + \phi_n(x)^2 - (\phi_m(x) - \phi_n(x))^2 \big), the second term can be rewritten as

\sum_{m \neq n} \phi_m(x)\phi_n(x) = \frac{1}{2} \sum_{m \neq n} \Big( \phi_m(x)^2 + \phi_n(x)^2 - \big(\phi_m(x) - \phi_n(x)\big)^2 \Big). (20)

In addition, with a probability τ, we have \frac{1}{2} \sum_{m \neq n} (\phi_m(x) - \phi_n(x))^2 \geq \vartheta^2. Thus, with a probability of at least τ:

\sum_{m \neq n} \phi_m(x)\phi_n(x) \leq \frac{1}{2}\big( 2(D-1)A^2 - 2\vartheta^2 \big) = (D-1)A^2 - \vartheta^2. (21)

Putting everything back into equation 19, we have with a probability τ,

G_W^2(x) \leq ||w||_\infty^2 \big( A^2 + (D-1)A^2 - \vartheta^2 \big) = ||w||_\infty^2 (DA^2 - \vartheta^2). (22)

Thus, with a probability τ,

\sup_{x,W} |h(x)| \leq \sqrt{ \sup_{x,W} G_W^2(x) } \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2}. (23)

B PROOF OF LEMMA 4

Lemma. With a probability of at least τ, we have

\sup_{x,y,h} |E(h,x,y)| \leq \frac{1}{2}\big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big)^2. (24)

Proof. We have \sup_{x,y,h} |h(x) - y| \leq \sup_{x,y,h}(|h(x)| + |y|) = ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B. Thus \sup_{x,y,h} |E(h,x,y)| \leq \frac{1}{2}( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B )^2.

C PROOF OF LEMMA 5

Lemma. With a probability of at least τ, we have

R_m(\mathcal{E}) \leq 2D||w||_\infty \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) R_m(\mathcal{F}). (25)

Proof. Using the contraction property of the Rademacher complexity (if g is an L-Lipschitz function, then R_m(g \circ \mathcal{A}) \leq L R_m(\mathcal{A})) and given that \frac{1}{2}||\cdot - y||^2 is K-Lipschitz with a constant K = \sup_{x,y,h} ||h(x) - y|| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B, we have R_m(\mathcal{E}) \leq K R_m(\mathcal{H}) = \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) R_m(\mathcal{H}), where H = {G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x)}. We also know that ||w||_1 \leq D||w||_\infty. Next, similarly to the proof of Theorem 2.10 in (Wolf, 2018), we note that \sum_{i=1}^{D} w_i \phi_i(x) \in (D||w||_\infty)\,\mathrm{conv}(\mathcal{F} + (-\mathcal{F})) := \mathcal{G}, where conv denotes the convex hull and F is the set of ϕ functions. Thus, R_m(\mathcal{H}) \leq R_m(\mathcal{G}) = D||w||_\infty R_m(\mathrm{conv}(\mathcal{F} + (-\mathcal{F}))) = D||w||_\infty R_m(\mathcal{F} + (-\mathcal{F})) = 2D||w||_\infty R_m(\mathcal{F}).
D PROOF OF THEOREM 1

Theorem. For the energy function E(h,x,y) = \frac{1}{2}||G_W(x) - y||_2^2, over the input set X ⊂ R^N, hypothesis class H = {h(x) = G_W(x) = \sum_{i=1}^{D} w_i \phi_i(x) = w^T \Phi(x) | Φ ∈ F, ∀x : ||Φ(x)||_2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ, then with a probability of at least (1−δ)τ, the following holds for all h in H:

\mathbb{E}_{(x,y)\sim D}[E(h,x,y)] \leq \frac{1}{m} \sum_{(x,y)\in S} E(h,x,y) + 4D||w||_\infty \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) R_m(\mathcal{F}) + \frac{1}{2}\big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big)^2 \sqrt{\frac{\log(2/\delta)}{2m}}, (26)

where B is the upper bound of Y, i.e., y ≤ B, ∀y ∈ Y.

Proof. We replace the corresponding terms in Lemma 2 using Lemma 4 and Lemma 5.

E PROOF OF LEMMA 6

Lemma. With a probability of at least τ, we have

\sup_{x,y,h} |E(h,x,y)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B. (27)

Proof. We have \sup_{x,y,h} |h(x) - y| \leq \sup_{x,y,h}(|h(x)| + |y|) = ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B.

F PROOF OF LEMMA 7

Lemma. With a probability of at least τ, we have

R_m(\mathcal{E}) \leq 2D||w||_\infty R_m(\mathcal{F}). (28)

Proof. |·| is 1-Lipschitz; thus R_m(\mathcal{E}) \leq R_m(\mathcal{H}) \leq 2D||w||_\infty R_m(\mathcal{F}), where the last inequality follows as in the proof of Lemma 5.

G PROOF OF THEOREM 2

Theorem. For the energy function E(h,x,y) = ||G_W(x) - y||_1, over the input set X ⊂ R^N, the hypothesis class H and output set Y ⊂ R defined as in Theorem 1, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ, then with a probability of at least (1−δ)τ, the following holds for all h in H:

\mathbb{E}_{(x,y)\sim D}[E(h,x,y)] \leq \frac{1}{m} \sum_{(x,y)\in S} E(h,x,y) + 4D||w||_\infty R_m(\mathcal{F}) + \big( ||w||_\infty \sqrt{DA^2 - \vartheta^2} + B \big) \sqrt{\frac{\log(2/\delta)}{2m}}, (29)

where B is the upper bound of Y, i.e., y ≤ B, ∀y ∈ Y.

Proof. We replace the corresponding terms in Lemma 2 using Lemma 6 and Lemma 7.

H PROOF OF THEOREM 3

Lemma 8. With a probability of at least τ, we have

\sup_{x,y,h} |E(h,x,y)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2}. (30)

Proof. We have \sup (-y G_W(x)) \leq \sup |G_W(x)| \leq ||w||_\infty \sqrt{DA^2 - \vartheta^2}.

Lemma 9. With a probability of at least τ, we have

R_m(\mathcal{E}) \leq 2D||w||_\infty R_m(\mathcal{F}). (31)

Proof. We note that for y ∈ {−1, 1}, σ and −yσ follow the same distribution. Thus, we have R_m(\mathcal{E}) = R_m(\mathcal{H}). Next, we note that R_m(\mathcal{H}) \leq 2D||w||_\infty R_m(\mathcal{F}).

Theorem 3. For the energy function E(h,x,y) = −y G_W(x), over the hypothesis class H, input set X, and output set Y defined above, with a probability of at least (1−δ)τ, the following holds for all h in H:

\mathbb{E}_{(x,y)\sim D}[E(h,x,y)] \leq \frac{1}{m} \sum_{(x,y)\in S} E(h,x,y) + 4D||w||_\infty R_m(\mathcal{F}) + ||w||_\infty \sqrt{DA^2 - \vartheta^2} \sqrt{\frac{\log(2/\delta)}{2m}}. (32)

Proof. We replace the corresponding terms in Lemma 2 using Lemma 8 and Lemma 9.

I PROOF OF THEOREM 4

Lemma 10. With a probability of at least τ_1τ_2, we have

\sup_{x,y,h} |E(h,x,y)| \leq J_1 + J_2. (33)

Proof. We have ||G^{(1)}_W(x) - G^{(2)}_W(y)||_2^2 \leq 2\big( ||G^{(1)}_W(x)||_2^2 + ||G^{(2)}_W(y)||_2^2 \big). Similarly to Theorem 1, we have \sup ||G^{(1)}_W(x)||_2^2 \leq ||w^{(1)}||_\infty^2 ( D^{(1)} A^{(1)2} - \vartheta^{(1)2} ) = J_1 and \sup ||G^{(2)}_W(y)||_2^2 \leq ||w^{(2)}||_\infty^2 ( D^{(2)} A^{(2)2} - \vartheta^{(2)2} ) = J_2. We also have E(h,x,y) = \frac{1}{2}||G^{(1)}_W(x) - G^{(2)}_W(y)||_2^2.

Lemma 11. With a probability of at least τ_1τ_2, we have

R_m(\mathcal{E}) \leq 4(\sqrt{J_1} + \sqrt{J_2}) \big( D^{(1)}||w^{(1)}||_\infty R_m(\mathcal{F}_1) + D^{(2)}||w^{(2)}||_\infty R_m(\mathcal{F}_2) \big). (34)

Proof. Let f be the square function, i.e., f(x) = \frac{1}{2}x^2, and E_0 = { G^{(1)}_W(x) - G^{(2)}_W(y) | x ∈ X, y ∈ Y }. We have \mathcal{E} = f(\mathcal{E}_0 + (-\mathcal{E}_0)). f is Lipschitz over the input space, with a constant L bounded by \sup_{x,W} G^{(1)}_W(x) + \sup_{y,W} G^{(2)}_W(y) \leq \sqrt{J_1} + \sqrt{J_2}. Thus, we have R_m(\mathcal{E}) \leq (\sqrt{J_1} + \sqrt{J_2}) R_m(\mathcal{E}_0 + (-\mathcal{E}_0)) \leq 2(\sqrt{J_1} + \sqrt{J_2}) R_m(\mathcal{E}_0). Next, we note that R_m(\mathcal{E}_0) = R_m(\mathcal{H}_1 + (-\mathcal{H}_2)) = R_m(\mathcal{H}_1) + R_m(\mathcal{H}_2). Using the same technique as in the proof of Lemma 5, we have R_m(\mathcal{H}_1) \leq 2D^{(1)}||w^{(1)}||_\infty R_m(\mathcal{F}_1) and R_m(\mathcal{H}_2) \leq 2D^{(2)}||w^{(2)}||_\infty R_m(\mathcal{F}_2).
Theorem 4. For the energy function E(h,x,y) = \frac{1}{2}||G^{(1)}_W(x) - G^{(2)}_W(y)||_2^2, over the input set X ⊂ R^N, hypothesis class H = { G^{(1)}_W(x) = \sum_{i=1}^{D^{(1)}} w^{(1)}_i \phi^{(1)}_i(x) = w^{(1)T}\Phi^{(1)}(x), G^{(2)}_W(y) = \sum_{i=1}^{D^{(2)}} w^{(2)}_i \phi^{(2)}_i(y) = w^{(2)T}\Phi^{(2)}(y) | Φ^{(1)} ∈ F_1, Φ^{(2)} ∈ F_2, ∀x : ||Φ^{(1)}(x)||_2 ≤ A^{(1)}, ∀y : ||Φ^{(2)}(y)||_2 ≤ A^{(2)} }, and output set Y ⊂ R^N, if the feature set {ϕ^{(1)}_1(·), · · · , ϕ^{(1)}_{D^{(1)}}(·)} is ϑ^{(1)}-diverse with a probability τ_1 and the feature set {ϕ^{(2)}_1(·), · · · , ϕ^{(2)}_{D^{(2)}}(·)} is ϑ^{(2)}-diverse with a probability τ_2, then with a probability of at least (1−δ)τ_1τ_2, the following holds for all h in H:

\mathbb{E}_{(x,y)\sim D}[E(h,x,y)] \leq \frac{1}{m} \sum_{(x,y)\in S} E(h,x,y) + 8(\sqrt{J_1} + \sqrt{J_2}) \big( D^{(1)}||w^{(1)}||_\infty R_m(\mathcal{F}_1) + D^{(2)}||w^{(2)}||_\infty R_m(\mathcal{F}_2) \big) + (J_1 + J_2) \sqrt{\frac{\log(2/\delta)}{2m}}, (35)

where J_1 = ||w^{(1)}||_\infty^2 \big( D^{(1)} A^{(1)2} - \vartheta^{(1)2} \big) and J_2 = ||w^{(2)}||_\infty^2 \big( D^{(2)} A^{(2)2} - \vartheta^{(2)2} \big).

Proof. We replace the corresponding terms in Lemma 2 using Lemma 10 and Lemma 11.

J IMAGE GENERATION EXAMPLE: SETTINGS AND ADDITIONAL RESULTS

For the EBM model, we used a simple CNN composed of four convolutional layers followed by a linear layer. The full CNN model is presented in Table 3. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics MCMC and a sampling buffer to accelerate training. All models were trained for 60 epochs using the Adam optimizer with learning rate lr = 1e−4 and a batch size of 128. In addition to the results presented in the paper, Figure 5 presents additional qualitative results. For the first two examples (top), the model is able to converge to a realistic image within a reasonable number of iterations. For the last two examples (bottom), we present failure cases of our approach: the generated images still improve over the iterations, but the model fails to converge to clear, realistic MNIST digits after 256 steps.
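For completeness, the sampling step of the protocol referenced above (Langevin dynamics MCMC with a replay buffer) can be sketched as follows. This is our own simplified Python/PyTorch rendering of the generic short-run Langevin update; the step size, noise scale, number of steps, and buffer-reuse probability are illustrative assumptions rather than the exact values used in our experiments:

import torch

def langevin_sample(energy_model, x, n_steps=64, step_size=10.0, noise_std=0.005):
    # Short-run Langevin dynamics: x <- x - step_size * dE/dx + Gaussian noise.
    x = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_model(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).clamp(-1.0, 1.0)
        x = x.detach().requires_grad_(True)
    return x.detach()

def init_negatives(buffer, batch_size, shape, reuse_prob=0.95):
    # Sampling buffer: most chains restart from previously generated samples,
    # a few from uniform noise, which is what accelerates training.
    if len(buffer) == 0:
        return torch.rand(batch_size, *shape) * 2.0 - 1.0
    n_new = (torch.rand(batch_size) > reuse_prob).sum().item()
    fresh = torch.rand(n_new, *shape) * 2.0 - 1.0
    idx = torch.randint(0, len(buffer), (batch_size - n_new,))
    return torch.cat([fresh, buffer[idx]], dim=0)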
1. What is the main contribution of the paper regarding energy-based models?
2. What are the strengths and weaknesses of the proposed regularizer term?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's experimental section?
5. Can the authors provide more explicit links between their approach and related work in self-supervised learning?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper provides a new regulariser term for training energy-based models, which promotes feature diversity. A theoretical analysis of the generalisation performance of energy-based models using the PAC-learning framework gives solid motivating evidence for the need for the regulariser. Specifically, the authors consider an existing generalisation bound for energy models (where the difference between the true and empirical averaged energy score is upper bounded by two terms, namely the Rademacher complexity of the energy function and its supremum value) and extend the theory by expressing the two terms in the bound as a function of the parameter involved in the new regulariser term. The analysis is carried out on three different kinds of energy functions for the purposes of regression, binary classification and implicit regression. Experiments are performed:
• On a synthetic dataset, showing the benefits of including the regulariser over unregularised approaches.
• On an MNIST task for implicit density estimation, thus evaluating generative and log-likelihood performance.
• On a continual learning task on CIFAR10 and CIFAR100, thus providing evidence of the improved predictive performance.

Strengths And Weaknesses
Strengths
• Regularization in energy-based models is an important and relevant problem.
• The proposed regulariser is to my knowledge new and can potentially link to recent work on redundancy reduction criteria used in self-supervised learning.
• The theory for the regulariser is simple and at the same time elegant, and it provides a solid motivation for the need of the regulariser. Additionally, I think that the analysis can trigger new discussion in the community of energy-based models and inspire new regularising approaches.
• The paper is well-written and clear. I enjoyed reading it.

Weaknesses
• The claim that "the theory developed in our paper is agnostic to the loss function" is not correct (this appears in several parts of the paper). Indeed, note that the contributing term on the Rademacher complexity of the energy function in Theorem 2 (regression, using the L1-norm energy score) and in Theorem 3 (binary classification, using a cross-entropy-like energy score) doesn't depend on the diversity parameter from the regulariser. Consequently, minimising the proposed regulariser doesn't guarantee a reduction of the gap between the true and empirical estimates of the average energy score in those two cases. On the contrary, it seems that the generalisation performance depends on the definition of the energy score function. Can you be more precise and elaborate on this aspect?
• The claim that "we provide theoretical guarantees for works on self-supervised learning showing that reducing redundancy in the feature space can indeed improve the generalisation of the EBMs" is not precise. Indeed, note that the link between feature diversity and the objectives considered by negative-free approaches in self-supervised learning is not clear yet and should be made more explicit. The latter approaches typically attempt to increase the correlation between representations of different views of the same data, while reducing their redundancy. It has been shown that this can be related to the principle of the information bottleneck. Would it be possible to make a clear link between the proposed diversity regulariser and the information bottleneck? This would strengthen the value of the proposed theory.
• Code is not available.

Minor questions
• In the proof of Lemma 4, shouldn't you consider using the L2 norm in order to be consistent with the definition of the energy function?
• The range of values for the hyper-parameter beta in the regulariser is weird. Do you have an intuition on why you consider such small values? Should this range also be considered in other tasks?

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is clear and well-written. I enjoyed reading it.
Quality: The methodology used in the theoretical analysis is correct and the results are sound. However, some of the claims are not well supported. The evaluation methodology is also technically correct.
Novelty: The paper proposes a new regulariser to promote feature diversity in energy-based models and an accompanying theory motivating its need. The theory sheds new light on the generalisation performance of energy-based models and opens up new directions for regularising such models. Additionally, it can potentially connect to recent work using the principle of redundancy minimisation in self-supervised learning.
Reproducibility: Code is not available.
ICLR
Formally, GW (x) can be written as GW (x) = D∑ i wiϕi(x), (2) where {ϕ1(·), · · · , ϕD(·)} is the feature set, which can be hand-crafted, separately trained from unlabeled data (Zhang & LeCun, 2017), or modeled by a neural network and optimized in the training phase of the EBM model (Xie et al., 2016; Yu et al., 2020; Xie et al., 2021). In the rest of the paper, we assume that the inner models GW defined in the energy-based learning system (Figure 1) are obtained as a weighted sum of different features as expressed in equation 2. In (Zhang, 2013), it was shown that simply minimizing the empirical energy over the training data does not theoretically guarantee the minimization of the expected value of the true energy. Thus, developing and motivating novel regularization techniques is required (Zhang & LeCun, 2017). We argue that the quality of the feature set {ϕ1(·), · · · , ϕD(·)} plays a critical role in the overall performance of the global model. In this work, we extend the theoretical analysis of (Zhang, 2013) and focus on the ‘diversity’ of this set and its effect on the generalization ability of the EBM models. Intuitively, it is clear that a less correlated set of intermediate representations is richer and thus able to capture more complex patterns in the input. Thus, it is important to avoid redundant features for achieving a better performance. However, a theoretical analysis is missing. We start by quantifying the diversity of a set of feature functions. To this end, we introduce ϑ− τ -diversity: Definition 1 ((ϑ− τ )-diversity). A set of feature functions, {ϕ1(·), · · · , ϕD(·)} is called ϑ-diverse, if there exists a constant ϑ ∈ R, such that for every input x we have 1 2 D∑ i ̸=j (ϕi(x)− ϕj(x))2 ≥ ϑ2 (3) with a high probability τ . Intuitively, if two feature maps ϕi(·) and ϕj(·) are non-redundant, they have different outputs for the same input with a high probability. However, if, for example, the features are extracted using a neural network with a ReLU activation function, there is a high probability that some of the features associated with the input will be zero. Thus, defining a lower bound for the pair-wise diversity directly is impractical. Therefore, we quantify diversity as the lower-bound over the sum of the pair-wise distances of the feature maps as expressed in equation 3 and ϑ measures the diversity of a set. In machine learning context, diversity has been explored in ensemble learning (Li et al., 2012; Yu et al., 2011; Li et al., 2017), sampling (Derezinski et al., 2019; Bıyık et al., 2019), ranking (Wu et al., 2019; Qin & Zhu, 2013), pruning (Singh et al., 2020; Lee et al., 2020), and neural networks (Xie et al., 2015; Shen et al., 2021). In Xie et al. (2015; 2017a), it was shown theoretically and experimentally that avoiding redundancy over the weights of a neural network using the mutual angles as a diversity measure improves the generalization ability of the model. In this work, we explore a new line of research, where diversity is defined over the feature maps directly, using the (ϑ− τ )-diversity, in the context of energy-based learning. In (Zhao et al., 2017), a similar idea was empirically explored. A “repelling regularizer” was proposed to force non-redundant or orthogonal feature representations. Moreover, the idea of learning while avoiding redundancy has been used recently in the context of semi-supervised learning (Zbontar et al., 2021; Bardes et al., 2021). 
Reducing redundancy by minimizing the cross-correlation of features learned using a Siamese network (Zbontar et al., 2021) was empirically shown to improve the generalization ability, yet a theoretical analysis to prove this has so far been lacking. In this paper, we close the gap between empirical experience and theory. We theoretically study the generalization ability of EBMs in different learning contexts, i.e., regression, classification, implicit regression, and we derive new generalization bounds using the (ϑ−τ )-diversity providing theoretical guarantees that avoiding redundancy indeed improves the generalization ability of the model. The contributions of this paper can be summarized as follows: • We explore a new line of research, where diversity is defined over the features representing the input data and not over the model’s parameters. To this end, we introduce (ϑ − τ )- diversity as a quantification of the diversity of a given feature set. • We extend the theoretical analysis (Zhang, 2013) and study the effect of avoiding redundancy of a feature set on the generalization of EBMs (Lemmas 3 to 7 and Theorem 1 to 5). • We derive bounds for the expectation of the true energy in different learning contexts, i.e., regression, classification, and implicit regression, using different energy functions. Our analysis consistently shows that avoiding redundancy by increasing the diversity of the feature set can boost the performance of an EBM. 2 PAC-LEARNING OF EBMS WITH (ϑ− τ )-DIVERSITY In this section, we derive a qualitative justification for (ϑ−τ )-diversity using probably approximately correct (PAC) learning (Valiant, 1984; Mohri et al., 2018; Li et al., 2019). The PAC-based theory for standard EBMs has been established in (Zhang, 2013). First, we start by defining Rademacher complexity: Definition 2. (Bartlett & Mendelson, 2002; Mohri et al., 2018) For a given dataset with m samples S = {xi, yi}mi=1 from a distribution D and for a model space F : X → R with a single dimensional output, the Empirical Rademacher complexity R̂m(F) of the set F is defined as follows: R̂m(F) = Eσ [ sup f∈F 1 m m∑ i=1 σif(xi) ] , (4) where the Rademacher variables σ = {σ1, · · · , σm} are independent uniform random variables in {−1, 1}. The Rademacher complexity Rm(F) is defined as the expectation of the Empirical Rademacher complexity over training set, i.e., Rm(F) = ES∼Dm [R̂m(F)]. Based on this quantity, (Bartlett & Mendelson, 2002), several learning guarantees for EBMs have been shown (Zhang, 2013). We recall the following two lemmas related to the estimation error and the Rademacher complexity. In Lemma 2, we present the principal PAC-learning bound for energy functions with finite outputs. Lemma 1. (Wolf, 2018) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. We have Rm(A) ≤ LgRm(F). (5) Lemma 2. (Zhang, 2013) For a well-defined energy function E(h,x,y) over hypothesis class H, input set X and output set Y (LeCun et al., 2006), the following holds for all h in H with a probability of at least 1− δ E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 2Rm(E) +M √ log(2/δ) 2m , (6) where E is the energy function class defined as E = {E(h,x,y)|h ∈ H}, Rm(E) is its Rademacher complexity, and M is the upper bound of E . Lemma 2 provides a generalization bound for EBMs with well-defined (non-negative) and bounded energy. 
The expected energy is bounded using the sum of three terms: The first term is the empirical expectation of energy over the training data, the second term depends on the Rademacher complexity of the energy class, and the third term involves the number of the training data m and the upperbound of the energy function M . This shows that merely minimizing the empirical expectation of energy, i.e., the first term, may not yield a good approximation of the true expectation. In (Zhang & LeCun, 2017), it has been shown that regularization using unlabeled data reduces the second and third terms leading to better generalization. In this work, we express these two terms using the (ϑ− τ )-diversity and show that employing a diversity strategy may also decrease the gap between the true and empirical expectation of the energy. In Section 2.1, we consider the special case of regression and derive two bounds for two energy functions based on L1 and L2 distances. In Section 2.2, we derive a bound for the binary classification task using as energy function E(h,x,y) = −yGW (x) (LeCun et al., 2006). In Section 2.3, we consider the case of implicit regression, which encapsulates different learning problems such as metric learning, generative models, and denoising (LeCun et al., 2006). For this case, we use the L2 distance between the inner models as the energy function. In the rest of the paper, we denote the generalization gap, E(x,y)∼D[E(h,x,y)]− 1m ∑ (x,y)∈S E(h,x,y) by ∆D,SE. All the proofs are presented in the supplementary material. 2.1 REGRESSION TASK Regression can be formulated as an energy-based learning problem (Figure 1 (a)) using the inner model h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x). We assume that the feature set is positive and well-defined over the input domain X , i.e., ∀x ∈ X : ||Φ(x)||2 ≤ A, the hypothesis class can be defined as follows: H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, the output set Y ⊂ R is bounded, i.e., y < B, and the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ . The two valid energy functions which can be used for regression are E2(h,x,y) = 12 ||GW (x)−y|| 2 2 and E1(h,x,y) = ||GW (x)−y||1 (LeCun et al., 2006). We study these two cases separately and we show theoretically that for both energy functions avoiding redundancy improves generalization of the EBM model. ENERGY FUNCTION: E2 In this subsection, we present our theoretical analysis on the effect of diversity on the generalization ability of an EBM defined with the energy function E2(h,x,y) = 12 ||GW (x) − y|| 2 2. We start by the following two Lemmas 3 and 4. Lemma 3. With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2). (7) Lemma 4. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (8) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h, x, y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. Lemmas 3 and 4 bound the supremum of the output of the inner model and the energy function as a function of ϑ, respectively. As it can been seen, both terms are decreasing with respect to diversity. Next, we bound the Rademacher complexity of the energy class, i.e., Rm(E). Lemma 5. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F). (9) Lemma 5 expresses the bound of the Rademacher complexity of the energy class using the diversity constant and the Rademacher complexity of the features. 
Having expressed the different terms of Lemma 2 using diversity, we now present our main result for an energy-basel model trained defined using E2. The main result is presented in Theorem 1. Theorem 1. For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (10) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Theorem 1 express the special case of Lemma 2 using the (ϑ − τ )-diversity of the feature set {ϕ1(·), · · · , ϕD(·)}. As it can been seen, the bound of the generalization error is inversely proportional to ϑ2. This theoretically shows that reducing redundancy, i.e., increasing ϑ, reduces the gap between the true and the empirical energies and improves the generalization performance of the EBMs. ENERGY FUNCTION: E1 In this subsection, we consider the second case of regression using the energy function E1(h,x,y) = ||GW (x) − y||1. Similar to the previous case, we start by deriving bounds for the energy function and the Rademacher complexity of the class using diversity in Lemmas 6 and 7. Lemma 6. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (11) Lemma 7. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F). (12) Next, we derive the main result of the generalization of the EBMs defined using the energy function E1. The main finding is presented in Theorem 2. Theorem 2. For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (13) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Similar to Theorem 1, in Theorem 2, we consistently find that the bound of the true expectation of the energy is a decreasing function with respect to ϑ. This proves that for the regression task reducing redundancy can improve the generalization performance of the energy-based model. 2.2 BINARY CLASSIFIER Here, we consider the problem of binary classification, as illustrated in Figure 1 (b). Using the same assumption as in regression for the inner model, i.e., h(x) = GW (x) = ∑D i=1 wiϕi(x) = wTΦ(x), energy function of E(h,x,y) = −yGW (x) (LeCun et al., 2006), and the (ϑ−τ )-diversity of the feature set, we express Lemma 2 for this specific configuration in Theorem 3. Theorem 3. For the energy function E(h,x,y) = −yGW (x), over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m . (14) Similar to the regression task, we note that the upper-bound of the true expectation is a decreasing function with respect to the diversity term. 
Thus, a less redundant feature set, i.e., higher ϑ, has a lower upper-bound for the true energy. 2.3 IMPLICIT REGRESSION In this section, we consider the problem of implicit regression. This is a general formulation of a different set of problems such as metric learning, where the goal is to learn a distance function between two domains, image denoising, object detection as illustrated in (LeCun et al., 2006), or semi-supervised learning (Zbontar et al., 2021). This form of EBM (Figure 1 (c)) has two inner models, G1W (·) and G2W (·), which can be equal or different according to the problem at hand. Here, we consider the general case, where the two models correspond to two different combinations of different features, i.e., G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) and G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y). Thus, we have a different (ϑ− τ )-diversity term for each set. The final result is presented in Theorem 4. Theorem 4. For the energy function E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {h(1)(x) = G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), h(2)(x) = G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x : ||Φ(1)(x)||2 ≤ A(1), ∀y : ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H: ∆D,SE ≤ 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) +(J1 + J2) √ log(2/δ) 2m , (15) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . The upper-bound of the energy model depends on the diversity variable of both feature sets. Moreover, we note that the bound for the implicit regression decreases proportionally to ϑ2, as opposed to the classification case for example, where the bound is proportional to ϑ. Thus, we can conclude that reducing redundancy improves the generalization of EBM in the implicit regression context. 2.4 GENERAL DISCUSSION We note that the theory developed in our paper (Theorems 1 to 4) is agnostic to the loss function (LeCun et al., 2006) or the optimization strategy used (Kumar et al., 2019; Song & Ermon, 2019; Yu et al., 2020; Xu et al., 2022). We show that reducing the redundancy of the features consistently decreases the upper-bound of the true expectation of the energy and, thus, can boost the generalization performance of the energy-based model. It also should be noted that A, i.e., the upper bound of the features and ϑ are connected. But our findings can be interpreted as follows: given two models with the same value of A (maximum L2norm of the features), the model with higher diversity ϑ has a lower generalization bound and is likely to generalize better. We note that our analysis is independent of how the features are obtained, e.g., handcrafted or optimized. In fact, in the recent state-of-the-art EBMs (Khalifa et al., 2021; Bakhtin et al., 2021; Yu et al., 2020), the features are typically parameterized using a deep learning model and optimized during training. Our contribution is twofold. First, we provide theoretical guarantees that reducing redundancy in the feature space can indeed improve the generalization of the EBM. 
This can pave the way toward providing theoretical guarantees for WORKS ON SELF-SUPERVISED LEARNING using redundancy reduction Zbontar et al. (2021); Bardes et al. (2021); Zhao et al. (2017). Second, our theory can be used to motivate novel redundancy reduction strategies, for example, in the form of regularization, to avoid learning redundant features. Such strategies can improve the performance of the model and improve generalization. 3 SIMPLE REGULARIZATION ALGORITHM In general, theoretical generalization bounds can be too loose to be direct practical implications (Zhang et al., 2017; Neyshabur et al., 2017). However, they typically suggest a regularizer to promote some desired aspects of the hypothesis class (Xie et al., 2015; Li et al., 2019; Kawaguchi et al., 2017). Accordingly, inspired by the theoretical analysis in Section 2, we propose a straightforward strategy to avoid learning redundant features by regularizing the model during the training using a term inversely proportional to ϑ− τ -diversity of the features. Given an EBM model with a learnable feature set {ϕ1(·), · · · , ϕD(·)} and a training set S, we propose to augment the original training loss L as follows: Laug = L− β ∑ x∈S D∑ i̸=j (ϕi(x)− ϕj(x))2, (16) where β is a hyper-parameter controlling the contribution of the second term in the total loss. The additional term penalizes the similarities between the distinct features ensuring learning a diverse and non-redundant mapping of the data. As a result, this can improve the general performance of our model. 3.1 TOY EXAMPLE We test our regularization strategy first using a toy data. We use an EBM model to learn the distribution of a 2-D Swiss roll illustrated in Figure 2 (a). For the EBM, we use a fully connected neural network composed of two intermediate layers with 1000 units and ReLu activations. We train the models using Stochastic Gradient Langevin Dynamics (SGLD) sampling and the contrastive divergence-like algorithm proposed in (Du & Mordatch, 2019). The total objective of the standard EBM is expressed as follows: L = 1 N ∑ n ( α ( E(x+n ) 2 + E(x−n ) 2) + E(x+n )− E(x−n ) ) , (17) where x+n denote positive samples and x − n negative samples. We augment this loss using equation 16, i.e., the features are the latent representations obtained at the last intermediate layer. The distribution learned using both the standard and the proposed approach are illustrated using the kernel density estimation (Terrell & Scott, 1992) in Figure 2. As it can be seen, avoiding redundancy boosts the performance of the EBM model. Indeed, by comparing the two learned distributions, the EBM trained with our approach led to a better approximation of the ground-truth distribution and was able to better capture the tail of the distribution as opposed to the original EBM. 3.2 IMAGE GENERATION EXAMPLE Recently, there has been a high interest in using EBMs to solve image/text generation tasks Du & Mordatch (2019); Du et al. (2021); Khalifa et al. (2021); Deng et al. (2020). In this subsection, we validate the proposed regularizer on the simple example of MNIST digits image generation, as in (Du & Mordatch, 2019). For the EBM model, we use a simple CNN model composed of four convolutional layers followed by a linear layer. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics Markov chain Monte Carlo (MCMC) and a sampling buffer to accelerate training. The full details are available in the supplementary material. 
In this example, the features, i.e., the latent representation obtained at the last intermediate layer, are learned in an end-to-end way. We evaluate the performance of our approach by augmenting the contrastive divergence loss using equation 16 to penalize the feature redundancy. We quantitatively evaluate image quality of EBMs with ‘Fréchet Inception Distance’ (FID) score (Heusel et al., 2017) and the negative log-likelihood (NLL) loss in Table 1 for different values of β. We note that we obtain consistently better FID and NLL scores by penalizing the similarity of the learned features. The best performance is achieved by β = 1e−13, which yields more than 10%, in terms of FID, improvement compared to the original EBM model. To gain insights into the visual performance of our approach, we plot a few intermediate samples of the MCMC sampling (Langevin Dynamics). The results obtained by the EBM with β = 1e−13 are presented in Figure 3. Initiating from random noise, MCMC obtains reasonable figures after only 64 steps. The digits get clearer and more realistic over the iterations. More results are presented in the supplementary material. 3.3 CONTINUAL LEARNING EXAMPLE In this subsection, we validate the proposed regularizer on the Continual Learning (CL) problem. CL tackles the problem of catastrophic forgetting in deep learning models (Parisi et al., 2019; Li & Hoiem, 2017; Shibata et al., 2021). Its main goal is to solve several tasks sequentially without forgetting knowledge learned from the past. So, a continual learner is expected to learn a new task, crucially, without forgetting previous tasks. Recently, an EBM-based CL approach was proposed in (Li et al., 2020) and led to superior results compared to standard approaches. We use the same models and the same experimental protocol used in (Li et al., 2020). However, here we focus only on the class-incremental learning task using CIFAR10 and CIFAR100. We evaluate the performance of our proposed regularizer using both the boundary-aware and boundary-agnostic settings. As defined in (Li et al., 2020), the boundary-aware refers to the situation where the sequence of the tasks has explicit separation between them which is known to the model. The boundary agnostic case refers to the situation where the data distributions gradually changes without a notion of task boundaries. Similar to Section 3.2, we consider as ’features’ the representation obtained by the last intermediate layer. The proposed regularizer is applied on top of this representation. In Table 2, we report the performance of the EBM trained using the original loss and using the loss augmented with our additional term for different values of β. As shown in Table 2, penalizing feature similarity and promoting the diversity of the feature set boosts the performance of the EBM model and consistently leads to a superior accuracy for both datasets. In Figure 4, we display the accumulated classification accuracy, averaged over tasks, on the test set. Along the five tasks, our approach maintains higher classification accuracy than the standard EBM for both the boundary-aware and boundary-agnostic settings. 4 CONCLUSION Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative systems. An EBM is typically formed of one (or many) inner models which learn a combination of different features to generate an energy mapping for each input configuration. 
In this paper, we introduced a feature diversity concept, i.e., (ϑ − τ )-diversity, and we used it to extend the PAC theory of EBMs. We derived different generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we consistently found that reducing the redundancy of the feature set can improve the generalization error of energy-based approaches. We also note that our theory is independent of the loss function or the training strategy used to optimize the parameters of the EBM. This provides theoretical guarantees on learning via feature redundancy reduction. Our preliminary experimental results confirm that this is indeed a promising research direction and can motivate developing other approaches to promoting the diversity of the feature set. Future direction include more extensive experimental evaluation of different feature redundancy reduction approaches. A PROOF OF LEMMA 3 Lemma With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2), (18) where A = supx ||ϕ(x)||2. Proof. h2(x) = ( D∑ i=1 wiϕi(x) )2 ≤ ( D∑ i=1 ||w||∞ϕi(x) )2 = ||w||2∞ ( D∑ i=1 ϕi(x) )2 = ||w||2∞ (∑ i,j ϕi(x)ϕj(x) ) = ||w||2∞ ∑ i ϕi(x) 2 + ∑ i ̸=j ϕi(x)ϕj(x) (19) We have ||Φ(x)||2 ≤ A. For the first term in equation 19, we have ∑ m ϕm(x) 2 ≤ A2. By using the identity ϕm(x)ϕn(x) = 12 ( ϕm(x) 2 + ϕn(x) 2 − (ϕm(x)− ϕn(x))2 ) , the second term can be rewritten as∑ m ̸=n ϕm(x)ϕn(x) = 1 2 ∑ m̸=n ( ϕm(x) 2 + ϕn(x) 2 − ( ϕm(x)− ϕn(x) )2) . (20) In addition, we have with a probability τ , 12 ∑ m ̸=n(ϕm(x)− ϕn(x))2 ≥ ϑ2. Thus, we have with a probability at least τ :∑ m̸=n ϕm(x)ϕn(x) ≤ 1 2 (2(D − 1)A2 − 2ϑ2) = (D − 1)A2 − ϑ2. (21) By putting everything back to equation 19, we have with a probability τ , G2W (x) ≤ ||w||2∞ ( A2 + (D − 1)A2 − ϑ2 ) = ||w||2∞(DA2 − ϑ2). (22) Thus, with a probability τ , sup x,W |h(x)| ≤ √ sup x,W G2W (x) ≤ ||w||∞ √ DA2 − ϑ2. (23) B PROOF OF LEMMA 4 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (24) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h,x,y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. C PROOF OF LEMMA 5 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F) (25) Proof. Using the decomposition property of the Rademacher complexity (if ϕ is a L-Lipschitz function, then Rm(ϕ(A)) ≤ LRm(A)) and given that 12 ||. − y|| 2 is K-Lipschitz with a constant K = supx,y,h||h(x) − y|| ≤ (||w||∞ √ DA2 − ϑ2 + B), we have Rm(E) ≤ KRm(H) = (||w||∞ √ DA2 − ϑ2 + B)Rm(H), where H = {GW (x) = ∑D i=1 wiϕi(x) }. We also know that ||w||1 ≤ D||w||∞. Next, similar to the proof of Theorem 2.10 in (Wolf, 2018), we note that ∑D i=1 wiϕi(x) ∈ (D||w||∞)conv(F + −(F)) := G, where conv denotes the convex hull and F is the set of ϕ functions. Thus, Rm(H) ≤ Rm(G) = D||w||∞Rm(conv(F + (−F)) = D||w||∞Rm(F + (−F)) = 2D||w||∞Rm(F). 
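As a quick empirical sanity check of Lemma 3, the sketch below generates random non-negative features (the positive feature set assumed in Section 2.1), takes A as the largest observed feature norm and ϑ² as the smallest observed pairwise-distance sum, and verifies that ||w||∞ √(DA² − ϑ²) dominates |h(x)| on the sample. The constants and sample sizes are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 16, 2000

# Random non-negative features, matching the positive feature set assumption.
Phi = np.abs(rng.normal(size=(n, D)))
A = np.linalg.norm(Phi, axis=1).max()            # sup_x ||Phi(x)||_2 on the sample

# Empirical diversity: theta^2 = min_x 0.5 * sum_{i != j} (phi_i(x) - phi_j(x))^2,
# so the diversity condition holds for every sampled x (i.e., tau = 1 here).
pair = (Phi[:, :, None] - Phi[:, None, :]) ** 2
theta_sq = 0.5 * pair.sum(axis=(1, 2)).min()

w = rng.normal(size=D)
h = Phi @ w                                      # h(x) = sum_i w_i * phi_i(x)
bound = np.abs(w).max() * np.sqrt(D * A ** 2 - theta_sq)

print(f"max |h(x)| = {np.abs(h).max():.3f} <= bound = {bound:.3f}")
assert np.abs(h).max() <= bound + 1e-9
```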
D PROOF OF THEOREM 1 Theorem For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (26) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 4 and Lemma 5. E PROOF OF LEMMA 6 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (27) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). F PROOF OF LEMMA 7 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (28) Proof. |.| is 1-Lipschitz, Thus Rm(E) ≤ Rm(H). G PROOF OF THEOREM 2 Theorem For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (29) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 6 and Lemma 7. H PROOF OF THEOREM 3 Lemma 8. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ ||w||∞ √ DA2 − ϑ2. (30) Proof. We have sup−yGW (x) ≤ sup |GW (x)| ≤ ||w||∞ √ DA2 − ϑ2. Lemma 9. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (31) Proof. We note that for y ∈ {−1, 1}, σ and −yσ follow the same distribution. Thus, we have Rm(E) = Rm(H). Next, we note that Rm(H) ≤ 2D||w||∞Rm(F). Theorem 3 For a well-defined energy function E(h,x,y) (LeCun et al., 2006), over hypothesis class H, input set X and output set Y , if it has upper-bound M, then with a probability of at least 1− δ, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m , (32) Proof. We replace the variables in Lemma 1 using Lemma 8 and Lemma 9. I PROOF OF THEOREM 4 Lemma 10. With a probability of at least τ1τ2, we have sup x,y,h |E(h,x,y)| ≤ ( J1 + J2 ) (33) Proof. We have ||G(1)W (x) − G (2) W (y)||22 ≤ 2(||G (1) W (x)||22 + ||G (2) W (y)||22). Similar to Theorem 1, we have sup ||G(1)W (x)||22 ≤ ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) = J1 and sup ||G(2)W (y)||22 ≤ ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) = J2. We also have E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. Lemma 11. With a probability of at least τ1τ2, we have Rm(E) ≤ 4( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) (34) Proof. Let f be the square function, i.e., f(x) = 12x 2 and E0 = {G(1)W (x) − G (2) W (y) | x ∈ X , y ∈ Y}. We have E = f(E0 + (−E0)). f is Lipschitz over the input space, with a constant L bounded by supx,W G (1) W (x) + supy,W G (2) W (y) ≤ √ J1 + √ J2. Thus, we have Rm(E) ≤ ( √ J1 + √ J2)Rm(E0 + (−E0)) ≤ 2( √ J1 + √ J2)Rm(E0). Next, we note that Rm(E0) = Rm(H1 + (−H2)) = Rm(H1) + Rm(H2). Using same as technique as in Lemma 4, we have Rm(H1) ≤ 2D(1)||w(1)||∞Rm(F1) and Rm(H2) ≤ 2D(2)||w(2)||∞Rm(F2). 
Theorem 4 For the energy function E(h,x,y) = 12 ||G (1) W (x) − G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), G (2) W (y) =∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x ||Φ(1)(x)||2 ≤ A(1), ∀y ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) + ( J1 + J2 )√ log(2/δ) 2m , (35) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . Proof. We replace the variables in Lemma 1 using Lemma 10 and Lemma 11. J IMAGE GENERATION EXAMPLE SETTINGS AND ADDITIONAL RESULTS For the EBM model, we used a simple CNN model composed of four convolutional layers followed by a linear layer. The full CNN model is presented in Table 3. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics MCMC and a sampling buffer to accelerate training. All models were trained for 60 epochs using Adam optimizer with learning rate lr = 1e − 4 and a batch size of 128. In addition to the results presented in the paper, Figure 5 presents additional qualitative results. For the first two examples (top ones), the model is able to converge to a realistic image within reasonable amount of iterations. For the last two examples (in the bottom), we present failure cases of our approach. For these two tests, the generated image still improves over iterations. However, the model failed to converge to a clear realistic MNIST image after 256 steps.
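For completeness, here is a minimal sketch of a four-convolutional-layer energy network and the Langevin dynamics sampling step used to draw negative samples and generate images. The channel widths, activation, step size, noise scale, and number of steps are assumptions chosen for illustration; they are not the values of Table 3 or the exact UvA / Du & Mordatch (2019) settings.

```python
import torch
import torch.nn as nn

class CNNEnergy(nn.Module):
    """Four convolutional layers followed by a linear layer, mapping a
    1x28x28 MNIST image to a scalar energy (channel widths are assumed)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.SiLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),  # 14 -> 7
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),  # 7 -> 4
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.SiLU(),  # 4 -> 2
            nn.Flatten(),
        )
        self.out = nn.Linear(64 * 2 * 2, 1)

    def forward(self, x):
        return self.out(self.features(x)).squeeze(-1)

def langevin_sample(model, x, n_steps=64, step_size=10.0, noise_std=0.005):
    """Langevin dynamics MCMC: x <- x - step_size * dE/dx + Gaussian noise."""
    x = x.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        grad, = torch.autograd.grad(model(x).sum(), x)
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).detach()
        x = x.clamp(-1.0, 1.0)   # keep samples in the normalised image range
    return x
```

A call such as langevin_sample(CNNEnergy(), torch.rand(16, 1, 28, 28) * 2 - 1) would run 64 sampler steps from uniform noise; during training, chains would typically be re-started from the replay buffer mentioned above.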
1. What is the focus and contribution of the paper regarding feature diversity in PAC theory of EBMs?
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and experimental results?
3. Do you have any concerns regarding the similarity between the paper and a previous workshop paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper extends the PAC theory of EBMs to analyze the impact of feature diversity on the performance of EBMs. Generalization bounds for regression, binary classification, and implicit regression w.r.t. the feature-set redundancy are derived. Experimental results on MNIST for image generation and CIFAR10/100 for continual learning are provided.

Strengths And Weaknesses
Strength: The paper is well organized and well written. The extension of the PAC theory of EBMs of Zhang et al. to feature diversity analysis is interesting and looks solid. But frankly, it's hard for me to fully verify all the mathematical derivations given such a short ICLR review timeline.

Weakness: I have been working in this area for a while, and I happened to read a similar publication a year ago with the same title (https://openreview.net/forum?id=ks3Q08yy66rv). The theory part of the current submission is almost an exact copy-and-paste of the workshop paper above. The only new addition is the experiment section. So, to my understanding, the contribution of this paper is mainly an empirical study. The experimental evaluation is limited to small benchmarks and non-competitive baselines. For example, image generation is limited to MNIST, without considering at least CIFAR10/100 or CelebA. In terms of baseline algorithms, only the EBM algorithm of Du et al. (2019) is compared, and the latest SOTA EBM methods are ignored. Although the authors show that the gains are consistent, they seem very small. Even on the weak CL baseline, where EBM-CL's accuracies on CIFAR10/100 are about 30%-40%, the improvements are insignificant.

Clarity, Quality, Novelty And Reproducibility
The theory part of the current submission is almost the same as the workshop paper. The only new addition is the experiment section. So, to my understanding, the contribution is mainly an empirical study, and the novelty is low. The main algorithmic change is the regularization (Eq. 16), which is straightforward to implement.
ICLR
Title On Feature Diversity in Energy-based Models Abstract Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches. An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration. In this paper, we focus on the diversity of the produced feature set. We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs. We derive generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we show that indeed reducing redundancy of the feature set can consistently decrease the gap between the true and empirical expectation of the energy and boosts the performance of the model. 1 INTRODUCTION The energy-based learning paradigm was first proposed by Zhu & Mumford (1998); LeCun et al. (2006) as an alternative to probabilistic graphical models (Koller & Friedman, 2009). As their name suggests, energy-based models (EBMs) map each input ‘configuration’ to a single scalar, called the ‘energy’. In the learning phase, the parameters of the model are optimized by associating the desired configurations with small energy values and the undesired ones with higher energy values (Kumar et al., 2019; Song & Ermon, 2019; Yang et al., 2016). In the inference phase, given an incomplete input configuration, the energy surface is explored to find the remaining variables which yield the lowest energy. EBMs encapsulate solutions to several supervised approaches (LeCun et al., 2006; Fang & Liu, 2016) and unsupervised learning problems (Deng et al., 2020; Bakhtin et al., 2021; Zhao et al., 2020; Xu et al., 2022) and provide a common theoretical framework for many learning models, including traditional discriminative (Zhai et al., 2016; Li et al., 2020) and generative (Zhu & Mumford, 1998; Xie et al., 2017b; Zhao et al., 2017; Che et al., 2020; Khalifa et al., 2021) approaches. Formally, let us denote the energy function by E(h,x,y), where h = GW (x) represents the model with parameters W to be optimized during training and x,y are sets of variables. Figure 1 illustrates how classification, regression, and implicit regression can be expressed as EBMs. In Figure 1 (a), a regression scenario is presented. The input x, e.g., an image, is transformed using an inner model GW (x) and its distance, to the second input y is computed yielding the energy function. A valid energy function in this case can be the L1 or the L2 distance. In the binary classification case (Figure 1 (b)), the energy can be defined as E(h,x,y) = −yGW (x) . In the implicit regression case (Figure 1 (c)), we have two inner models and the energy can be defined as the L2 distance between their outputs E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. In the inference phase, given an input x, the label y∗ can be obtained by solving the following optimization problem: y∗ = argmin y E(h,x,y). (1) An EBM typically relies on an inner model, i.e., GW (x), to generate the desired energy landscape (LeCun et al., 2006). Depending on the problem at hand, this function can be constructed as a linear projection, a kernel method, or a neural network and its parameters are optimized in a data-driven manner in the training phase. 
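To make the three settings of Figure 1 concrete, the snippet below writes out the corresponding energy functions and the argmin inference of equation 1 over a finite candidate set. The inner models g, g1, g2 stand for any GW (e.g., a neural network); the function names and shapes are purely illustrative assumptions.

```python
import torch

def regression_energy_l2(g, x, y):
    # E(h, x, y) = 1/2 * ||G_W(x) - y||_2^2 ; g(x) and y have shape (batch, d)
    return 0.5 * (g(x) - y).pow(2).sum(dim=-1)

def regression_energy_l1(g, x, y):
    # E(h, x, y) = ||G_W(x) - y||_1
    return (g(x) - y).abs().sum(dim=-1)

def binary_classification_energy(g, x, y):
    # E(h, x, y) = -y * G_W(x) ; g(x) has shape (batch,), y in {-1, +1}
    return -y * g(x)

def implicit_regression_energy(g1, g2, x, y):
    # E(h, x, y) = 1/2 * ||G1_W(x) - G2_W(y)||_2^2
    return 0.5 * (g1(x) - g2(y)).pow(2).sum(dim=-1)

def infer_label(g, x, candidates):
    # Inference (equation 1): y* = argmin_y E(h, x, y) over a finite candidate set.
    energies = torch.stack([binary_classification_energy(g, x, y) for y in candidates])
    return candidates[energies.argmin(dim=0)]
```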
Formally, GW (x) can be written as GW (x) = D∑ i wiϕi(x), (2) where {ϕ1(·), · · · , ϕD(·)} is the feature set, which can be hand-crafted, separately trained from unlabeled data (Zhang & LeCun, 2017), or modeled by a neural network and optimized in the training phase of the EBM model (Xie et al., 2016; Yu et al., 2020; Xie et al., 2021). In the rest of the paper, we assume that the inner models GW defined in the energy-based learning system (Figure 1) are obtained as a weighted sum of different features as expressed in equation 2. In (Zhang, 2013), it was shown that simply minimizing the empirical energy over the training data does not theoretically guarantee the minimization of the expected value of the true energy. Thus, developing and motivating novel regularization techniques is required (Zhang & LeCun, 2017). We argue that the quality of the feature set {ϕ1(·), · · · , ϕD(·)} plays a critical role in the overall performance of the global model. In this work, we extend the theoretical analysis of (Zhang, 2013) and focus on the ‘diversity’ of this set and its effect on the generalization ability of the EBM models. Intuitively, it is clear that a less correlated set of intermediate representations is richer and thus able to capture more complex patterns in the input. Thus, it is important to avoid redundant features for achieving a better performance. However, a theoretical analysis is missing. We start by quantifying the diversity of a set of feature functions. To this end, we introduce ϑ− τ -diversity: Definition 1 ((ϑ− τ )-diversity). A set of feature functions, {ϕ1(·), · · · , ϕD(·)} is called ϑ-diverse, if there exists a constant ϑ ∈ R, such that for every input x we have 1 2 D∑ i ̸=j (ϕi(x)− ϕj(x))2 ≥ ϑ2 (3) with a high probability τ . Intuitively, if two feature maps ϕi(·) and ϕj(·) are non-redundant, they have different outputs for the same input with a high probability. However, if, for example, the features are extracted using a neural network with a ReLU activation function, there is a high probability that some of the features associated with the input will be zero. Thus, defining a lower bound for the pair-wise diversity directly is impractical. Therefore, we quantify diversity as the lower-bound over the sum of the pair-wise distances of the feature maps as expressed in equation 3 and ϑ measures the diversity of a set. In machine learning context, diversity has been explored in ensemble learning (Li et al., 2012; Yu et al., 2011; Li et al., 2017), sampling (Derezinski et al., 2019; Bıyık et al., 2019), ranking (Wu et al., 2019; Qin & Zhu, 2013), pruning (Singh et al., 2020; Lee et al., 2020), and neural networks (Xie et al., 2015; Shen et al., 2021). In Xie et al. (2015; 2017a), it was shown theoretically and experimentally that avoiding redundancy over the weights of a neural network using the mutual angles as a diversity measure improves the generalization ability of the model. In this work, we explore a new line of research, where diversity is defined over the feature maps directly, using the (ϑ− τ )-diversity, in the context of energy-based learning. In (Zhao et al., 2017), a similar idea was empirically explored. A “repelling regularizer” was proposed to force non-redundant or orthogonal feature representations. Moreover, the idea of learning while avoiding redundancy has been used recently in the context of semi-supervised learning (Zbontar et al., 2021; Bardes et al., 2021). 
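Definition 1 above can be estimated directly on a sample: compute ½ Σ_{i≠j}(ϕi(x) − ϕj(x))² for every input and take the empirical (1 − τ)-quantile as ϑ². The sketch below does this for randomly generated features; the array shapes and the value of τ are illustrative assumptions.

```python
import numpy as np

def empirical_diversity(features, tau=0.95):
    """Estimate theta^2 from Definition 1 on a sample.

    features: array of shape (n_samples, D) whose rows are (phi_1(x), ..., phi_D(x)).
    Returns (approximately) the largest theta^2 such that
        0.5 * sum_{i != j} (phi_i(x) - phi_j(x))^2 >= theta^2
    holds for at least a fraction tau of the sample.
    """
    diffs = features[:, :, None] - features[:, None, :]   # (n, D, D)
    per_input = 0.5 * (diffs ** 2).sum(axis=(1, 2))       # one diversity value per x
    return np.quantile(per_input, 1.0 - tau)

# Illustration on random non-negative features (sizes and tau are arbitrary).
phi = np.abs(np.random.default_rng(0).normal(size=(2000, 32)))
print("estimated theta^2 at tau = 0.95:", empirical_diversity(phi))
```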
Reducing redundancy by minimizing the cross-correlation of features learned using a Siamese network (Zbontar et al., 2021) was empirically shown to improve the generalization ability, yet a theoretical analysis to prove this has so far been lacking. In this paper, we close the gap between empirical experience and theory. We theoretically study the generalization ability of EBMs in different learning contexts, i.e., regression, classification, implicit regression, and we derive new generalization bounds using the (ϑ−τ )-diversity providing theoretical guarantees that avoiding redundancy indeed improves the generalization ability of the model. The contributions of this paper can be summarized as follows: • We explore a new line of research, where diversity is defined over the features representing the input data and not over the model’s parameters. To this end, we introduce (ϑ − τ )- diversity as a quantification of the diversity of a given feature set. • We extend the theoretical analysis (Zhang, 2013) and study the effect of avoiding redundancy of a feature set on the generalization of EBMs (Lemmas 3 to 7 and Theorem 1 to 5). • We derive bounds for the expectation of the true energy in different learning contexts, i.e., regression, classification, and implicit regression, using different energy functions. Our analysis consistently shows that avoiding redundancy by increasing the diversity of the feature set can boost the performance of an EBM. 2 PAC-LEARNING OF EBMS WITH (ϑ− τ )-DIVERSITY In this section, we derive a qualitative justification for (ϑ−τ )-diversity using probably approximately correct (PAC) learning (Valiant, 1984; Mohri et al., 2018; Li et al., 2019). The PAC-based theory for standard EBMs has been established in (Zhang, 2013). First, we start by defining Rademacher complexity: Definition 2. (Bartlett & Mendelson, 2002; Mohri et al., 2018) For a given dataset with m samples S = {xi, yi}mi=1 from a distribution D and for a model space F : X → R with a single dimensional output, the Empirical Rademacher complexity R̂m(F) of the set F is defined as follows: R̂m(F) = Eσ [ sup f∈F 1 m m∑ i=1 σif(xi) ] , (4) where the Rademacher variables σ = {σ1, · · · , σm} are independent uniform random variables in {−1, 1}. The Rademacher complexity Rm(F) is defined as the expectation of the Empirical Rademacher complexity over training set, i.e., Rm(F) = ES∼Dm [R̂m(F)]. Based on this quantity, (Bartlett & Mendelson, 2002), several learning guarantees for EBMs have been shown (Zhang, 2013). We recall the following two lemmas related to the estimation error and the Rademacher complexity. In Lemma 2, we present the principal PAC-learning bound for energy functions with finite outputs. Lemma 1. (Wolf, 2018) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. We have Rm(A) ≤ LgRm(F). (5) Lemma 2. (Zhang, 2013) For a well-defined energy function E(h,x,y) over hypothesis class H, input set X and output set Y (LeCun et al., 2006), the following holds for all h in H with a probability of at least 1− δ E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 2Rm(E) +M √ log(2/δ) 2m , (6) where E is the energy function class defined as E = {E(h,x,y)|h ∈ H}, Rm(E) is its Rademacher complexity, and M is the upper bound of E . Lemma 2 provides a generalization bound for EBMs with well-defined (non-negative) and bounded energy. 
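Definition 2 can be approximated by Monte Carlo for simple function classes. The sketch below estimates the empirical Rademacher complexity of the class of linear functions f(x) = v·x with ||v||₂ ≤ 1, for which the supremum has the closed form ||Σᵢ σᵢ xᵢ||₂ / m; the data and sample sizes are arbitrary illustrations, not quantities used in the paper.

```python
import numpy as np

def rademacher_linear(X, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity (Definition 2)
    of the linear class {x -> v.x : ||v||_2 <= 1} on a sample X of shape (m, d).
    For this class sup_f (1/m) sum_i sigma_i f(x_i) = ||sum_i sigma_i x_i||_2 / m."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))    # Rademacher signs
    sup_values = np.linalg.norm(sigma @ X, axis=1) / m
    return sup_values.mean()                              # average over sign draws

X = np.random.default_rng(1).normal(size=(200, 10))
print("estimated R_hat_m:", rademacher_linear(X))         # shrinks roughly as 1/sqrt(m)
```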
The expected energy is bounded using the sum of three terms: The first term is the empirical expectation of energy over the training data, the second term depends on the Rademacher complexity of the energy class, and the third term involves the number of the training data m and the upperbound of the energy function M . This shows that merely minimizing the empirical expectation of energy, i.e., the first term, may not yield a good approximation of the true expectation. In (Zhang & LeCun, 2017), it has been shown that regularization using unlabeled data reduces the second and third terms leading to better generalization. In this work, we express these two terms using the (ϑ− τ )-diversity and show that employing a diversity strategy may also decrease the gap between the true and empirical expectation of the energy. In Section 2.1, we consider the special case of regression and derive two bounds for two energy functions based on L1 and L2 distances. In Section 2.2, we derive a bound for the binary classification task using as energy function E(h,x,y) = −yGW (x) (LeCun et al., 2006). In Section 2.3, we consider the case of implicit regression, which encapsulates different learning problems such as metric learning, generative models, and denoising (LeCun et al., 2006). For this case, we use the L2 distance between the inner models as the energy function. In the rest of the paper, we denote the generalization gap, E(x,y)∼D[E(h,x,y)]− 1m ∑ (x,y)∈S E(h,x,y) by ∆D,SE. All the proofs are presented in the supplementary material. 2.1 REGRESSION TASK Regression can be formulated as an energy-based learning problem (Figure 1 (a)) using the inner model h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x). We assume that the feature set is positive and well-defined over the input domain X , i.e., ∀x ∈ X : ||Φ(x)||2 ≤ A, the hypothesis class can be defined as follows: H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, the output set Y ⊂ R is bounded, i.e., y < B, and the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ . The two valid energy functions which can be used for regression are E2(h,x,y) = 12 ||GW (x)−y|| 2 2 and E1(h,x,y) = ||GW (x)−y||1 (LeCun et al., 2006). We study these two cases separately and we show theoretically that for both energy functions avoiding redundancy improves generalization of the EBM model. ENERGY FUNCTION: E2 In this subsection, we present our theoretical analysis on the effect of diversity on the generalization ability of an EBM defined with the energy function E2(h,x,y) = 12 ||GW (x) − y|| 2 2. We start by the following two Lemmas 3 and 4. Lemma 3. With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2). (7) Lemma 4. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (8) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h, x, y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. Lemmas 3 and 4 bound the supremum of the output of the inner model and the energy function as a function of ϑ, respectively. As it can been seen, both terms are decreasing with respect to diversity. Next, we bound the Rademacher complexity of the energy class, i.e., Rm(E). Lemma 5. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F). (9) Lemma 5 expresses the bound of the Rademacher complexity of the energy class using the diversity constant and the Rademacher complexity of the features. 
Having expressed the different terms of Lemma 2 using diversity, we now present our main result for an energy-basel model trained defined using E2. The main result is presented in Theorem 1. Theorem 1. For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (10) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Theorem 1 express the special case of Lemma 2 using the (ϑ − τ )-diversity of the feature set {ϕ1(·), · · · , ϕD(·)}. As it can been seen, the bound of the generalization error is inversely proportional to ϑ2. This theoretically shows that reducing redundancy, i.e., increasing ϑ, reduces the gap between the true and the empirical energies and improves the generalization performance of the EBMs. ENERGY FUNCTION: E1 In this subsection, we consider the second case of regression using the energy function E1(h,x,y) = ||GW (x) − y||1. Similar to the previous case, we start by deriving bounds for the energy function and the Rademacher complexity of the class using diversity in Lemmas 6 and 7. Lemma 6. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (11) Lemma 7. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F). (12) Next, we derive the main result of the generalization of the EBMs defined using the energy function E1. The main finding is presented in Theorem 2. Theorem 2. For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (13) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Similar to Theorem 1, in Theorem 2, we consistently find that the bound of the true expectation of the energy is a decreasing function with respect to ϑ. This proves that for the regression task reducing redundancy can improve the generalization performance of the energy-based model. 2.2 BINARY CLASSIFIER Here, we consider the problem of binary classification, as illustrated in Figure 1 (b). Using the same assumption as in regression for the inner model, i.e., h(x) = GW (x) = ∑D i=1 wiϕi(x) = wTΦ(x), energy function of E(h,x,y) = −yGW (x) (LeCun et al., 2006), and the (ϑ−τ )-diversity of the feature set, we express Lemma 2 for this specific configuration in Theorem 3. Theorem 3. For the energy function E(h,x,y) = −yGW (x), over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m . (14) Similar to the regression task, we note that the upper-bound of the true expectation is a decreasing function with respect to the diversity term. 
Thus, a less redundant feature set, i.e., higher ϑ, has a lower upper-bound for the true energy. 2.3 IMPLICIT REGRESSION In this section, we consider the problem of implicit regression. This is a general formulation of a different set of problems such as metric learning, where the goal is to learn a distance function between two domains, image denoising, object detection as illustrated in (LeCun et al., 2006), or semi-supervised learning (Zbontar et al., 2021). This form of EBM (Figure 1 (c)) has two inner models, G1W (·) and G2W (·), which can be equal or different according to the problem at hand. Here, we consider the general case, where the two models correspond to two different combinations of different features, i.e., G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) and G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y). Thus, we have a different (ϑ− τ )-diversity term for each set. The final result is presented in Theorem 4. Theorem 4. For the energy function E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {h(1)(x) = G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), h(2)(x) = G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x : ||Φ(1)(x)||2 ≤ A(1), ∀y : ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H: ∆D,SE ≤ 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) +(J1 + J2) √ log(2/δ) 2m , (15) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . The upper-bound of the energy model depends on the diversity variable of both feature sets. Moreover, we note that the bound for the implicit regression decreases proportionally to ϑ2, as opposed to the classification case for example, where the bound is proportional to ϑ. Thus, we can conclude that reducing redundancy improves the generalization of EBM in the implicit regression context. 2.4 GENERAL DISCUSSION We note that the theory developed in our paper (Theorems 1 to 4) is agnostic to the loss function (LeCun et al., 2006) or the optimization strategy used (Kumar et al., 2019; Song & Ermon, 2019; Yu et al., 2020; Xu et al., 2022). We show that reducing the redundancy of the features consistently decreases the upper-bound of the true expectation of the energy and, thus, can boost the generalization performance of the energy-based model. It also should be noted that A, i.e., the upper bound of the features and ϑ are connected. But our findings can be interpreted as follows: given two models with the same value of A (maximum L2norm of the features), the model with higher diversity ϑ has a lower generalization bound and is likely to generalize better. We note that our analysis is independent of how the features are obtained, e.g., handcrafted or optimized. In fact, in the recent state-of-the-art EBMs (Khalifa et al., 2021; Bakhtin et al., 2021; Yu et al., 2020), the features are typically parameterized using a deep learning model and optimized during training. Our contribution is twofold. First, we provide theoretical guarantees that reducing redundancy in the feature space can indeed improve the generalization of the EBM. 
This can pave the way toward providing theoretical guarantees for WORKS ON SELF-SUPERVISED LEARNING using redundancy reduction Zbontar et al. (2021); Bardes et al. (2021); Zhao et al. (2017). Second, our theory can be used to motivate novel redundancy reduction strategies, for example, in the form of regularization, to avoid learning redundant features. Such strategies can improve the performance of the model and improve generalization. 3 SIMPLE REGULARIZATION ALGORITHM In general, theoretical generalization bounds can be too loose to be direct practical implications (Zhang et al., 2017; Neyshabur et al., 2017). However, they typically suggest a regularizer to promote some desired aspects of the hypothesis class (Xie et al., 2015; Li et al., 2019; Kawaguchi et al., 2017). Accordingly, inspired by the theoretical analysis in Section 2, we propose a straightforward strategy to avoid learning redundant features by regularizing the model during the training using a term inversely proportional to ϑ− τ -diversity of the features. Given an EBM model with a learnable feature set {ϕ1(·), · · · , ϕD(·)} and a training set S, we propose to augment the original training loss L as follows: Laug = L− β ∑ x∈S D∑ i̸=j (ϕi(x)− ϕj(x))2, (16) where β is a hyper-parameter controlling the contribution of the second term in the total loss. The additional term penalizes the similarities between the distinct features ensuring learning a diverse and non-redundant mapping of the data. As a result, this can improve the general performance of our model. 3.1 TOY EXAMPLE We test our regularization strategy first using a toy data. We use an EBM model to learn the distribution of a 2-D Swiss roll illustrated in Figure 2 (a). For the EBM, we use a fully connected neural network composed of two intermediate layers with 1000 units and ReLu activations. We train the models using Stochastic Gradient Langevin Dynamics (SGLD) sampling and the contrastive divergence-like algorithm proposed in (Du & Mordatch, 2019). The total objective of the standard EBM is expressed as follows: L = 1 N ∑ n ( α ( E(x+n ) 2 + E(x−n ) 2) + E(x+n )− E(x−n ) ) , (17) where x+n denote positive samples and x − n negative samples. We augment this loss using equation 16, i.e., the features are the latent representations obtained at the last intermediate layer. The distribution learned using both the standard and the proposed approach are illustrated using the kernel density estimation (Terrell & Scott, 1992) in Figure 2. As it can be seen, avoiding redundancy boosts the performance of the EBM model. Indeed, by comparing the two learned distributions, the EBM trained with our approach led to a better approximation of the ground-truth distribution and was able to better capture the tail of the distribution as opposed to the original EBM. 3.2 IMAGE GENERATION EXAMPLE Recently, there has been a high interest in using EBMs to solve image/text generation tasks Du & Mordatch (2019); Du et al. (2021); Khalifa et al. (2021); Deng et al. (2020). In this subsection, we validate the proposed regularizer on the simple example of MNIST digits image generation, as in (Du & Mordatch, 2019). For the EBM model, we use a simple CNN model composed of four convolutional layers followed by a linear layer. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics Markov chain Monte Carlo (MCMC) and a sampling buffer to accelerate training. The full details are available in the supplementary material. 
In this example, the features, i.e., the latent representation obtained at the last intermediate layer, are learned in an end-to-end way. We evaluate the performance of our approach by augmenting the contrastive divergence loss using equation 16 to penalize the feature redundancy. We quantitatively evaluate image quality of EBMs with ‘Fréchet Inception Distance’ (FID) score (Heusel et al., 2017) and the negative log-likelihood (NLL) loss in Table 1 for different values of β. We note that we obtain consistently better FID and NLL scores by penalizing the similarity of the learned features. The best performance is achieved by β = 1e−13, which yields more than 10%, in terms of FID, improvement compared to the original EBM model. To gain insights into the visual performance of our approach, we plot a few intermediate samples of the MCMC sampling (Langevin Dynamics). The results obtained by the EBM with β = 1e−13 are presented in Figure 3. Initiating from random noise, MCMC obtains reasonable figures after only 64 steps. The digits get clearer and more realistic over the iterations. More results are presented in the supplementary material. 3.3 CONTINUAL LEARNING EXAMPLE In this subsection, we validate the proposed regularizer on the Continual Learning (CL) problem. CL tackles the problem of catastrophic forgetting in deep learning models (Parisi et al., 2019; Li & Hoiem, 2017; Shibata et al., 2021). Its main goal is to solve several tasks sequentially without forgetting knowledge learned from the past. So, a continual learner is expected to learn a new task, crucially, without forgetting previous tasks. Recently, an EBM-based CL approach was proposed in (Li et al., 2020) and led to superior results compared to standard approaches. We use the same models and the same experimental protocol used in (Li et al., 2020). However, here we focus only on the class-incremental learning task using CIFAR10 and CIFAR100. We evaluate the performance of our proposed regularizer using both the boundary-aware and boundary-agnostic settings. As defined in (Li et al., 2020), the boundary-aware refers to the situation where the sequence of the tasks has explicit separation between them which is known to the model. The boundary agnostic case refers to the situation where the data distributions gradually changes without a notion of task boundaries. Similar to Section 3.2, we consider as ’features’ the representation obtained by the last intermediate layer. The proposed regularizer is applied on top of this representation. In Table 2, we report the performance of the EBM trained using the original loss and using the loss augmented with our additional term for different values of β. As shown in Table 2, penalizing feature similarity and promoting the diversity of the feature set boosts the performance of the EBM model and consistently leads to a superior accuracy for both datasets. In Figure 4, we display the accumulated classification accuracy, averaged over tasks, on the test set. Along the five tasks, our approach maintains higher classification accuracy than the standard EBM for both the boundary-aware and boundary-agnostic settings. 4 CONCLUSION Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative systems. An EBM is typically formed of one (or many) inner models which learn a combination of different features to generate an energy mapping for each input configuration. 
In this paper, we introduced a feature diversity concept, i.e., (ϑ − τ )-diversity, and we used it to extend the PAC theory of EBMs. We derived different generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we consistently found that reducing the redundancy of the feature set can improve the generalization error of energy-based approaches. We also note that our theory is independent of the loss function or the training strategy used to optimize the parameters of the EBM. This provides theoretical guarantees on learning via feature redundancy reduction. Our preliminary experimental results confirm that this is indeed a promising research direction and can motivate developing other approaches to promoting the diversity of the feature set. Future direction include more extensive experimental evaluation of different feature redundancy reduction approaches. A PROOF OF LEMMA 3 Lemma With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2), (18) where A = supx ||ϕ(x)||2. Proof. h2(x) = ( D∑ i=1 wiϕi(x) )2 ≤ ( D∑ i=1 ||w||∞ϕi(x) )2 = ||w||2∞ ( D∑ i=1 ϕi(x) )2 = ||w||2∞ (∑ i,j ϕi(x)ϕj(x) ) = ||w||2∞ ∑ i ϕi(x) 2 + ∑ i ̸=j ϕi(x)ϕj(x) (19) We have ||Φ(x)||2 ≤ A. For the first term in equation 19, we have ∑ m ϕm(x) 2 ≤ A2. By using the identity ϕm(x)ϕn(x) = 12 ( ϕm(x) 2 + ϕn(x) 2 − (ϕm(x)− ϕn(x))2 ) , the second term can be rewritten as∑ m ̸=n ϕm(x)ϕn(x) = 1 2 ∑ m̸=n ( ϕm(x) 2 + ϕn(x) 2 − ( ϕm(x)− ϕn(x) )2) . (20) In addition, we have with a probability τ , 12 ∑ m ̸=n(ϕm(x)− ϕn(x))2 ≥ ϑ2. Thus, we have with a probability at least τ :∑ m̸=n ϕm(x)ϕn(x) ≤ 1 2 (2(D − 1)A2 − 2ϑ2) = (D − 1)A2 − ϑ2. (21) By putting everything back to equation 19, we have with a probability τ , G2W (x) ≤ ||w||2∞ ( A2 + (D − 1)A2 − ϑ2 ) = ||w||2∞(DA2 − ϑ2). (22) Thus, with a probability τ , sup x,W |h(x)| ≤ √ sup x,W G2W (x) ≤ ||w||∞ √ DA2 − ϑ2. (23) B PROOF OF LEMMA 4 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (24) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h,x,y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. C PROOF OF LEMMA 5 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F) (25) Proof. Using the decomposition property of the Rademacher complexity (if ϕ is a L-Lipschitz function, then Rm(ϕ(A)) ≤ LRm(A)) and given that 12 ||. − y|| 2 is K-Lipschitz with a constant K = supx,y,h||h(x) − y|| ≤ (||w||∞ √ DA2 − ϑ2 + B), we have Rm(E) ≤ KRm(H) = (||w||∞ √ DA2 − ϑ2 + B)Rm(H), where H = {GW (x) = ∑D i=1 wiϕi(x) }. We also know that ||w||1 ≤ D||w||∞. Next, similar to the proof of Theorem 2.10 in (Wolf, 2018), we note that ∑D i=1 wiϕi(x) ∈ (D||w||∞)conv(F + −(F)) := G, where conv denotes the convex hull and F is the set of ϕ functions. Thus, Rm(H) ≤ Rm(G) = D||w||∞Rm(conv(F + (−F)) = D||w||∞Rm(F + (−F)) = 2D||w||∞Rm(F). 
D PROOF OF THEOREM 1 Theorem For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (26) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 4 and Lemma 5. E PROOF OF LEMMA 6 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (27) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). F PROOF OF LEMMA 7 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (28) Proof. |.| is 1-Lipschitz, Thus Rm(E) ≤ Rm(H). G PROOF OF THEOREM 2 Theorem For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (29) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 6 and Lemma 7. H PROOF OF THEOREM 3 Lemma 8. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ ||w||∞ √ DA2 − ϑ2. (30) Proof. We have sup−yGW (x) ≤ sup |GW (x)| ≤ ||w||∞ √ DA2 − ϑ2. Lemma 9. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (31) Proof. We note that for y ∈ {−1, 1}, σ and −yσ follow the same distribution. Thus, we have Rm(E) = Rm(H). Next, we note that Rm(H) ≤ 2D||w||∞Rm(F). Theorem 3 For a well-defined energy function E(h,x,y) (LeCun et al., 2006), over hypothesis class H, input set X and output set Y , if it has upper-bound M, then with a probability of at least 1− δ, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m , (32) Proof. We replace the variables in Lemma 1 using Lemma 8 and Lemma 9. I PROOF OF THEOREM 4 Lemma 10. With a probability of at least τ1τ2, we have sup x,y,h |E(h,x,y)| ≤ ( J1 + J2 ) (33) Proof. We have ||G(1)W (x) − G (2) W (y)||22 ≤ 2(||G (1) W (x)||22 + ||G (2) W (y)||22). Similar to Theorem 1, we have sup ||G(1)W (x)||22 ≤ ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) = J1 and sup ||G(2)W (y)||22 ≤ ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) = J2. We also have E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. Lemma 11. With a probability of at least τ1τ2, we have Rm(E) ≤ 4( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) (34) Proof. Let f be the square function, i.e., f(x) = 12x 2 and E0 = {G(1)W (x) − G (2) W (y) | x ∈ X , y ∈ Y}. We have E = f(E0 + (−E0)). f is Lipschitz over the input space, with a constant L bounded by supx,W G (1) W (x) + supy,W G (2) W (y) ≤ √ J1 + √ J2. Thus, we have Rm(E) ≤ ( √ J1 + √ J2)Rm(E0 + (−E0)) ≤ 2( √ J1 + √ J2)Rm(E0). Next, we note that Rm(E0) = Rm(H1 + (−H2)) = Rm(H1) + Rm(H2). Using same as technique as in Lemma 4, we have Rm(H1) ≤ 2D(1)||w(1)||∞Rm(F1) and Rm(H2) ≤ 2D(2)||w(2)||∞Rm(F2). 
Theorem 4 For the energy function E(h,x,y) = 12 ||G (1) W (x) − G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), G (2) W (y) =∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x ||Φ(1)(x)||2 ≤ A(1), ∀y ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) + ( J1 + J2 )√ log(2/δ) 2m , (35) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . Proof. We replace the variables in Lemma 1 using Lemma 10 and Lemma 11. J IMAGE GENERATION EXAMPLE SETTINGS AND ADDITIONAL RESULTS For the EBM model, we used a simple CNN model composed of four convolutional layers followed by a linear layer. The full CNN model is presented in Table 3. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics MCMC and a sampling buffer to accelerate training. All models were trained for 60 epochs using Adam optimizer with learning rate lr = 1e − 4 and a batch size of 128. In addition to the results presented in the paper, Figure 5 presents additional qualitative results. For the first two examples (top ones), the model is able to converge to a realistic image within reasonable amount of iterations. For the last two examples (in the bottom), we present failure cases of our approach. For these two tests, the generated image still improves over iterations. However, the model failed to converge to a clear realistic MNIST image after 256 steps.
1. What is the main contribution of the paper regarding energy-based approaches?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its simplicity and experimental results?
3. Do you have any concerns about the definition of v-diversity and its representation of feature set diversity?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or inconsistencies in the paper's theory and experiments, especially regarding image generation and continual learning tasks?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a unifying PAC-theory proof showing the potential of improving the generalization error of energy-based approaches by reducing the redundancy of the feature set. It combines the Rademacher complexity and the definition of ϑ-diversity to compute an upper bound on the gap between the true and empirical expectation of the energy. Some experiments are conducted to verify its claims.

Strengths And Weaknesses
Strengths:
Well-organized paper with strong motivation.
The proof framework is unifying, explicit, and easy to follow.
The proposed simple regularization method is easy to implement and validate, and the experimental results are promising.

Weaknesses:
The empirical evidence is not strong enough to show a direct correlation between the generalization error of energy-based approaches and the ϑ-diversity defined in the paper. On the one hand, the proposed regularization algorithm is only tested on some simple image datasets, e.g., MNIST and CIFAR. On the other hand, the paper only reports limited experimental results on image generation and continual learning, which are inconsistent with the theory part, which covers regression, classification, and implicit regression. Therefore, in order to prove the consistency of theory and practice, the authors should either provide a PAC-theory proof for image generation and continual learning, or conduct experiments on the regression, classification, and implicit regression tasks and report detailed results compared with standard energy-based approaches.
The definition of ϑ-diversity is not well explained. I do not think it can represent the diversity of the feature set since ϕi(x) and ϕj(x) are at different positions of the feature set. Besides, in the paper all the upper bounds are correlated with (DA² − ϑ²), and the authors claim that we can reduce the upper bounds by increasing ϑ. However, as ϑ increases, A also has an upward tendency, since increasing ϑ can increase the magnitude of ϕi(x). Therefore, I don't think these upper bounds can explain the role of ϑ-diversity.

Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow, while the experimental evaluations are weak and insufficient.
ICLR
Title On Feature Diversity in Energy-based Models Abstract Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative approaches. An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration. In this paper, we focus on the diversity of the produced feature set. We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs. We derive generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we show that indeed reducing redundancy of the feature set can consistently decrease the gap between the true and empirical expectation of the energy and boosts the performance of the model. 1 INTRODUCTION The energy-based learning paradigm was first proposed by Zhu & Mumford (1998); LeCun et al. (2006) as an alternative to probabilistic graphical models (Koller & Friedman, 2009). As their name suggests, energy-based models (EBMs) map each input ‘configuration’ to a single scalar, called the ‘energy’. In the learning phase, the parameters of the model are optimized by associating the desired configurations with small energy values and the undesired ones with higher energy values (Kumar et al., 2019; Song & Ermon, 2019; Yang et al., 2016). In the inference phase, given an incomplete input configuration, the energy surface is explored to find the remaining variables which yield the lowest energy. EBMs encapsulate solutions to several supervised approaches (LeCun et al., 2006; Fang & Liu, 2016) and unsupervised learning problems (Deng et al., 2020; Bakhtin et al., 2021; Zhao et al., 2020; Xu et al., 2022) and provide a common theoretical framework for many learning models, including traditional discriminative (Zhai et al., 2016; Li et al., 2020) and generative (Zhu & Mumford, 1998; Xie et al., 2017b; Zhao et al., 2017; Che et al., 2020; Khalifa et al., 2021) approaches. Formally, let us denote the energy function by E(h,x,y), where h = GW (x) represents the model with parameters W to be optimized during training and x,y are sets of variables. Figure 1 illustrates how classification, regression, and implicit regression can be expressed as EBMs. In Figure 1 (a), a regression scenario is presented. The input x, e.g., an image, is transformed using an inner model GW (x) and its distance, to the second input y is computed yielding the energy function. A valid energy function in this case can be the L1 or the L2 distance. In the binary classification case (Figure 1 (b)), the energy can be defined as E(h,x,y) = −yGW (x) . In the implicit regression case (Figure 1 (c)), we have two inner models and the energy can be defined as the L2 distance between their outputs E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. In the inference phase, given an input x, the label y∗ can be obtained by solving the following optimization problem: y∗ = argmin y E(h,x,y). (1) An EBM typically relies on an inner model, i.e., GW (x), to generate the desired energy landscape (LeCun et al., 2006). Depending on the problem at hand, this function can be constructed as a linear projection, a kernel method, or a neural network and its parameters are optimized in a data-driven manner in the training phase. 
Formally, GW (x) can be written as GW (x) = D∑ i wiϕi(x), (2) where {ϕ1(·), · · · , ϕD(·)} is the feature set, which can be hand-crafted, separately trained from unlabeled data (Zhang & LeCun, 2017), or modeled by a neural network and optimized in the training phase of the EBM model (Xie et al., 2016; Yu et al., 2020; Xie et al., 2021). In the rest of the paper, we assume that the inner models GW defined in the energy-based learning system (Figure 1) are obtained as a weighted sum of different features as expressed in equation 2. In (Zhang, 2013), it was shown that simply minimizing the empirical energy over the training data does not theoretically guarantee the minimization of the expected value of the true energy. Thus, developing and motivating novel regularization techniques is required (Zhang & LeCun, 2017). We argue that the quality of the feature set {ϕ1(·), · · · , ϕD(·)} plays a critical role in the overall performance of the global model. In this work, we extend the theoretical analysis of (Zhang, 2013) and focus on the ‘diversity’ of this set and its effect on the generalization ability of the EBM models. Intuitively, it is clear that a less correlated set of intermediate representations is richer and thus able to capture more complex patterns in the input. Thus, it is important to avoid redundant features for achieving a better performance. However, a theoretical analysis is missing. We start by quantifying the diversity of a set of feature functions. To this end, we introduce ϑ− τ -diversity: Definition 1 ((ϑ− τ )-diversity). A set of feature functions, {ϕ1(·), · · · , ϕD(·)} is called ϑ-diverse, if there exists a constant ϑ ∈ R, such that for every input x we have 1 2 D∑ i ̸=j (ϕi(x)− ϕj(x))2 ≥ ϑ2 (3) with a high probability τ . Intuitively, if two feature maps ϕi(·) and ϕj(·) are non-redundant, they have different outputs for the same input with a high probability. However, if, for example, the features are extracted using a neural network with a ReLU activation function, there is a high probability that some of the features associated with the input will be zero. Thus, defining a lower bound for the pair-wise diversity directly is impractical. Therefore, we quantify diversity as the lower-bound over the sum of the pair-wise distances of the feature maps as expressed in equation 3 and ϑ measures the diversity of a set. In machine learning context, diversity has been explored in ensemble learning (Li et al., 2012; Yu et al., 2011; Li et al., 2017), sampling (Derezinski et al., 2019; Bıyık et al., 2019), ranking (Wu et al., 2019; Qin & Zhu, 2013), pruning (Singh et al., 2020; Lee et al., 2020), and neural networks (Xie et al., 2015; Shen et al., 2021). In Xie et al. (2015; 2017a), it was shown theoretically and experimentally that avoiding redundancy over the weights of a neural network using the mutual angles as a diversity measure improves the generalization ability of the model. In this work, we explore a new line of research, where diversity is defined over the feature maps directly, using the (ϑ− τ )-diversity, in the context of energy-based learning. In (Zhao et al., 2017), a similar idea was empirically explored. A “repelling regularizer” was proposed to force non-redundant or orthogonal feature representations. Moreover, the idea of learning while avoiding redundancy has been used recently in the context of semi-supervised learning (Zbontar et al., 2021; Bardes et al., 2021). 
Reducing redundancy by minimizing the cross-correlation of features learned using a Siamese network (Zbontar et al., 2021) was empirically shown to improve the generalization ability, yet a theoretical analysis to prove this has so far been lacking. In this paper, we close the gap between empirical experience and theory. We theoretically study the generalization ability of EBMs in different learning contexts, i.e., regression, classification, implicit regression, and we derive new generalization bounds using the (ϑ−τ )-diversity providing theoretical guarantees that avoiding redundancy indeed improves the generalization ability of the model. The contributions of this paper can be summarized as follows: • We explore a new line of research, where diversity is defined over the features representing the input data and not over the model’s parameters. To this end, we introduce (ϑ − τ )- diversity as a quantification of the diversity of a given feature set. • We extend the theoretical analysis (Zhang, 2013) and study the effect of avoiding redundancy of a feature set on the generalization of EBMs (Lemmas 3 to 7 and Theorem 1 to 5). • We derive bounds for the expectation of the true energy in different learning contexts, i.e., regression, classification, and implicit regression, using different energy functions. Our analysis consistently shows that avoiding redundancy by increasing the diversity of the feature set can boost the performance of an EBM. 2 PAC-LEARNING OF EBMS WITH (ϑ− τ )-DIVERSITY In this section, we derive a qualitative justification for (ϑ−τ )-diversity using probably approximately correct (PAC) learning (Valiant, 1984; Mohri et al., 2018; Li et al., 2019). The PAC-based theory for standard EBMs has been established in (Zhang, 2013). First, we start by defining Rademacher complexity: Definition 2. (Bartlett & Mendelson, 2002; Mohri et al., 2018) For a given dataset with m samples S = {xi, yi}mi=1 from a distribution D and for a model space F : X → R with a single dimensional output, the Empirical Rademacher complexity R̂m(F) of the set F is defined as follows: R̂m(F) = Eσ [ sup f∈F 1 m m∑ i=1 σif(xi) ] , (4) where the Rademacher variables σ = {σ1, · · · , σm} are independent uniform random variables in {−1, 1}. The Rademacher complexity Rm(F) is defined as the expectation of the Empirical Rademacher complexity over training set, i.e., Rm(F) = ES∼Dm [R̂m(F)]. Based on this quantity, (Bartlett & Mendelson, 2002), several learning guarantees for EBMs have been shown (Zhang, 2013). We recall the following two lemmas related to the estimation error and the Rademacher complexity. In Lemma 2, we present the principal PAC-learning bound for energy functions with finite outputs. Lemma 1. (Wolf, 2018) For F ∈ RX , assume that g : R −→ R is a Lg-Lipschitz continuous function and A = {g ◦ f : f ∈ F}. We have Rm(A) ≤ LgRm(F). (5) Lemma 2. (Zhang, 2013) For a well-defined energy function E(h,x,y) over hypothesis class H, input set X and output set Y (LeCun et al., 2006), the following holds for all h in H with a probability of at least 1− δ E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 2Rm(E) +M √ log(2/δ) 2m , (6) where E is the energy function class defined as E = {E(h,x,y)|h ∈ H}, Rm(E) is its Rademacher complexity, and M is the upper bound of E . Lemma 2 provides a generalization bound for EBMs with well-defined (non-negative) and bounded energy. 
The expected energy is bounded using the sum of three terms: The first term is the empirical expectation of energy over the training data, the second term depends on the Rademacher complexity of the energy class, and the third term involves the number of the training data m and the upperbound of the energy function M . This shows that merely minimizing the empirical expectation of energy, i.e., the first term, may not yield a good approximation of the true expectation. In (Zhang & LeCun, 2017), it has been shown that regularization using unlabeled data reduces the second and third terms leading to better generalization. In this work, we express these two terms using the (ϑ− τ )-diversity and show that employing a diversity strategy may also decrease the gap between the true and empirical expectation of the energy. In Section 2.1, we consider the special case of regression and derive two bounds for two energy functions based on L1 and L2 distances. In Section 2.2, we derive a bound for the binary classification task using as energy function E(h,x,y) = −yGW (x) (LeCun et al., 2006). In Section 2.3, we consider the case of implicit regression, which encapsulates different learning problems such as metric learning, generative models, and denoising (LeCun et al., 2006). For this case, we use the L2 distance between the inner models as the energy function. In the rest of the paper, we denote the generalization gap, E(x,y)∼D[E(h,x,y)]− 1m ∑ (x,y)∈S E(h,x,y) by ∆D,SE. All the proofs are presented in the supplementary material. 2.1 REGRESSION TASK Regression can be formulated as an energy-based learning problem (Figure 1 (a)) using the inner model h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x). We assume that the feature set is positive and well-defined over the input domain X , i.e., ∀x ∈ X : ||Φ(x)||2 ≤ A, the hypothesis class can be defined as follows: H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, the output set Y ⊂ R is bounded, i.e., y < B, and the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ . The two valid energy functions which can be used for regression are E2(h,x,y) = 12 ||GW (x)−y|| 2 2 and E1(h,x,y) = ||GW (x)−y||1 (LeCun et al., 2006). We study these two cases separately and we show theoretically that for both energy functions avoiding redundancy improves generalization of the EBM model. ENERGY FUNCTION: E2 In this subsection, we present our theoretical analysis on the effect of diversity on the generalization ability of an EBM defined with the energy function E2(h,x,y) = 12 ||GW (x) − y|| 2 2. We start by the following two Lemmas 3 and 4. Lemma 3. With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2). (7) Lemma 4. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (8) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h, x, y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. Lemmas 3 and 4 bound the supremum of the output of the inner model and the energy function as a function of ϑ, respectively. As it can been seen, both terms are decreasing with respect to diversity. Next, we bound the Rademacher complexity of the energy class, i.e., Rm(E). Lemma 5. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F). (9) Lemma 5 expresses the bound of the Rademacher complexity of the energy class using the diversity constant and the Rademacher complexity of the features. 
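As a quick numerical sanity check of Lemmas 3 and 4, the snippet below draws positive random features, takes ϑ² as the smallest pairwise-diversity value observed over the sample (so the bound holds for every sample, i.e., τ = 1), and verifies that sup|h(x)| stays below ‖w‖∞ √(DA² − ϑ²). The data and weights are synthetic and only meant to illustrate the inequality.

```python
import numpy as np

rng = np.random.default_rng(2)
D, n = 8, 5000
Phi = rng.uniform(0.0, 1.0, size=(n, D))                   # positive features, as assumed in Section 2.1
w = rng.normal(size=D)

A = np.max(np.linalg.norm(Phi, axis=1))                    # sup_x ||Phi(x)||_2 over the sample
diversity = 0.5 * np.array([np.sum((f[:, None] - f[None, :]) ** 2) for f in Phi])
theta_sq = diversity.min()                                 # bound holds for every sample here (tau = 1)

lhs = np.max(np.abs(Phi @ w))                              # sup_x |h(x)| over the sample
rhs = np.max(np.abs(w)) * np.sqrt(D * A**2 - theta_sq)     # Lemma 3 bound
print(f"sup|h(x)| = {lhs:.3f} <= bound = {rhs:.3f}: {lhs <= rhs}")
```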
Having expressed the different terms of Lemma 2 using diversity, we now present our main result for an energy-basel model trained defined using E2. The main result is presented in Theorem 1. Theorem 1. For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (10) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Theorem 1 express the special case of Lemma 2 using the (ϑ − τ )-diversity of the feature set {ϕ1(·), · · · , ϕD(·)}. As it can been seen, the bound of the generalization error is inversely proportional to ϑ2. This theoretically shows that reducing redundancy, i.e., increasing ϑ, reduces the gap between the true and the empirical energies and improves the generalization performance of the EBMs. ENERGY FUNCTION: E1 In this subsection, we consider the second case of regression using the energy function E1(h,x,y) = ||GW (x) − y||1. Similar to the previous case, we start by deriving bounds for the energy function and the Rademacher complexity of the class using diversity in Lemmas 6 and 7. Lemma 6. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (11) Lemma 7. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F). (12) Next, we derive the main result of the generalization of the EBMs defined using the energy function E1. The main finding is presented in Theorem 2. Theorem 2. For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (13) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Similar to Theorem 1, in Theorem 2, we consistently find that the bound of the true expectation of the energy is a decreasing function with respect to ϑ. This proves that for the regression task reducing redundancy can improve the generalization performance of the energy-based model. 2.2 BINARY CLASSIFIER Here, we consider the problem of binary classification, as illustrated in Figure 1 (b). Using the same assumption as in regression for the inner model, i.e., h(x) = GW (x) = ∑D i=1 wiϕi(x) = wTΦ(x), energy function of E(h,x,y) = −yGW (x) (LeCun et al., 2006), and the (ϑ−τ )-diversity of the feature set, we express Lemma 2 for this specific configuration in Theorem 3. Theorem 3. For the energy function E(h,x,y) = −yGW (x), over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: ∆D,SE ≤ 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m . (14) Similar to the regression task, we note that the upper-bound of the true expectation is a decreasing function with respect to the diversity term. 
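The monotone dependence on ϑ claimed for Theorems 1–3 is easy to see by evaluating the right-hand side of Theorem 1 directly; in the sketch below all constants (D, A, B, ‖w‖∞, the Rademacher complexity of F, and m) are made-up placeholders chosen only to show that the bound decreases as ϑ grows.

```python
import numpy as np

def theorem1_bound(theta, D, A, B, w_inf, rad_F, m, delta=0.05):
    """Right-hand side of Theorem 1 (E2 energy) as a function of the diversity constant theta."""
    root = np.sqrt(D * A**2 - theta**2)
    return (4 * D * w_inf * (w_inf * root + B) * rad_F
            + 0.5 * (w_inf * root + B) ** 2 * np.sqrt(np.log(2 / delta) / (2 * m)))

for theta in [0.0, 1.0, 2.0, 3.0]:
    bound = theorem1_bound(theta, D=16, A=1.0, B=1.0, w_inf=0.5, rad_F=0.05, m=10_000)
    print(f"theta = {theta}: bound = {bound:.4f}")
```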
Thus, a less redundant feature set, i.e., higher ϑ, has a lower upper-bound for the true energy. 2.3 IMPLICIT REGRESSION In this section, we consider the problem of implicit regression. This is a general formulation of a different set of problems such as metric learning, where the goal is to learn a distance function between two domains, image denoising, object detection as illustrated in (LeCun et al., 2006), or semi-supervised learning (Zbontar et al., 2021). This form of EBM (Figure 1 (c)) has two inner models, G1W (·) and G2W (·), which can be equal or different according to the problem at hand. Here, we consider the general case, where the two models correspond to two different combinations of different features, i.e., G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) and G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y). Thus, we have a different (ϑ− τ )-diversity term for each set. The final result is presented in Theorem 4. Theorem 4. For the energy function E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {h(1)(x) = G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), h(2)(x) = G (2) W (y) = ∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x : ||Φ(1)(x)||2 ≤ A(1), ∀y : ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H: ∆D,SE ≤ 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) +(J1 + J2) √ log(2/δ) 2m , (15) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . The upper-bound of the energy model depends on the diversity variable of both feature sets. Moreover, we note that the bound for the implicit regression decreases proportionally to ϑ2, as opposed to the classification case for example, where the bound is proportional to ϑ. Thus, we can conclude that reducing redundancy improves the generalization of EBM in the implicit regression context. 2.4 GENERAL DISCUSSION We note that the theory developed in our paper (Theorems 1 to 4) is agnostic to the loss function (LeCun et al., 2006) or the optimization strategy used (Kumar et al., 2019; Song & Ermon, 2019; Yu et al., 2020; Xu et al., 2022). We show that reducing the redundancy of the features consistently decreases the upper-bound of the true expectation of the energy and, thus, can boost the generalization performance of the energy-based model. It also should be noted that A, i.e., the upper bound of the features and ϑ are connected. But our findings can be interpreted as follows: given two models with the same value of A (maximum L2norm of the features), the model with higher diversity ϑ has a lower generalization bound and is likely to generalize better. We note that our analysis is independent of how the features are obtained, e.g., handcrafted or optimized. In fact, in the recent state-of-the-art EBMs (Khalifa et al., 2021; Bakhtin et al., 2021; Yu et al., 2020), the features are typically parameterized using a deep learning model and optimized during training. Our contribution is twofold. First, we provide theoretical guarantees that reducing redundancy in the feature space can indeed improve the generalization of the EBM. 
This can pave the way toward providing theoretical guarantees for works on self-supervised learning that use redundancy reduction (Zbontar et al., 2021; Bardes et al., 2021; Zhao et al., 2017). Second, our theory can be used to motivate novel redundancy-reduction strategies, for example in the form of regularization, that avoid learning redundant features. Such strategies can improve the performance of the model and its generalization.

3 SIMPLE REGULARIZATION ALGORITHM

In general, theoretical generalization bounds can be too loose to have direct practical implications (Zhang et al., 2017; Neyshabur et al., 2017). However, they typically suggest a regularizer that promotes some desired property of the hypothesis class (Xie et al., 2015; Li et al., 2019; Kawaguchi et al., 2017). Accordingly, inspired by the theoretical analysis in Section 2, we propose a straightforward strategy to avoid learning redundant features by regularizing the model during training with a term inversely proportional to the (ϑ−τ)-diversity of the features. Given an EBM model with a learnable feature set {ϕ1(·), · · · , ϕD(·)} and a training set S, we propose to augment the original training loss L as follows:

Laug = L − β Σ_{x∈S} Σ_{i≠j}^{D} (ϕi(x) − ϕj(x))²,  (16)

where β is a hyper-parameter controlling the contribution of the second term to the total loss. The additional term penalizes similarity between distinct features, ensuring that the model learns a diverse and non-redundant mapping of the data. As a result, this can improve the overall performance of our model.

3.1 TOY EXAMPLE

We first test our regularization strategy on a toy dataset. We use an EBM model to learn the distribution of a 2-D Swiss roll, illustrated in Figure 2 (a). For the EBM, we use a fully connected neural network composed of two intermediate layers with 1000 units and ReLU activations. We train the models using Stochastic Gradient Langevin Dynamics (SGLD) sampling and the contrastive divergence-like algorithm proposed in (Du & Mordatch, 2019). The total objective of the standard EBM is expressed as follows:

L = (1/N) Σ_n [ α (E(x_n⁺)² + E(x_n⁻)²) + E(x_n⁺) − E(x_n⁻) ],  (17)

where x_n⁺ denotes positive samples and x_n⁻ negative samples. We augment this loss using equation 16, i.e., the features are the latent representations obtained at the last intermediate layer. The distributions learned by the standard and the proposed approach are visualized using kernel density estimation (Terrell & Scott, 1992) in Figure 2. As can be seen, avoiding redundancy boosts the performance of the EBM model. Indeed, comparing the two learned distributions, the EBM trained with our approach leads to a better approximation of the ground-truth distribution and is able to better capture the tail of the distribution, as opposed to the original EBM.

3.2 IMAGE GENERATION EXAMPLE

Recently, there has been high interest in using EBMs to solve image/text generation tasks (Du & Mordatch, 2019; Du et al., 2021; Khalifa et al., 2021; Deng et al., 2020). In this subsection, we validate the proposed regularizer on the simple example of MNIST digit image generation, as in (Du & Mordatch, 2019). For the EBM model, we use a simple CNN composed of four convolutional layers followed by a linear layer. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics Markov chain Monte Carlo (MCMC) and a sampling buffer to accelerate training. The full details are available in the supplementary material.
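As a minimal sketch of how the penalty in equation 16 can sit on top of an existing training loop, the snippet below sums the squared pairwise differences of the feature maps over a batch. The PyTorch tensor layout and the default β are illustrative; the MNIST experiment in Section 3.2 reports β = 1e−13 as its best value, but the appropriate scale depends on the magnitude of the features.

```python
import torch

def diversity_penalty(features):
    """Sum over the batch of sum_{i != j} (phi_i(x) - phi_j(x))^2, cf. equation 16."""
    # features: (batch, D) latent representation at the last intermediate layer
    diff = features.unsqueeze(2) - features.unsqueeze(1)   # (batch, D, D) pairwise differences
    return (diff ** 2).sum()                               # diagonal terms are zero

def augmented_loss(base_loss, features, beta=1e-13):
    """L_aug = L - beta * diversity penalty: larger feature diversity lowers the loss."""
    return base_loss - beta * diversity_penalty(features)
```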
In this example, the features, i.e., the latent representation obtained at the last intermediate layer, are learned in an end-to-end way. We evaluate the performance of our approach by augmenting the contrastive divergence loss using equation 16 to penalize the feature redundancy. We quantitatively evaluate image quality of EBMs with ‘Fréchet Inception Distance’ (FID) score (Heusel et al., 2017) and the negative log-likelihood (NLL) loss in Table 1 for different values of β. We note that we obtain consistently better FID and NLL scores by penalizing the similarity of the learned features. The best performance is achieved by β = 1e−13, which yields more than 10%, in terms of FID, improvement compared to the original EBM model. To gain insights into the visual performance of our approach, we plot a few intermediate samples of the MCMC sampling (Langevin Dynamics). The results obtained by the EBM with β = 1e−13 are presented in Figure 3. Initiating from random noise, MCMC obtains reasonable figures after only 64 steps. The digits get clearer and more realistic over the iterations. More results are presented in the supplementary material. 3.3 CONTINUAL LEARNING EXAMPLE In this subsection, we validate the proposed regularizer on the Continual Learning (CL) problem. CL tackles the problem of catastrophic forgetting in deep learning models (Parisi et al., 2019; Li & Hoiem, 2017; Shibata et al., 2021). Its main goal is to solve several tasks sequentially without forgetting knowledge learned from the past. So, a continual learner is expected to learn a new task, crucially, without forgetting previous tasks. Recently, an EBM-based CL approach was proposed in (Li et al., 2020) and led to superior results compared to standard approaches. We use the same models and the same experimental protocol used in (Li et al., 2020). However, here we focus only on the class-incremental learning task using CIFAR10 and CIFAR100. We evaluate the performance of our proposed regularizer using both the boundary-aware and boundary-agnostic settings. As defined in (Li et al., 2020), the boundary-aware refers to the situation where the sequence of the tasks has explicit separation between them which is known to the model. The boundary agnostic case refers to the situation where the data distributions gradually changes without a notion of task boundaries. Similar to Section 3.2, we consider as ’features’ the representation obtained by the last intermediate layer. The proposed regularizer is applied on top of this representation. In Table 2, we report the performance of the EBM trained using the original loss and using the loss augmented with our additional term for different values of β. As shown in Table 2, penalizing feature similarity and promoting the diversity of the feature set boosts the performance of the EBM model and consistently leads to a superior accuracy for both datasets. In Figure 4, we display the accumulated classification accuracy, averaged over tasks, on the test set. Along the five tasks, our approach maintains higher classification accuracy than the standard EBM for both the boundary-aware and boundary-agnostic settings. 4 CONCLUSION Energy-based learning is a powerful learning paradigm that encapsulates various discriminative and generative systems. An EBM is typically formed of one (or many) inner models which learn a combination of different features to generate an energy mapping for each input configuration. 
In this paper, we introduced a feature diversity concept, i.e., (ϑ − τ )-diversity, and we used it to extend the PAC theory of EBMs. We derived different generalization bounds for various learning contexts, i.e., regression, classification, and implicit regression, with different energy functions and we consistently found that reducing the redundancy of the feature set can improve the generalization error of energy-based approaches. We also note that our theory is independent of the loss function or the training strategy used to optimize the parameters of the EBM. This provides theoretical guarantees on learning via feature redundancy reduction. Our preliminary experimental results confirm that this is indeed a promising research direction and can motivate developing other approaches to promoting the diversity of the feature set. Future direction include more extensive experimental evaluation of different feature redundancy reduction approaches. A PROOF OF LEMMA 3 Lemma With a probability of at least τ , we have sup x,W |h(x)| ≤ ||w||∞ √ (DA2 − ϑ2), (18) where A = supx ||ϕ(x)||2. Proof. h2(x) = ( D∑ i=1 wiϕi(x) )2 ≤ ( D∑ i=1 ||w||∞ϕi(x) )2 = ||w||2∞ ( D∑ i=1 ϕi(x) )2 = ||w||2∞ (∑ i,j ϕi(x)ϕj(x) ) = ||w||2∞ ∑ i ϕi(x) 2 + ∑ i ̸=j ϕi(x)ϕj(x) (19) We have ||Φ(x)||2 ≤ A. For the first term in equation 19, we have ∑ m ϕm(x) 2 ≤ A2. By using the identity ϕm(x)ϕn(x) = 12 ( ϕm(x) 2 + ϕn(x) 2 − (ϕm(x)− ϕn(x))2 ) , the second term can be rewritten as∑ m ̸=n ϕm(x)ϕn(x) = 1 2 ∑ m̸=n ( ϕm(x) 2 + ϕn(x) 2 − ( ϕm(x)− ϕn(x) )2) . (20) In addition, we have with a probability τ , 12 ∑ m ̸=n(ϕm(x)− ϕn(x))2 ≥ ϑ2. Thus, we have with a probability at least τ :∑ m̸=n ϕm(x)ϕn(x) ≤ 1 2 (2(D − 1)A2 − 2ϑ2) = (D − 1)A2 − ϑ2. (21) By putting everything back to equation 19, we have with a probability τ , G2W (x) ≤ ||w||2∞ ( A2 + (D − 1)A2 − ϑ2 ) = ||w||2∞(DA2 − ϑ2). (22) Thus, with a probability τ , sup x,W |h(x)| ≤ √ sup x,W G2W (x) ≤ ||w||∞ √ DA2 − ϑ2. (23) B PROOF OF LEMMA 4 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ 1 2 (||w||∞ √ (DA2 − ϑ2) +B)2. (24) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). Thus supx,y,h|E(h,x,y)| ≤ 12 (||w||∞ √ DA2 − ϑ2 +B)2. C PROOF OF LEMMA 5 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞(||w||∞ √ (DA2 − ϑ2) +B)Rm(F) (25) Proof. Using the decomposition property of the Rademacher complexity (if ϕ is a L-Lipschitz function, then Rm(ϕ(A)) ≤ LRm(A)) and given that 12 ||. − y|| 2 is K-Lipschitz with a constant K = supx,y,h||h(x) − y|| ≤ (||w||∞ √ DA2 − ϑ2 + B), we have Rm(E) ≤ KRm(H) = (||w||∞ √ DA2 − ϑ2 + B)Rm(H), where H = {GW (x) = ∑D i=1 wiϕi(x) }. We also know that ||w||1 ≤ D||w||∞. Next, similar to the proof of Theorem 2.10 in (Wolf, 2018), we note that ∑D i=1 wiϕi(x) ∈ (D||w||∞)conv(F + −(F)) := G, where conv denotes the convex hull and F is the set of ϕ functions. Thus, Rm(H) ≤ Rm(G) = D||w||∞Rm(conv(F + (−F)) = D||w||∞Rm(F + (−F)) = 2D||w||∞Rm(F). 
D PROOF OF THEOREM 1 Theorem For the energy function E(h,x,y) = 12 ||GW (x) − y|| 2 2, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x : ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞(||w||∞ √ DA2 − ϑ2 +B)Rm(F) + 1 2 (||w||∞ √ DA2 − ϑ2 +B)2 √ log(2/δ) 2m , (26) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 4 and Lemma 5. E PROOF OF LEMMA 6 Lemma With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ (||w||∞ √ DA2 − ϑ2 +B). (27) Proof. We have supx,y,h |h(x)− y| ≤ supx,y,h(|h(x)|+ |y|) = (||w||∞ √ DA2 − ϑ2 +B). F PROOF OF LEMMA 7 Lemma With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (28) Proof. |.| is 1-Lipschitz, Thus Rm(E) ≤ Rm(H). G PROOF OF THEOREM 2 Theorem For the energy function E(h,x,y) = ||GW (x) − y||1, over the input set X ∈ RN , hypothesis class H = {h(x) = GW (x) = ∑D i=1 wiϕi(x) = w TΦ(x) | Φ ∈ F , ∀x ||Φ(x)||2 ≤ A}, and output set Y ⊂ R, if the feature set {ϕ1(·), · · · , ϕD(·)} is ϑ-diverse with a probability τ , then with a probability of at least (1− δ)τ , the following holds for all h in H: E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + (||w||∞ √ DA2 − ϑ2 +B) √ log(2/δ) 2m , (29) where B is the upper-bound of Y , i.e., y ≤ B, ∀y ∈ Y . Proof. We replace the variables in Lemma 1 using Lemma 6 and Lemma 7. H PROOF OF THEOREM 3 Lemma 8. With a probability of at least τ , we have sup x,y,h |E(h,x,y)| ≤ ||w||∞ √ DA2 − ϑ2. (30) Proof. We have sup−yGW (x) ≤ sup |GW (x)| ≤ ||w||∞ √ DA2 − ϑ2. Lemma 9. With a probability of at least τ , we have Rm(E) ≤ 2D||w||∞Rm(F) (31) Proof. We note that for y ∈ {−1, 1}, σ and −yσ follow the same distribution. Thus, we have Rm(E) = Rm(H). Next, we note that Rm(H) ≤ 2D||w||∞Rm(F). Theorem 3 For a well-defined energy function E(h,x,y) (LeCun et al., 2006), over hypothesis class H, input set X and output set Y , if it has upper-bound M, then with a probability of at least 1− δ, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 4D||w||∞Rm(F) + ||w||∞ √ DA2 − ϑ2 √ log(2/δ) 2m , (32) Proof. We replace the variables in Lemma 1 using Lemma 8 and Lemma 9. I PROOF OF THEOREM 4 Lemma 10. With a probability of at least τ1τ2, we have sup x,y,h |E(h,x,y)| ≤ ( J1 + J2 ) (33) Proof. We have ||G(1)W (x) − G (2) W (y)||22 ≤ 2(||G (1) W (x)||22 + ||G (2) W (y)||22). Similar to Theorem 1, we have sup ||G(1)W (x)||22 ≤ ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) = J1 and sup ||G(2)W (y)||22 ≤ ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) = J2. We also have E(h,x,y) = 12 ||G (1) W (x)−G (2) W (y)||22. Lemma 11. With a probability of at least τ1τ2, we have Rm(E) ≤ 4( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) (34) Proof. Let f be the square function, i.e., f(x) = 12x 2 and E0 = {G(1)W (x) − G (2) W (y) | x ∈ X , y ∈ Y}. We have E = f(E0 + (−E0)). f is Lipschitz over the input space, with a constant L bounded by supx,W G (1) W (x) + supy,W G (2) W (y) ≤ √ J1 + √ J2. Thus, we have Rm(E) ≤ ( √ J1 + √ J2)Rm(E0 + (−E0)) ≤ 2( √ J1 + √ J2)Rm(E0). Next, we note that Rm(E0) = Rm(H1 + (−H2)) = Rm(H1) + Rm(H2). Using same as technique as in Lemma 4, we have Rm(H1) ≤ 2D(1)||w(1)||∞Rm(F1) and Rm(H2) ≤ 2D(2)||w(2)||∞Rm(F2). 
Theorem 4 For the energy function E(h,x,y) = 12 ||G (1) W (x) − G (2) W (y)||22, over the input set X ∈ RN , hypothesis class H = {G(1)W (x) = ∑D(1) i=1 w (1) i ϕ (1) i (x) = w (1)TΦ(1)(x), G (2) W (y) =∑D(2) i=1 w (2) i ϕ (2) i (y) = w (2)TΦ(2)(y) | Φ(1) ∈ F1, Φ(2) ∈ F2, ∀x ||Φ(1)(x)||2 ≤ A(1), ∀y ||Φ(2)(y)||2 ≤ A(2)}, and output set Y ⊂ RN , if the feature set {ϕ(1)1 (·), · · · , ϕ (1) D(1) (·)} is ϑ(1)-diverse with a probability τ1 and the feature set {ϕ(2)1 (·), · · · , ϕ (2) D(2) (·)} is ϑ(2)-diverse with a probability τ2, then with a probability of at least (1− δ)τ1τ2, the following holds for all h in H E(x,y)∼D[E(h,x,y)] ≤ 1 m ∑ (x,y)∈S E(h,x,y) + 8( √ J1 + √ J2) ( D(1)||w(1)||∞Rm(F1) +D(2)||w(2)||∞Rm(F2) ) + ( J1 + J2 )√ log(2/δ) 2m , (35) where J1 = ||w(1)||2∞ ( D(1)A(1) 2 − ϑ(1)2 ) and J2 = ||w(2)||2∞ ( D(2)A(2) 2 − ϑ(2)2 ) . Proof. We replace the variables in Lemma 1 using Lemma 10 and Lemma 11. J IMAGE GENERATION EXAMPLE SETTINGS AND ADDITIONAL RESULTS For the EBM model, we used a simple CNN model composed of four convolutional layers followed by a linear layer. The full CNN model is presented in Table 3. The training protocol is the same as in (UvA; Du & Mordatch, 2019), i.e., using Langevin dynamics MCMC and a sampling buffer to accelerate training. All models were trained for 60 epochs using Adam optimizer with learning rate lr = 1e − 4 and a batch size of 128. In addition to the results presented in the paper, Figure 5 presents additional qualitative results. For the first two examples (top ones), the model is able to converge to a realistic image within reasonable amount of iterations. For the last two examples (in the bottom), we present failure cases of our approach. For these two tests, the generated image still improves over iterations. However, the model failed to converge to a clear realistic MNIST image after 256 steps.
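The Langevin-dynamics MCMC used for negative-sample generation in the image-generation experiments (Section 3.2 and Appendix J) can be sketched as follows. The step size, noise scale, number of steps, and clamping of pixels to [0, 1] are assumptions made for illustration and are not taken from the paper's exact configuration.

```python
import torch

def langevin_negative_samples(energy_fn, x_init, n_steps=64, step_size=10.0, noise_std=0.005):
    """Draw negative samples by running Langevin dynamics downhill on the energy."""
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_fn(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
        x = x.detach().requires_grad_(True)                # restart the graph for the next step
    return x.detach()
```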
1. What is the main contribution of the paper regarding energy-based models' generalization ability? 2. How does the paper enhance feature diversity in inner models, and what are the theoretical upper bounds provided? 3. Are there any concerns or suggestions regarding the tightness of the upper bounds or the sufficiency of the experiments? 4. How does the paper define diversity, and are there any alternative diversity tools that could be used? 5. Are there any questions about the paper's clarity, quality, novelty, and reproducibility?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies the influence of feature diversity on the generalization of energy-based models (EBMs), i.e., the gap between the estimated energy function and the true energy distribution. The authors propose to improve the performance of EBMs by enhancing the feature diversity of the inner model. The paper provides a PAC-theoretic upper bound (correlated with the feature diversity) on the gap between the true and empirical expectation of the energy for three kinds of tasks. To verify the effectiveness of the upper bounds, it designs a simple regularization algorithm, which shows a performance gain compared with standard EBMs.

Strengths And Weaknesses
Strengths
- This paper studies the generalization ability of EBMs through the lens of feature diversity, which is important due to the wide applications of EBMs.
- The proposed theory and method are clearly motivated and described. The theoretical proof is based on Rademacher complexity and is easy to follow. It successfully deduces the upper bounds for three different forms of EBMs under a unifying framework, and the experimental results are promising.

Weaknesses
- The deduced upper bounds are typically loose and might not be able to guide practice. Hence, it would be better to have some theoretical or intuitive explanation of how tight those upper bounds are.
- The experiments in this paper are somewhat insufficient. The paper only includes experimental results on image generation and continual learning, which are not consistent with its theoretical analysis of regression, classification, and implicit regression tasks. Therefore, experimental results on these three tasks should be reported to verify the theory. Besides, the authors should conduct experiments on more realistic and challenging datasets, such as ImageNet, CelebA, etc.
- Comparison with prior work is lacking. For example, EBGAN [Zhao et al. 2017] uses a similar regularization based on cosine distance for diverse generation.
- Validity of the diversity definition: I am wondering whether Def. 1 is a good measure of diversity, especially when the diversity should be used for a set of samples S = {x_1, …, x_n}, as indicated in Eq. 16. In Eq. 16, you simply sum the contributions of all samples. However, some holistic information might be lost in this way. There are well-established diversity tools in the literature, for example DPPs; see a nice survey and the references therein [Kulesza & Taskar 2012]. To apply a DPP here, for one feature i it is sensible to take the responses over all n samples as its representation vector [ϕ_i(x_1), …, ϕ_i(x_n)]^T; one can then construct the DPP diversity metric by choosing some kernel measuring similarity. This might give rise to a more reasonable diversity metric, since the DPP has a clear interpretation as the squared volume spanned by the representation vectors. Can you comment on the above possibilities?
- There seems to be some misuse of symbols in the paper. For example, it uses the same symbol R_m to represent both the empirical Rademacher complexity (in Definition 2) and the standard Rademacher complexity (in Lemma 2). The writing needs to be further checked and polished.

References
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. ICLR, 2017.
Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2–3):123–286, 2012.

Clarity, Quality, Novelty And Reproducibility
See the comments above.
ICLR
Title Dependency Structure Discovery from Interventions Abstract Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is applicable even in the challenging and realistic case that the identity of the intervened upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository. 1 INTRODUCTION Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal structure learning. Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks (a Markov equivalence class), but only one is the most robust to distributional shifts: The one with the correct graph. A more powerful tool than BNs is thus needed to model causal relationships. Structural Causal Models (SCMs) are that tool. An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them (Peters et al., 2017, §6.2). Each assignment is a function of only the direct causes of a variable, plus an independent noise source. An SCM entails precisely one (observational) data distribution. Interventions on an SCM’s assignments, such as setting a random variable to a fixed value (a hard intervention), entail new interventional data distributions (Peters et al., 2017, §6.3). SCMs can be used to answer higher-order questions of cause-and-effect, up the ladder of causation (Pearl & Mackenzie, 2018). Causal structure learning using SCMs has been attempted in several disciplines including biology (Sachs et al., 2005; Hill et al., 2016), weather forecasting (Abramson et al., 1996) and medicine (Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010). Causal structure is most frequently learned from data drawn from observational distributions. Structure learning methods generally cannot do more than identify the causal graph up to a Markov equivalence class (Spirtes et al., 2000). In order to fully identify the true causal graph, a method must either make restrictive assumptions about the underlying data-generating process, such as linear but non-Gaussian data (Shimizu et al., 2006), or must access enough data from outside the observational distribution (i.e., from interventions). 
Under certain assumptions about the number, diversity, and nature of the interventions, the true underlying causal graph is always identifiable, given that the method knows the intervention performed (Heckerman et al., 1995). In much of the prior work on causal model induction it is assumed that there is an experimenter and this experimenter performs interventions. However, in the real world, interventions can also be performed by other agents, which could lead to unknown interventions (interventions with unknown target variables). A few works have attempted to learn structures from unknown-intervention data (Eaton & Murphy, 2007a; Squires et al., 2020; Huang et al., 2020). A notable such work, (Mooij et al., 2016), has been extended in (Kocaoglu et al., 2019; Jaber et al., 2020). Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case. Another common setting is when the graph structure is partially provided, but must be completed. An example is protein structure learning in biology, where we may have definitive knowledge of some causal edges in the protein-protein interactome, but the remaining causal edges must be discovered. We will call this setting “partial graph completion”. This is an easier task compared to learning the entire graph, since it limits the number of edges that have to be learned. Recently, a flurry of work on structure learning using continuous optimization methods has appeared (Zheng et al., 2018; Yu et al., 2019). These methods operate on observational data and are competitive with other methods. Because of the theoretical limitations on identification from purely observational data cited above, it would be interesting to extend these methods to interventional data. However, it is not straightforward to apply continuous optimization methods to structure learning from interventional data. Our key contributions are to answer the following questions experimentally: 1. Can the proposed model recover true causal structure? Yes, see Figure §4. 2. How does the proposed model compare against state of the art causal methods on real-world datasets? Favourably; see §5.4 and Table §1. 3. Does a proposed model generalize well to unseen interventions? Yes, see §5.5. 4. How does the proposed model perform on partial graph recovery? It scales to∼ 50 variables while the other baselines can’t. see §5.7. 2 PRELIMINARIES Causal modeling. A Structural Causal Model (SCM) (Peters et al., 2017) over a finite number M of random variables Xi is a set of structural assignments Xi := fi(Xpa(i,C), Ni) , ∀i ∈ {0, . . . ,M − 1} (1) Identifiability. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph structure, interventional data is needed (Eberhardt et al., 2012). Interventions. There are several types of common interventions which may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. 
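To make the preliminaries concrete, here is a minimal sketch of the data-generating side: ancestral sampling from a categorical ground-truth SCM represented by conditional probability tables, plus a soft intervention that re-randomizes one variable's conditional distribution. The CPT dictionaries and the Dirichlet re-initialization are illustrative assumptions; the paper's ground-truth models are either BnLearn CPTs or randomly initialized MLPs.

```python
import numpy as np

rng = np.random.default_rng(0)

def ancestral_sample(parents, cpds, n_categories):
    """Sample every variable in topological order from its conditional distribution."""
    sample = {}
    for i in sorted(parents):                              # assumes indices are topologically ordered
        key = tuple(sample[p] for p in parents[i])
        sample[i] = int(rng.choice(n_categories, p=cpds[i][key]))
    return sample

def soft_intervention(cpds, target, n_categories):
    """Replace the target variable's conditional distribution with a fresh random one."""
    new_cpd = {key: rng.dirichlet(np.ones(n_categories)) for key in cpds[target]}
    return {**cpds, target: new_cpd}

# Toy chain X0 -> X1 over binary variables.
parents = {0: [], 1: [0]}
cpds = {0: {(): np.array([0.7, 0.3])},
        1: {(0,): np.array([0.9, 0.1]), (1,): np.array([0.2, 0.8])}}
print(ancestral_sample(parents, cpds, 2))                  # observational sample
cpds_int = soft_intervention(cpds, target=1, n_categories=2)
print(ancestral_sample(parents, cpds_int, 2))              # sample under the (soft) intervention
```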
Here we make use of soft intervention because they include hard intervention as a limiting case and hence are more general. Structure discovery using continuous optimization. Structure discovery is a super-exponential search problem that searches though all possible directed acyclic graphs (DAGs). Previous continuousoptimization structure learning works (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates “DAG-ness” as an additional penalty to be optimized. If there are M such variables, the strategy of considering all the possible structural graphs as separate hypotheses is not feasible because it would require maintaining O(2M 2 ) models of the data.2 3 RELATED WORK The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem (Pearl, 1995; 2009; Spirtes et al., 2000). Different approaches have been studied: score-based, constraint-based, asymmetry-based and continuous optimization methods. Score-based methods search through the space of all possible directed acyclic graphs (DAGs) representing the causal structure based on some form of scoring function for network structures (Heckerman et al., 1995; Chickering, 2002; Tsamardinos et al., 2006; Hauser & Bühlmann, 2012; Goudet et al., 2017; Cooper & Yoo, 1999; Zhu & Chen, 2019). Constraint-based methods (Spirtes et al., 2000; Sun et al., 2007; Zhang et al., 2012; Monti et al., 2019; Zhu & Chen, 2019) infer the DAG by analyzing conditional independences in the data. Eaton & Murphy (2007c) use dynamic programming techniques to accelerate Markov Chain Monte Carlo (MCMC) sampling in a Bayesian approach to structure learning for discrete variable DAGs. Peters et al. (2016); Ghassami et al. (2017); Rojas-Carulla et al. (2018) exploit invariance across environments to infer causal structure, which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs. Recently, (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) framed the structure search as a continuous optimization problem, however, the methods only uses observational data and are non-trivial to extend to interventional data. In our paper, we present a method that uses continuous optimization methods that works on both observational and interventional data. For interventional data, it is often assumed that the models have access to full intervention information, which is rare in the real world. Rothenhäusler et al. (2015) have investigated the case of additive shift interventions, while Eaton & Murphy (2007b) have examined the situation where the targets of experimental interventions are imperfect or uncertain. This is different from our setting where the intervention is unknown to start with and is assumed to arise from other agents and the environment. Learning based methods have been proposed (Guyon, 2013; 2014; Lopez-Paz et al., 2015) and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data (Kalainathan et al., 2018; Goudet et al., 2018). Neural network methods equipped with learned masks, such as (Ivanov et al., 2018; Li et al., 2019; Yoon et al., 2018; Douglas et al., 2017), exist in the literature, but only a few (Kalainathan et al., 2018) have been adapted to causal inference. 
This last work is, however, tailored for causal inference on continuous variables and from observations only. Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network (GAN) Goodfellow et al. (2014) framework. 4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD Scope of Applicability and Objective. The proposed method, like any structure learning algorithm, assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables, which can be queried for samples. The SCM supports applying and retracting known or unknown interventions. The method can support infinite- or finite-data as well as infiniteor finite-intervention regimes. The objective is, then, to learn the SCM’s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM. 4.1 PROBLEM SETTING AND ASSUMPTIONS In this paper, we restrict the problem setting to specific, but still broad classes of SCMs and interventions. In particular, we assume that: Data is discrete-valued. The SCM’s random variables are all categorical. Causal sufficiency. For every data sample, the value of all variables are available; There are no latent confounders. Interventions are localized. They affect only a single variable (but which one may not be known). Interventions are soft. An intervention does not necessarily pin its target random variable to a fixed value (though it may, as a special case). It changes the relationship of a variable with its parents. Interventions do not stack. Before a new intervention is made, the previous one is fully retracted. This stops the SCM from wandering away from its initial, observational configuration after a long series of interventions. No control over interventions. The structure learning algorithm has control neither of the target, nor the nature of the next intervention on the SCM. For a detailed description of the interventions, refer to §A.2. 4.2 VARIATIONS AND PRIOR KNOWLEDGE In the problem setting above, the ground-truth SCM is completely opaque to us. However, we consider two interesting relaxations of this formulation: Complete or partial graph recovery. We may already know the existence of certain cause-effect edges and non-edges within the ground-truth SCM. If such prior information is available, it turns a complete graph recovery problem into one of partial graph recovery. Larger SCMs can be tackled if only parts of the graph need to be recovered. Known or unknown interventions: The interventions can either be known or unknown to the learned model. We demonstrate that the proposed method can naturally incorporate this prior information to improve its performance. 4.3 METHOD OVERVIEW The proposed method is a score-based, iterative, continuousoptimization method consisting of three phases that flow into one other (See Figure 2). During the three-phase procedure, a structural representation of a DAG and a functional representation of a set of independent causal mechanisms are trained jointly until convergence. Because the structural and functional parameters are not independent and do influence each other, we train them in alternating phases, a form of block coordinate descent optimization. 4.3.1 PARAMETRIZATION We distinguish two sets of parameters: The structural parameters γ and the functional parameters θ. 
Given a graph of M variables, we parametrize the structure γ as a matrix RM×M such that σ(γij) is our belief in random variable Xj being a direct cause of Xi, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. The matrix σ(γ) is thus a soft adjacency matrix. The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set Xpa(i,C), with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG. 4.3.2 PHASE 1: GRAPH FITTING ON OBSERVATIONAL DATA During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure. We draw graph configurations Cij ∼ Ber(σ(γij)) and batches of observational data from the unintervened ground-truth SCM, then maximize the log-likelihood of the batch under that configuration using SGD. The use of graph configurations sampling from Bernoulli distributions is analogous to dropout on the inputs of the functional models (in our implementation, MLPs), giving us an ensemble of neural networks that can model the observational data. 4.3.3 PHASE 2: GRAPH SCORING ON INTERVENTIONAL DATA During Phase 2, a number of graph configurations are sampled from the current edge beliefs parametrized by γ, and scored on data samples drawn from the intervention SCM. Intervention applied: At the beginning of Phase 2, an intervention is applied to the ground-truth SCM. This intervention is not under the control of the method. In our implementation, and unbeknownst to the model, the target variable is chosen uniformly randomly from all M variables throughout the optimization process. Intervention predicted: If the target of the intervention is not known, it is predicted using a simple heuristic. A small number of interventional data samples are drawn from the SCM and more graphs are sampled from our current edge beliefs. The average log-likelihood of each individual variable Xi across the samples is then computed using the functional model parameters θ fine-tuned on observational data in Phase 1. The variable Xi showing the greatest deterioration in log-likelihood is assumed to be the target because the observational distribution most poorly predicts that variable. If the target of the intervention is known, then this is taken as ground-truth knowledge for the purpose of subsequent steps, and no prediction needs to be done. Graphs Sampled and Scored: A new set of interventional data samples and graph configurations are now drawn from the intervention SCM and edge beliefs respectively. The log-likelihood of the data batches under the hypothesized configurations is computed, with one modification: The contribution to the total log-likelihood of a sample X coming from the target (or predicted-target) intervention variable Xi is masked. Because Xi was intervened upon (in the manner of a Pearl do-operation, soft or hard), the values one gets for that variable should be taken as givens, not as contributors to the total log-likelihood of the sample. As well, no gradient should be allowed to propagate into the variable’s learned functional parametrization θi, because it was not actually responsible for the outcome. Intervention retracted: After Phase 2, the intervention is retracted, per our modelling assumptions. 
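The intervention-prediction heuristic of Phase 2 can be sketched in a few lines: the variable whose average per-variable log-likelihood (under the Phase-1 functional parameters) deteriorates most on the interventional batch is declared the target. The array shapes and the comparison against an observational baseline batch are illustrative assumptions about how the per-variable log-likelihoods are produced.

```python
import numpy as np

def predict_intervention_target(ll_obs, ll_int):
    """Pick the variable whose average log-likelihood deteriorates most under the intervention.

    ll_obs, ll_int: (n_samples, M) per-variable log-likelihoods of observational and
    interventional batches under the Phase-1 functional parameters, averaged over
    graph configurations drawn from the current edge beliefs.
    """
    drop = ll_obs.mean(axis=0) - ll_int.mean(axis=0)
    return int(np.argmax(drop))

# Toy usage: pretend variable 2 (of M = 5) was intervened on.
rng = np.random.default_rng(0)
ll_obs = rng.normal(-1.0, 0.1, size=(500, 5))
ll_int = rng.normal(-1.0, 0.1, size=(500, 5))
ll_int[:, 2] -= 3.0
print(predict_intervention_target(ll_obs, ll_int))         # -> 2
```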
4.3.4 PHASE 3: CREDIT ASSIGNMENT TO STRUCTURAL PARAMETERS During Phase 3, the scores of the interventional data batches over various graph configurations are aggregated into a gradient for the structural parameters γ. Because a discrete Bernoulli random sampling process was used to sample graph configurations under which the log-likelihoods were computed, we require a gradient estimator to propagate gradient through to the γ structural parameters. Several alternatives exist, but we adopt for this purpose the REINFORCE-like gradient estimator gij proposed by Bengio et al. (2019): gij = ∑ k(σ(γij)− c (k) ij )LC(k),i (X)∑ k LC(k),i (X) , ∀i, j ∈ {0, . . . ,M−1} (2) where the (k) superscript indicates the values obtained for the k-th draw of C under the current edge beliefs parametrized by γ. Therefore, L(k)C,i(X) can be read as the log-likelihood of variable Xi in the data sample X under the k’th configuration, C(k), drawn from our edge beliefs. Using the estimated gradient, we then update γ with SGD, and return to Phase 1 of the continuous optimization process. The gradient estimator gij minimizes an implicit empirical risk objective with respect to γij . When the functional and structural parameters θ and γ are “sufficiently close” to their minima, the estimator gij empirically converges quickly towards that minimum γ∗ as shown in Figure 16 of Appendix A.13. Acyclic Constraint: We include a regularization term JDAG(γ) that penalizes length-2 cycles in the learned adjacency matrix σ(γ), with a tunable strength λDAG. The regularization term is JDAG(γ) =∑ i 6=j cosh(σ(γij)σ(γji)), ∀i, j ∈ {0, . . . ,M−1} and is derived from Zheng et al. (2018). The details of the derivation are in the Appendix. We explore several different values of λDAG and their effects in our experimental setup. Suppression of longer-length cycles was not found to be worthwhile for the increased computational expense. 5 EXPERIMENTAL SETUP AND RESULTS We first evaluate the proposed method on a synthetic dataset where we have control over the number of variables and causal edges in the ground-truth SCM. This allows us to analyze the performance of the proposed method under various conditions. We then evaluate the proposed method on real-world datasets from the BnLearn dataset repository. We also consider the two variations of §4.2: Recovering only part of the graph (when the rest is known), and exploiting knowledge of the intervention target. The summary of our findings is: 1) We show strong results for graph recovery for all synthetic graphs in comparisons with other baselines, measured by Hamming distance. 2) The proposed method achieves high accuracy on partial graph recovery for large, real-world graphs. 3) The proposed method’s intervention target prediction heuristic closes the gap between the known- and unknowntarget intervention scenarios. 4) The proposed method generalizes well to unseen interventions. 5) The proposed method’s time-to-solution scaling appears to be driven by the number of edges in the groundtruth graph moreso than the number of variables. 5.1 MODEL DESCRIPTION Learner model. Without loss of generality, we let θi = {W0i,B0i,W1i,B1i} define a stack of M one-hidden-layer MLPs, one for each random variable Xi. A more appropriate model, such as a CNN, can be chosen using domainspecific knowledge; the primary advantage of using MLPs is that the hypothesized DAG configurations cij can be readily used to mask the inputs of MLP i, as shown in Figure 3. 
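Before the learner-model details continue, here is a compact sketch of the Phase-3 quantities just described: the REINFORCE-like estimator g_ij of equation 2 and the length-2-cycle penalty J_DAG. Array shapes and names are illustrative; the scores L_{C^(k),i}(X) are assumed to be the per-variable log-likelihoods computed in Phase 2.

```python
import numpy as np

def structural_gradient(gamma, configs, logliks):
    """REINFORCE-like estimator g_ij of equation 2.

    gamma:   (M, M) structural parameters
    configs: (K, M, M) binary adjacency matrices c^(k) drawn from Ber(sigma(gamma))
    logliks: (K, M) per-variable scores L_{C^(k), i}(X) on the interventional batch
    """
    sig = 1.0 / (1.0 + np.exp(-gamma))                     # sigma(gamma_ij)
    weights = logliks[:, :, None]                          # broadcast L_{C^(k), i} over parent index j
    num = ((sig[None, :, :] - configs) * weights).sum(axis=0)
    den = logliks.sum(axis=0)[:, None]                     # sum_k L_{C^(k), i}
    return num / den

def dag_penalty(gamma):
    """J_DAG = sum_{i != j} cosh(sigma(gamma_ij) * sigma(gamma_ji)), penalizing length-2 cycles."""
    sig = 1.0 / (1.0 + np.exp(-gamma))
    c = np.cosh(sig * sig.T)
    return c.sum() - np.trace(c)                           # drop the i == j terms
```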
To force the structural equation fi corresponding to Xi to rely exclusively on its direct ancestor set pa(i, C) under hypothesis adjacency matrix C (See Eqn. 1), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . An example of the multi-MLP architecture with M=4 categorical variables of N=3 categories is shown in Figure 3. For more details, refer to Appendix A.4. Ground-truth model. Ground-truth SCM models are parametrized either as CPTs with parameters from BnLearn (in the case of real-world graphs), or as a second stack of MLPs similar to the learner model, with randomly-initialized functional parameters θGT and the desired adjacency matrix γGT. Interventions. In all experiments, at most one (soft) intervention is concurrently performed. To simulate a soft intervention on variable Xi, we reinitialize its ground-truth conditional distribution’s MLP parameters or CPT table randomly, while leaving the other variables untouched. For more details about the interventions, please refer to Appendix A.2. 5.2 SYNTHETIC DATASETS EXPERIMENTS We first evaluate the model’s performance on several randomlyinitialized SCMs with specific, representative graph structures. Since the number of possible DAGs grows super-exponentially with the number of variables, for M=4 up to 13 a selection of representative and edge-case DAGs are chosen. chainM and fullM (M=3-13) are the minimallyand maximally-connected M -variable DAGs, while treeM and jungleM are tree-like intermediate graphs. colliderM is the (M−1)→ 1 collider graph. The details of the setup is in Appendix A.6. Results. The model can recover most synthetic DAGs with high accuracy, as measured by Structural Hamming Distance (SHD) between learned and ground-truth DAGs. Table 1 shows our proposed method outperforming all other baseline methods, and learns all graphs perfectly for 3 to 13 variables (excepting full). For DAGs ranging from 3 to 8 variables, the AUROCs all eventually reach 1.0 (indicating perfect classification into edge/non-edge; Refer to Figure 4). For both large (M > 10) and dense DAGs (e.g. full13) the model begins encountering difficulties, as shown in Table 1 and Appendix §A.6.1. Small graphs (M < 10) are less sensitive than larger ones to our hyperparameters, notably the sparsity and acyclic regularization (§4.3.4) terms. In §A.5, we perform an analysis of these hyperparameters. 5.3 REAL-WORLD DATASETS: BNLEARN The Bayesian Network Repository is a collection of commonly-used causal Bayesian networks from the literature, suitable for Bayesian and causal learning benchmarks. We evaluate the proposed method on the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010), Asia (Lauritzen & Spiegelhalter, 1988) and Sachs (Sachs et al., 2005) datasets (M =5, 5, 8 and 11-variables respectively, maximum in-degree 3) in the BnLearn dataset repository. Results. As shown in Table 1, the proposed method perfectly recovers the DAG of Asia, while making a small number of errors (SHD=6) for Sachs (11-variables). It thus significantly outperforms all other baselines models. Figures 8 & 9 visualize what the model has learned at several stages of learning. Results for Cancer and Asia can be found in the appendices, Figure 17 and 18. 
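Returning to the learner model of Section 5.1, the input masking of Figure 3 can be sketched as follows: the one-hot inputs of variable X_i's MLP are gated by the hypothesized adjacency row c_i so that only direct parents can influence the prediction. The hidden size and activation below are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """One-hidden-layer MLP for variable X_i whose inputs are gated by the adjacency row c_i."""

    def __init__(self, n_vars, n_cats, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars * n_cats, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, n_cats),
        )

    def forward(self, x_onehot, c_row):
        # x_onehot: (batch, n_vars, n_cats) one-hot encodings of all variables
        # c_row:    (n_vars,) 0/1 hypothesized parents of X_i (c_ii kept at 0)
        masked = x_onehot * c_row.view(1, -1, 1)           # zero out non-parents of X_i
        return self.net(masked.flatten(1))                 # logits over X_i's categories
```

One such network per variable is trained by maximizing the log-likelihood (cross-entropy) of that variable given the masked inputs.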
5.4 COMPARISONS WITH OTHER METHODS As shown in Table 1, we compared the proposed SDI method to ICP ((Peters et al., 2016)), non-linear ICP ((Heinze-Deml et al., 2018b)), and (Eaton & Murphy, 2007b; Zheng et al., 2018; Yu et al., 2019) on Asia (Lauritzen & Spiegelhalter, 1988), Sachs (Sachs et al., 2005) and representative synthetic graphs. Eaton & Murphy (2007b) handles uncertain interventions and Peters et al. (2016), Heinze-Deml et al. (2018b) handles unknown interventions. However, neither attempts to predict the intervention. As shown in Table 1, we significantly outperform ICP, non-linear ICP, and the methods in (Yu et al., 2019) and (Zheng et al., 2018). Furthermore, Eaton & Murphy (2007b) runs out of memory for graphs larger than M = 10 because modelling of uncertain interventions is done using “shadow” random variables (as suggested by the authors), and thus recovering the DAG internally requires solving a d = 2M -variable problem. Their method’s extremely poor time- and space-scaling of O(d2d) makes it unusable beyond d > 20. For SDIs, we threshold our edge beliefs at σ(γ) = 0.5 to derive a graph, but the continued decrease of the cross-entropy loss (Figure 4) hints at SDI’s convergence onto the correct causal model. Please refer to Appendix §A.8 for full details and results. 5.5 GENERALIZATION TO PREVIOUSLY UNSEEN INTERVENTIONS It is often argued that machine learning approaches based purely on capturing joint distributions do not necessarily yield models that generalize to unseen experiments, since they do not explicitly model changes through interventions. By way of contrast, causal models use the concept of interventions to explicitly model changing environments and thus hold the promise of robustness under distributional shifts (Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2017). To test the robustness of causal modelling to previously unseen interventions (new values for an intervened variable), we evaluate a well-trained causal model against a variant, non-causal model trained with cij = 1, i 6= j. An intervention is performed on the ground-truth SCM, fresh interventional data is drawn from it, and the models, with knowledge of the intervention target, are asked to predict the other variables given their parents. The average log-likelihoods of the data under both models are computed and contrasted. The intervention variable’s contribution to the loglikelihood is masked. For all 3-variable graphs (chain3, fork3, collider3, confounder3), the causal model attributes higher log-likelihood to the intervention distribution’s samples than the non-causal variant, thereby demonstrating causal models’ superior generalization ability in transfer tasks. Table 2 collects these results. 5.6 VARIANT: PREDICTING INTERVENTIONS In Phase 2 (§4.3.3), we use a simple heuristic to predict the intervention target variable. Experiments show that this heuristic functions well in practice, yielding correct predictions far more often than by chance alone (Table 3). Guessing the intervention variable randomly, or not guessing it at all, leads to a significant drop in the model performance, even for 3-variable graphs (Fig. 11 Left). Training SDI with intervention prediction closely tracks training with leaked knowledge of the ground-truth intervention on larger, 7-variable graphs (Fig. 11 Right). 5.7 VARIANT: PARTIAL GRAPH RECOVERY Instead of learning causal structures de novo, we may have partial information about the ground-truth SCM and may only need to fill in missing information (§4.2). 
An example is protein structure discovery in biology, where some causal relationships have been definitively established and others remain open hypotheses. This is an easier task compared to full graph recovery, since the model only has to search for the missing edges.
Table 4: Partial Graph Recovery on Alarm (Beinlich et al., 1989) and Barley (Kristensen & Rasmussen, 2002). The model is asked to predict 50 edges in Barley and 40 edges in Alarm. The accuracy is measured in Structural Hamming Distance (SHD). SDI achieved over 90% accuracy on both graphs.
Graph                 Alarm   Barley
Number of variables   37      48
Total Edges           46      84
Edges to recover      40      50
Recovered Edges       37      45
Errors (in SHD)       3       5
We evaluate the proposed method on Barley (Kristensen & Rasmussen, 2002) (M = 48) and Alarm (Beinlich et al., 1989) (M = 37) from the BnLearn repository. The model is asked to predict 50 edges from Barley and 40 edges from Alarm. The model reached ≥ 90% accuracy on both datasets, as shown in Table 4.
5.8 ABLATION AND ANALYSIS
As shown in Figure 12, larger graphs (such as M > 6) and denser graphs (such as full8) are progressively more difficult to learn. For denser graphs, the learned models have higher sample complexity, higher variance and slightly worse results. Refer to Appendix §A.9 for complete results on all graphs.
Hyperparameters. Hyperparameters for all experiments were kept identical unless otherwise stated. We study the effect of the DAG and sparsity penalties in the following paragraphs. For more details, please refer to Appendix §A.5.
Importance of regularization. Valid configurations C for a causal model are expected to be a) sparse and b) acyclic. To promote such solutions, we use DAG and sparsity regularization with tunable hyperparameters. We set the DAG penalty to 0.5 and the sparsity penalty to 0.1. We run ablation studies over different values of the regularizers and study their effect. We find that smaller graphs are less sensitive than larger graphs to the values of the regularizers. For details, refer to Appendix §A.12.
Importance of dropout. To train the functional parameters on the observational distribution, adjacency matrices must be sampled. In our experiments we "drop out" each edge (with a probability of σ(γ)) during functional parameter training of the conditional distributions of the SCM. Please refer to Appendix §A.14 for a more detailed analysis.
6 CONCLUSION
In this work, we introduced an experimentally successful method (SDI) for causal structure discovery using continuous optimization, combining information from both observational and interventional data. We show in experiments that it can recover the true causal structure, that it generalizes well to unseen interventions, that it compares very well against state-of-the-art causal discovery methods on real-world datasets, and that it scales even better on problems where only part of the graph is known.
Appendix Table of Contents A Annexes 13 A.1 Training Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.3 Experimental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.4 Model setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.5 Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.6 Synthetic data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.7 BnLearn data repository . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . 16 A.8 Comparisons to other methods . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.9 Sparsity of Ground-Truth Graph . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.10 Predicting interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.11 Sample complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.12 Effect of regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.13 Near-Optimum Performance of Gradient Estimator . . . . . . . . . . . . . . . . 20 A.14 Importance of dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 A ANNEXES A.1 TRAINING ALGORITHM Algorithm 1 shows the pseudocode of the method described in §4. Typical values for the loop trip counts are found in §A.11. A.2 PRELIMINARIES Interventions. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph intervention data is needed (Eberhardt et al., 2012). Several types of common interventions may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth causal model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions for several reasons: First, they include hard interventions as a limiting case and hence are more general. Second, in many real-world scenarios, it is more difficult to perform a hard intervention compared to a soft one. We also deal with a special case of uncertain interventions, where the variable selected for intervention is random and unknown. We call these unidentified or unknown interventions. Intervention setup. For our experiments, the groundtruth models of the synthetic datasets are modeled by neural networks as described in section A.6. Each neural network models the relationship of the causal parents and a variable. We perform our intervention by first randomly selecting which variable to intervene on, then soft-intervening on it. The selected variable is sampled from a uniform distribution. The soft intervention is a reinitialization of its neural network’s parameters. Causal sufficiency. The inability to distinguish which causal graph, within a Markov equivalence class, is the correct one in the purely-observational setting is called the identifiability problem. In our setting, all variables are observed (there are no latent confounders) and all interventions are random and independent. Hence, within our setting, if the interventions are known, then the true causal Algorithm 1 Training Algorithm 1: procedure TRAINING(SCM Ground-Truth Entailed Distribution D, with M nodes and N categories) 2: Let i an index from 0 to M − 1 3: for I iterations, or until convergence, do 4: if I % reinitialization_period == 0 then 5: D ← reinitialize(D) 6: for F functional parameter training steps do . Phase 1 7: X ∼ D 8: C ∼ Ber(σ(γ)) 9: L = − logP (X|C ; θ) 10: θt+1 ← Adam(θt,∇θL) 11: for Q interventions do . Phase 2 12: I_N← randint(0, M − 1) . Uniform selection of target 13: Dint :=D with intervention on node I_N . Apply intervention 14: if predicting intervention then . 
Phase 2 Prediction 15: Li ← 0 ∀i 16: for NP prediction steps do 17: X ∼ Dint 18: for CP configurations do 19: C ∼ Ber(σ(γ)) 20: Li ← Li − logPi(X|Ci; θslow) ∀i 21: I_N← argmax(Li) 22: gammagrads, logregrets = [], [] . Phase 2 Scoring 23: for NS scoring steps do 24: X ∼ Dint 25: gammagrad, logregret = 0, 0 26: for CS configurations do 27: C ∼ Ber(σ(γ)) 28: Li = − logPi(X|Ci; θslow) ∀i 29: gammagrad += σ(γ)− C . Collect σ(γ)− C for Equation 2 30: logregret += ∑ i6=I_N Li . Collect LC(k),i (X) for Equation 2 31: gammagrads.append(gammagrad) 32: logregrets.append(logregret) . Phase 3 33: gij = ∑ k(σ(γij)− c (k) ij )LC (k) ,i (X)∑ k LC (k) ,i (X) . Gradient Estimator, Equation 2 34: g ← g +∇γ (λsparse Lsparse(γ) + λDAG LDAG(γ)) . Regularizers 35: γt+1 ← Adam(γt, g) graph is always identifiable in principle (Eberhardt et al., 2012; Heinze-Deml et al., 2018a). We also consider here situations where a single variable is randomly selected and intervened upon with a soft or imprecise intervention, its identity is unknown and must be inferred. In this case, there is no theoretical guarantee that the causal graph is identifiable. However, there is existing work Peters et al. (2016) that handles this scenario and the proposed method is also proven to work empirically. Faithfulness. It is possible for causally-related variables to be probabilistically independent purely by happenstance, such as when causal effects along multiple paths cancel out. This is called unfaithfulness. We assume that faithfulness holds, since the γ gradient estimate is extracted from shifts in probability distributions. However, because of the “soft” nature of our interventions and their infinite variety, it would be exceedingly unlikely for cancellation-related unfaithfulness to persist throughout the causal-learning procedure. A.3 EXPERIMENTAL SETUP For all datasets, the weight parameters for the learned model is initialized randomly. In order to not bias the structural parameters, all σ(γ) are initialized to 0.5 in the beginning of training. Details of hyperparameters of the learner model are described in Section A.5. The experimental setup for the groundtruth model for the synthetic data can be found in Section A.6 and the details for the real world data are described in Section A.7. A.4 MODEL SETUP As discussed in section 4, we model the M variables in the graph using M independent MLPs, each possesses an input layer of M × N neurons (for M one-hot vectors of length N each), a single hidden layer chosen arbitrarily to have max(4M, 4N) neurons with a LeakyReLU activation of slope 0.1, and a linear output layer of N neurons representing the unnormalized log-probabilities of each category (a softmax then recovers the conditional probabilities from these logits). To force fi to rely exclusively on the direct ancestor set pa(i, C) under adjacency matrix C (See Eqn. 2), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . The functional parameters of the MLP are the set θ = {W0ihjn,B0ih,W1inh,B1in}.An example of the multi-MLP architecture with M=3 categorical variables of N=2 categories is shown in Figure 3. A.5 HYPERPARAMETERS Learner model. All experiments on the synthetic graphs of size 3-8 use the same hyperparameters. Both the functional and structural parameters are optimized using the Adam optimizer Kingma & Ba (2014). We use a learning rate of 5e− 2 with alpha of 0.9 for the functional parameters, and we use a learning rate of 5e− 3 with alpha of 0.1 for the structural parameters. 
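Before continuing with the remaining hyperparameters, the masked multi-MLP of §A.4 above can be made concrete with a short PyTorch sketch. Class and argument names (MaskedConditional, one_hot_x, c_row) are our own illustrative choices, not identifiers from the authors' code; the layer sizes and activation follow §A.4.

import torch
import torch.nn as nn

class MaskedConditional(nn.Module):
    """One learner MLP f_i: predicts logits for X_i from the masked one-hot
    encodings of all M variables (a sketch of the model in Appendix A.4)."""
    def __init__(self, M, N):
        super().__init__()
        H = max(4 * M, 4 * N)                       # hidden width, as in A.4
        self.net = nn.Sequential(
            nn.Linear(M * N, H),
            nn.LeakyReLU(0.1),
            nn.Linear(H, N),                        # unnormalized log-probabilities
        )

    def forward(self, one_hot_x, c_row):
        # one_hot_x: (batch, M, N) one-hot inputs; c_row: (M,) sampled parent mask c_i.
        masked = one_hot_x * c_row.view(1, -1, 1)   # zero out non-parents of X_i
        return self.net(masked.flatten(start_dim=1))

# The full learner is M such networks, one per variable:
M, N = 4, 3
learner = nn.ModuleList(MaskedConditional(M, N) for _ in range(M))

Sampling C ~ Ber(σ(γ)) and feeding row C[i] to network i then corresponds to the input-dropout view of Phase 1 training described in §4.3.2.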
We perform 5 runs of each experiment with random seeds 1− 5 and error bars are plotted for various graphs from size 3 to 8 in Figure 4. We use a batch size of 256. The L1 norm regularizer is set to 0.1 and the DAG regularizer is set to 0.5 for all experiments. For each γ update step, we sample 25 structural configurations from the current γ. In all experiments, we use 100 batches from the interventional distribution to predict the intervened node. A.6 SYNTHETIC DATA Synthetic datasets. The synthetic datasets in the paper are modeled by neural networks. All neural networks are 2 layered feed forward neural networks (MLPs) with Leaky ReLU activations between layers. The parameters of the neural network are initialized orthogonally within the range of (−2.5, 2.5). This range was selected such that they output a non-trivial distribution. The biases are initialized uniformly between (−1.1, 1.1). SCM with n variables are modeled by n feedforward neural networks (MLPs) as described in §5.1. We assume an acyclic causal graph so that we may easily sample from them. Hence, given any pair of random variables A and B, either A −→ B, B −→ A or A and B are independent. The MLP representing the ground-truth SCM has its weights θ initialized use orthogonal initialization with gain 2.5 and the biases are initialized using a uniform initialization between−1.1 and 1.1, which was empirically found to yield "interesting" yet learnable random SCMs. We study a variety of SCMs with different ground-truth edge structures γ. Our selection of synthetic graphs explores various extremes in the space of DAGs, stress-testing SDI. The chain graphs are the sparsest connected graphs possible, and are relatively easy to learn. The bidiag graphs are extensions of chain where there are 2-hops as well as single hops between nodes, doubling the number of edges and creating a meshed chain of forks and colliders. The jungle graphs are binary-tree-like graphs, but with each node connected directly to its grandparent in the tree as well. Half the nodes in a jungle graph are leaves, and the out-degree is up to 6. The collider graphs deliberately collide independent M − 1 ancestors into the last node; They stress maximum in-degree. Lastly, the full graphs are the maximally dense DAGs. All nodes are direct parents of all nodes below them in the topological order. The maximum in- and out-degree are both M − 1. These graphs are depicted in Figure 6. A.6.1 SYNTHETIC DATA RESULTS The model can recover correctly all synthetic graphs with 10 variables or less, as shown in Figure 10 and Table 1. For graphs larger than 10 variables, the model found it more challenging to recover the denser graphs (e.g. fullM), as shown in Table 1. Plots of the training curves showing average cross entropy (CE) and Area-Under-Curve(AUC/AUCROC) for edge probabilities of the learned graph against the ground-truth graph for synthetic SCMs with 3-13 variables are available in Figure 10. A.7 BNLEARN DATA REPOSITORY The repo contains many datasets with various sizes and structures modeling different variables. We evaluate the proposed method on 3 of the datasets in the repo, namely the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010) and Asia (Lauritzen & Spiegelhalter, 1988) datasets. The ground-truth model structure for the Cancer (Korb & Nicholson, 2010) and Earthquake (Korb & Nicholson, 2010) datasets are shown in Figure 7. 
Note that even though the structure for the two datasets seems to be the same, the conditional probability tables (CPTs) for these datasets are very different and hence result in different structural causal models (SCMs) for each.
A.8 COMPARISONS TO OTHER METHODS
As described in section 5.4, we compare to 5 other methods. The full comparison between SDI and the other methods on various graphs can be found in Table 1. One of these methods, DAG-GNN Yu et al. (2019), outputs 3 graphs based on different criteria: best mean square error (MSE), best negative log-likelihood (NLL) and best evidence lower bound (ELBO). We report the performance of all outputs of DAG-GNN Yu et al. (2019) in Table 6, and the best one is selected for Table 1.
A.9 SPARSITY OF GROUND-TRUTH GRAPH
We evaluated the performance of SDI on graphs of various sizes and sparsities to better understand the behaviour of the model. We evaluated the proposed model on 4 representative types of graphs in increasing order of density: the chain, jungle, bidiag and full graphs. As shown in Figure 12, for graphs of size 5 or smaller, there is almost no difference in the final results in terms of variance and sample complexity. However, as the graphs get larger (more than 6 variables), the denser graphs (full) become progressively more difficult to learn than the sparser graphs (chain, jungle and bidiag). The models learned for denser graphs have higher sample complexity, higher variance and slightly worse results.
A.10 PREDICTING INTERVENTIONS
In Phase 2, we score graph configurations based on how well they fit the interventional data. We find that it is necessary to avoid disturbing the learned parameters of the intervened variable, and to ignore its contribution to the total negative log-likelihood of the sample. Intuitively, this is because, having been intervened upon, that variable should be taken as a given. It should especially not be interpreted as a poorly-learned variable requiring a tuning of its functional parameters, because those functional parameters were not responsible for the value of that variable; the extrinsic intervention was. Since an intervened variable is likely to be unusually poorly predicted, we heuristically determine that the most poorly predicted variable is the intervention variable. We then zero out its contribution to the log-likelihood of the sample and block gradients into its functional parameters. Figure 11 illustrates the necessity of this process. When using the prediction heuristic, the training curve closely tracks training with ground-truth knowledge of the identity of the intervention. If no prediction is made, or a random prediction is made, training proceeds much more slowly, or fails entirely.
A.11 SAMPLE COMPLEXITY
Our method is heavily reliant on sampling of configurations and data in Phases 1 and 2. We present here the breakdown of the sample complexity. Let
• I be the number of iterations of the method (typical: 500-2000),
• B the number of samples per batch (typical: 256),
• F the number of functional parameter training iterations in Phase 1 (typical: 10000),
• Q the number of interventions performed in Phase 2 (typical: 100),
• NP the number of data batches for prediction (typical: 100),
• CP the number of graph configurations drawn per prediction data batch (typical: 10),
• NS the number of data batches for scoring (typical: 10),
• CS the number of graph configurations drawn per scoring data batch (typical: 20-30).
Then the total number of interventions performed, and of configurations and samples drawn, over an entire run are:
Interventions = I · Q  (= number of γ updates)    (3)
Samples = I · (F + Q(NP + NS)) · B    (4)
Configurations = I · (F + Q(CP·NP + CS·NS))    (5)
where, in Equations 4 and 5, the F term counts Phase 1 batches and the Q(·) term counts Phase 2 batches. Because of the multiplicative effect of these factors, the number of data samples required can quickly spiral out of control. For typical values, as many as 500 × 10000 × 256 = 1.28e9 observational and 500 × 100 × (100 + 10) × 256 = 1.408e9 interventional samples are required. To alleviate this problem slightly, we limit the number of samples generated for each intervention; this limit is usually 500-2000.
A.12 EFFECT OF REGULARIZATION
Importance of sparsity regularizer. We use an L1 regularizer on the structure parameters γ to encourage a sparse representation of edges in the causal graph. To better understand its effect, we conducted ablation studies. The regularizer has a small effect on the rate of convergence: the model converges faster with the regularizer, as shown in Figure 13. However, it does not seem to affect the final value the model converges to, as shown in Table 7.
Importance of DAG regularizer. We use an acyclic regularizer to discourage length-2 cycles in the learned model. We found that for small models (≤ 5 variables), the acyclic regularizer helps with faster convergence, without significantly improving the final cross-entropy. This is illustrated for the 3-variable graphs in Figure 14. However, for graphs larger than 5 variables, the acyclic regularizer starts playing an important role in encouraging the model to learn the correct structure. This is shown in the ablation study in Table 7.
A.13 NEAR-OPTIMUM PERFORMANCE OF GRADIENT ESTIMATOR
The gradient estimator gij that we use to minimize the empirical risk w.r.t. the structural parameters γ, defined in Eq. 2, is adapted from Bengio et al. (2019). We verify that the estimator samples the correct gradient by an experiment that tests convergence near the optimum. To do this, we pre-initialize the structural and functional parameters near the global minimum, and verify that γ converges. Specifically, the ground-truth functional parameters θ are copied and perturbed by small Gaussian noise, while the ground-truth structural parameters γ are copied, but the confidences in an edge or non-edge are set to 88% and 12% rather than 100% and 0%. The experiment is then expected to quickly converge to the global minimum. As shown in Figure 16, the gradient estimator correctly enables stochastic gradient descent towards the minimum, for the chain and jungle graphs of size 15, 20 and 25. The average cross-entropy rapidly approaches its floor of 0.01, a consequence of our clamping of all γij to the range ±5 (equivalently, clamping σ(γij) to the range [0.0067, 0.9933]).
A.14 IMPORTANCE OF DROPOUT
To train the functional parameters on an observational distribution, one needs to sample adjacency matrices. One may be tempted to make these the complete directed graph (all ones except for a zero diagonal), to give the MLPs maximum freedom to learn any potential causal relations themselves. We demonstrate that functional parameter training cannot be carried out this way, and that it is necessary to "drop out" each edge (with probability given by the current edge belief σ(γ) in our experiments) during pretraining of the conditional distributions of the SCM.
We attempt to recover the previously-recoverable graphs chain3, fork3 and confounder3 without dropout, but fail to do so, as shown in Figure 15.
Figure 17: Cross-entropy for edge probability between learned and ground-truth SCM for Cancer at varying temperatures.
Figure 18: Cross-entropy for edge probability between learned and ground-truth SCM. Left: the Earthquake dataset with 6 variables. Right: the Asia dataset with 8 variables.
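To complement §A.13, the gradient estimator of Eq. 2 can be written in a few lines of NumPy. This is a sketch under assumed array names and shapes (configs, logregrets); the intervened variable's contribution is assumed to have already been masked out of the per-variable scores, as described in §4.3.3.

import numpy as np

def gamma_gradient(sigma_gamma, configs, logregrets):
    """REINFORCE-style estimator of Eq. 2 (a sketch; names are assumptions).

    sigma_gamma : (M, M) current edge beliefs sigma(gamma_ij).
    configs     : (K, M, M) sampled adjacency configurations C^(k).
    logregrets  : (K, M) per-configuration, per-variable scores L_{C^(k),i}(X),
                  accumulated over the interventional scoring batches.
    Returns g_ij, the gradient fed into the optimizer for gamma.
    """
    # numerator: sum_k (sigma(gamma_ij) - c_ij^(k)) * L_{C^(k),i}(X)
    num = np.einsum('kij,ki->ij', sigma_gamma[None] - configs, logregrets)
    # denominator: sum_k L_{C^(k),i}(X), broadcast across columns j
    den = logregrets.sum(axis=0)[:, None]
    return num / den

The resulting g would then be combined with the gradients of the sparsity and DAG regularizers and passed to Adam, as in lines 33-35 of Algorithm 1.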
1. What is the focus of the paper regarding causal graph learning? 2. What are the strengths and weaknesses of the proposed algorithm compared to other methods like JCI? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are some questions or concerns regarding the paper that the reviewer has?
Review
Review The authors propose a 3-phase heuristic algorithm to learn a causal graph from interventional data using continuous optimization. Unfortunately, the paper is hard to follow. Specifically, the exact procedure should be clarified by the authors. If I understand correctly, first they fit to observational data by searching over the space of graphs using a smooth representation for the adjacency matrices. To fit to the interventional data, first, the interventional target is estimated by a heuristic approach and the contribution of these variables to the likelihood is ignored since they are set by the experiment. (there are random graph sampling stages in between that are not clear to me, please elaborate on this). This interventional scoring is done for all interventional data and is turned into a single gradient update. The paper is hard to parse. My main concern is that, unlike the existing work which the authors compare with in the experiments, the proposed method is not a systematic approach and accordingly it is hard to reason about its use even though it performs well in the experiments. Especially given that some choices made in the algorithm design are not properly justified. Indeed, even with interventions, we do not expect to recover the full structure but only a subset of the edges correct. Comparisons with the other methods should be expanded into a section where these methods are detailed to showcase the methodological differences. The following are my detailed feedback. "A natural application of Bayesian networks is to describe cause-effect relationships between variables." Please distinguish Bayesian networks from the causal networks. Former do not carry causal meaning. A good reference to cite in addition to Peters et al. for SCMs is Pearl's 2009 Causality book. "Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case." Please modify this statement as it sounds too vague. The list of contributions require knowledge of the latter sections. Please make it self contained if possible. "can't"->"can not" SCM definition is not only structural equations but also talks about interventional distributions. Please see Pearl 2009. The last line in page 2 overlaps with the page number. Some recent related work is missing: Mooij et al. "Joint Causal Inference from Multiple Contexts" JMLR'20. Kocaoglu et al. "Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions" NeurIPS'19. Brouillard et al. "Differentiable Causal Discovery from Interventional Data" arXiv'20. Mooij et al. is cited but please add it in Section 3 among constraint-based inverventional learning frameworks as well. Brouillard et al. is too recent, hence its omittance is understandable. However, it attacks the same problem considered here. I believe including it as independent discovery would help connect literature together nicely. I am not going to take this work into consideration in my evaluation since it is uploaded on arXiv only very recently. "the methods only uses"->"the methods only use" citing Murphy "This is different from our setting where the intervention is unknown to start with and is assumed to arise from other agents and the environment." Murphy can handle unknown interventions as well. Moreover Mooij et al. handles unknown interventions too. 
"The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set Xpa(i,C), with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG." Can you clarify this sentence? "During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure." Why do you draw synthetic data? Likelihood is typically maximized using real data at hand. It's hard to follow the exact procedure here. Intervention targets are predicted using a heuristic. Why not use the existing methods? I believe the computational aspect is seen as a problem but JCI by Mooij et al. should be fast enough. Can you convert Section 3 into a pseudo-code for the algorithm description? I believe many details are skipped and some key points of the approach is not clear by the brief text in each subsection. "should be taken as givens"->"should be taken as given" In the experiments, please compare with Mooij et al. Their method should be as fast as FCI and it would be interesting to see how the results compare. ==== After the Response by the Authors ==== Thank you for the detailed reply. For the clarifications the authors made to the algorithm description, I will increase my score. The authors state "If the scientific community had waited for deep learning to prove that it could discover the true conditional distribution of outputs given inputs, we would not have had the progress we achieved in the last two decades in AI. We believe that it is important to take into consideration all sources of evidence about the usefulness of a method, and experimental evidence is at the heart of the success of the scientific method and should not be discarded because of an established cultural habit of relying on proofs of identifiability." Note that the objections I (R1), and I believe also R3 and R4, have are not about theoretical vs. experimental research and that the paper lacks proofs or identifiability results. It is perfectly fine to not have a theoretical understanding of a proposed algorithm. But the authors should be able to justify the choices they made in the algorithm design, and especially in light of the prior work. The main justification given by the authors both in the paper and in their rebuttal is that the algorithm performs well. I believe the paper needs an iteration to address these issues. The following is my detailed feedback in addition to my original review in light of the authors' response. I hope this will help the authors in improving their paper. On fully learning the causal graph: I suggest the authors examine and try to identify, in small graphs, what aspect of their method allows it to perform better than the existing methods such as JCI or allows it to go beyond the existing equivalence classes. Without such justification, I do not think the paper in its current form will influence future research. Remark on interventions having variety: This is not sufficient for exact recovery. Imagine intervening on the same node with different mechanisms over and over. This does not allow recovery outside of the local structure around the intervened node for most causal graphs. This also relates to the remark above. Full identifiability is always related to having variety in the intervention targets and not just in interventional mechanisms. 
This is why some of the datasets where the exact graph is recovered by the algorithm need a detailed investigation. About synthetic experiments: One explanation for full structure recovery in the synthetic experiments could be the following: The authors randomly pick one target variable to intervene on. My guess is that this randomness in the experiment design is sufficient to have diverse enough target sets for the equivalence class to shrink to a single graph. Can you verify/check this? How many interventions do you use in the synthetic experiments? How many samples are collected per intervention? Unless I am missing something, these are not provided until page 19 but then it is not clear if these numbers are kept identical throughout the experiments. x-axis is set to be # of episodes or # of steps in most experiments whereas # of samples would be more informative. About JCI comparison: I did not completely understand why the authors could not run JCI in synthetic data. They say it is due to its complexity. But JCI's complexity comes from the graph degree and not from the number of samples for a small enough state space. It would be very interesting to compare what JCI learns relative to the proposed method in these synthetic experiments. This should test my hypothesis above that the random intervention target is providing enough diversity to reduce the equivalence class to one graph, which should be detected by JCI. Inferring a Markov equivalence class from the adjacency matrix by early stopping is definitely an interesting idea and I would encourage the authors to further pursue and formalize this direction. Sample complexity: The authors mention that their method is "sample-hungry". Given that the method presents significant divergence from the standard literature on causal inference that relies on conditional independence tests, which are known to require many samples, it is especially important to clearly present the number of samples used by the method. The main paper does not present the number of samples used in the synthetic experiments. These should be made explicit. Finally, the title and abstract still state "dependency structure discovery" and learning "Bayesian networks" whereas the authors attempt to learn causal graphs from interventions. I suggest an update to the narrative to clarify the objective of the paper.
ICLR
Title Dependency Structure Discovery from Interventions Abstract Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is applicable even in the challenging and realistic case that the identity of the intervened upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository. 1 INTRODUCTION Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal structure learning. Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks (a Markov equivalence class), but only one is the most robust to distributional shifts: The one with the correct graph. A more powerful tool than BNs is thus needed to model causal relationships. Structural Causal Models (SCMs) are that tool. An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them (Peters et al., 2017, §6.2). Each assignment is a function of only the direct causes of a variable, plus an independent noise source. An SCM entails precisely one (observational) data distribution. Interventions on an SCM’s assignments, such as setting a random variable to a fixed value (a hard intervention), entail new interventional data distributions (Peters et al., 2017, §6.3). SCMs can be used to answer higher-order questions of cause-and-effect, up the ladder of causation (Pearl & Mackenzie, 2018). Causal structure learning using SCMs has been attempted in several disciplines including biology (Sachs et al., 2005; Hill et al., 2016), weather forecasting (Abramson et al., 1996) and medicine (Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010). Causal structure is most frequently learned from data drawn from observational distributions. Structure learning methods generally cannot do more than identify the causal graph up to a Markov equivalence class (Spirtes et al., 2000). In order to fully identify the true causal graph, a method must either make restrictive assumptions about the underlying data-generating process, such as linear but non-Gaussian data (Shimizu et al., 2006), or must access enough data from outside the observational distribution (i.e., from interventions). 
Under certain assumptions about the number, diversity, and nature of the interventions, the true underlying causal graph is always identifiable, given that the method knows the intervention performed (Heckerman et al., 1995). In much of the prior work on causal model induction it is assumed that there is an experimenter and this experimenter performs interventions. However, in the real world, interventions can also be performed by other agents, which could lead to unknown interventions (interventions with unknown target variables). A few works have attempted to learn structures from unknown-intervention data (Eaton & Murphy, 2007a; Squires et al., 2020; Huang et al., 2020). A notable such work, (Mooij et al., 2016), has been extended in (Kocaoglu et al., 2019; Jaber et al., 2020). Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case. Another common setting is when the graph structure is partially provided, but must be completed. An example is protein structure learning in biology, where we may have definitive knowledge of some causal edges in the protein-protein interactome, but the remaining causal edges must be discovered. We will call this setting “partial graph completion”. This is an easier task compared to learning the entire graph, since it limits the number of edges that have to be learned. Recently, a flurry of work on structure learning using continuous optimization methods has appeared (Zheng et al., 2018; Yu et al., 2019). These methods operate on observational data and are competitive with other methods. Because of the theoretical limitations on identification from purely observational data cited above, it would be interesting to extend these methods to interventional data. However, it is not straightforward to apply continuous optimization methods to structure learning from interventional data. Our key contributions are to answer the following questions experimentally: 1. Can the proposed model recover true causal structure? Yes, see Figure §4. 2. How does the proposed model compare against state of the art causal methods on real-world datasets? Favourably; see §5.4 and Table §1. 3. Does a proposed model generalize well to unseen interventions? Yes, see §5.5. 4. How does the proposed model perform on partial graph recovery? It scales to∼ 50 variables while the other baselines can’t. see §5.7. 2 PRELIMINARIES Causal modeling. A Structural Causal Model (SCM) (Peters et al., 2017) over a finite number M of random variables Xi is a set of structural assignments Xi := fi(Xpa(i,C), Ni) , ∀i ∈ {0, . . . ,M − 1} (1) Identifiability. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph structure, interventional data is needed (Eberhardt et al., 2012). Interventions. There are several types of common interventions which may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. 
Here we make use of soft intervention because they include hard intervention as a limiting case and hence are more general. Structure discovery using continuous optimization. Structure discovery is a super-exponential search problem that searches though all possible directed acyclic graphs (DAGs). Previous continuousoptimization structure learning works (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates “DAG-ness” as an additional penalty to be optimized. If there are M such variables, the strategy of considering all the possible structural graphs as separate hypotheses is not feasible because it would require maintaining O(2M 2 ) models of the data.2 3 RELATED WORK The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem (Pearl, 1995; 2009; Spirtes et al., 2000). Different approaches have been studied: score-based, constraint-based, asymmetry-based and continuous optimization methods. Score-based methods search through the space of all possible directed acyclic graphs (DAGs) representing the causal structure based on some form of scoring function for network structures (Heckerman et al., 1995; Chickering, 2002; Tsamardinos et al., 2006; Hauser & Bühlmann, 2012; Goudet et al., 2017; Cooper & Yoo, 1999; Zhu & Chen, 2019). Constraint-based methods (Spirtes et al., 2000; Sun et al., 2007; Zhang et al., 2012; Monti et al., 2019; Zhu & Chen, 2019) infer the DAG by analyzing conditional independences in the data. Eaton & Murphy (2007c) use dynamic programming techniques to accelerate Markov Chain Monte Carlo (MCMC) sampling in a Bayesian approach to structure learning for discrete variable DAGs. Peters et al. (2016); Ghassami et al. (2017); Rojas-Carulla et al. (2018) exploit invariance across environments to infer causal structure, which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs. Recently, (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) framed the structure search as a continuous optimization problem, however, the methods only uses observational data and are non-trivial to extend to interventional data. In our paper, we present a method that uses continuous optimization methods that works on both observational and interventional data. For interventional data, it is often assumed that the models have access to full intervention information, which is rare in the real world. Rothenhäusler et al. (2015) have investigated the case of additive shift interventions, while Eaton & Murphy (2007b) have examined the situation where the targets of experimental interventions are imperfect or uncertain. This is different from our setting where the intervention is unknown to start with and is assumed to arise from other agents and the environment. Learning based methods have been proposed (Guyon, 2013; 2014; Lopez-Paz et al., 2015) and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data (Kalainathan et al., 2018; Goudet et al., 2018). Neural network methods equipped with learned masks, such as (Ivanov et al., 2018; Li et al., 2019; Yoon et al., 2018; Douglas et al., 2017), exist in the literature, but only a few (Kalainathan et al., 2018) have been adapted to causal inference. 
This last work is, however, tailored for causal inference on continuous variables and from observations only. Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network (GAN) Goodfellow et al. (2014) framework. 4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD Scope of Applicability and Objective. The proposed method, like any structure learning algorithm, assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables, which can be queried for samples. The SCM supports applying and retracting known or unknown interventions. The method can support infinite- or finite-data as well as infiniteor finite-intervention regimes. The objective is, then, to learn the SCM’s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM. 4.1 PROBLEM SETTING AND ASSUMPTIONS In this paper, we restrict the problem setting to specific, but still broad classes of SCMs and interventions. In particular, we assume that: Data is discrete-valued. The SCM’s random variables are all categorical. Causal sufficiency. For every data sample, the value of all variables are available; There are no latent confounders. Interventions are localized. They affect only a single variable (but which one may not be known). Interventions are soft. An intervention does not necessarily pin its target random variable to a fixed value (though it may, as a special case). It changes the relationship of a variable with its parents. Interventions do not stack. Before a new intervention is made, the previous one is fully retracted. This stops the SCM from wandering away from its initial, observational configuration after a long series of interventions. No control over interventions. The structure learning algorithm has control neither of the target, nor the nature of the next intervention on the SCM. For a detailed description of the interventions, refer to §A.2. 4.2 VARIATIONS AND PRIOR KNOWLEDGE In the problem setting above, the ground-truth SCM is completely opaque to us. However, we consider two interesting relaxations of this formulation: Complete or partial graph recovery. We may already know the existence of certain cause-effect edges and non-edges within the ground-truth SCM. If such prior information is available, it turns a complete graph recovery problem into one of partial graph recovery. Larger SCMs can be tackled if only parts of the graph need to be recovered. Known or unknown interventions: The interventions can either be known or unknown to the learned model. We demonstrate that the proposed method can naturally incorporate this prior information to improve its performance. 4.3 METHOD OVERVIEW The proposed method is a score-based, iterative, continuousoptimization method consisting of three phases that flow into one other (See Figure 2). During the three-phase procedure, a structural representation of a DAG and a functional representation of a set of independent causal mechanisms are trained jointly until convergence. Because the structural and functional parameters are not independent and do influence each other, we train them in alternating phases, a form of block coordinate descent optimization. 4.3.1 PARAMETRIZATION We distinguish two sets of parameters: The structural parameters γ and the functional parameters θ. 
Given a graph of M variables, we parametrize the structure γ as a matrix RM×M such that σ(γij) is our belief in random variable Xj being a direct cause of Xi, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. The matrix σ(γ) is thus a soft adjacency matrix. The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set Xpa(i,C), with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG. 4.3.2 PHASE 1: GRAPH FITTING ON OBSERVATIONAL DATA During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure. We draw graph configurations Cij ∼ Ber(σ(γij)) and batches of observational data from the unintervened ground-truth SCM, then maximize the log-likelihood of the batch under that configuration using SGD. The use of graph configurations sampling from Bernoulli distributions is analogous to dropout on the inputs of the functional models (in our implementation, MLPs), giving us an ensemble of neural networks that can model the observational data. 4.3.3 PHASE 2: GRAPH SCORING ON INTERVENTIONAL DATA During Phase 2, a number of graph configurations are sampled from the current edge beliefs parametrized by γ, and scored on data samples drawn from the intervention SCM. Intervention applied: At the beginning of Phase 2, an intervention is applied to the ground-truth SCM. This intervention is not under the control of the method. In our implementation, and unbeknownst to the model, the target variable is chosen uniformly randomly from all M variables throughout the optimization process. Intervention predicted: If the target of the intervention is not known, it is predicted using a simple heuristic. A small number of interventional data samples are drawn from the SCM and more graphs are sampled from our current edge beliefs. The average log-likelihood of each individual variable Xi across the samples is then computed using the functional model parameters θ fine-tuned on observational data in Phase 1. The variable Xi showing the greatest deterioration in log-likelihood is assumed to be the target because the observational distribution most poorly predicts that variable. If the target of the intervention is known, then this is taken as ground-truth knowledge for the purpose of subsequent steps, and no prediction needs to be done. Graphs Sampled and Scored: A new set of interventional data samples and graph configurations are now drawn from the intervention SCM and edge beliefs respectively. The log-likelihood of the data batches under the hypothesized configurations is computed, with one modification: The contribution to the total log-likelihood of a sample X coming from the target (or predicted-target) intervention variable Xi is masked. Because Xi was intervened upon (in the manner of a Pearl do-operation, soft or hard), the values one gets for that variable should be taken as givens, not as contributors to the total log-likelihood of the sample. As well, no gradient should be allowed to propagate into the variable’s learned functional parametrization θi, because it was not actually responsible for the outcome. Intervention retracted: After Phase 2, the intervention is retracted, per our modelling assumptions. 
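The intervention-prediction step of Phase 2 just described can be sketched as follows, assuming the per-variable conditionals are a list learner of M networks taking one-hot inputs together with a parent-mask row, and that interventional batches yield one-hot encodings with integer category labels. These names and interfaces are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def predict_intervention_target(learner, sigma_gamma, interventional_batches, n_configs=10):
    """Heuristic of Phase 2: the variable whose observationally-trained conditional
    predicts the interventional data worst is taken to be the intervention target."""
    M = len(learner)
    nll = torch.zeros(M)
    with torch.no_grad():                                  # no functional-parameter update here
        for one_hot_x, labels in interventional_batches:   # labels: (batch, M) category ids
            for _ in range(n_configs):
                C = torch.bernoulli(sigma_gamma)           # sample a graph hypothesis from sigma(gamma)
                for i, f_i in enumerate(learner):
                    logits = f_i(one_hot_x, C[i])
                    nll[i] += F.cross_entropy(logits, labels[:, i])
    return int(nll.argmax())                               # most poorly predicted variable

The predicted target's log-likelihood contribution is then masked during scoring, exactly as when the true target is known.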
4.3.4 PHASE 3: CREDIT ASSIGNMENT TO STRUCTURAL PARAMETERS During Phase 3, the scores of the interventional data batches over various graph configurations are aggregated into a gradient for the structural parameters γ. Because a discrete Bernoulli random sampling process was used to sample graph configurations under which the log-likelihoods were computed, we require a gradient estimator to propagate gradient through to the γ structural parameters. Several alternatives exist, but we adopt for this purpose the REINFORCE-like gradient estimator gij proposed by Bengio et al. (2019): gij = ∑ k(σ(γij)− c (k) ij )LC(k),i (X)∑ k LC(k),i (X) , ∀i, j ∈ {0, . . . ,M−1} (2) where the (k) superscript indicates the values obtained for the k-th draw of C under the current edge beliefs parametrized by γ. Therefore, L(k)C,i(X) can be read as the log-likelihood of variable Xi in the data sample X under the k’th configuration, C(k), drawn from our edge beliefs. Using the estimated gradient, we then update γ with SGD, and return to Phase 1 of the continuous optimization process. The gradient estimator gij minimizes an implicit empirical risk objective with respect to γij . When the functional and structural parameters θ and γ are “sufficiently close” to their minima, the estimator gij empirically converges quickly towards that minimum γ∗ as shown in Figure 16 of Appendix A.13. Acyclic Constraint: We include a regularization term JDAG(γ) that penalizes length-2 cycles in the learned adjacency matrix σ(γ), with a tunable strength λDAG. The regularization term is JDAG(γ) =∑ i 6=j cosh(σ(γij)σ(γji)), ∀i, j ∈ {0, . . . ,M−1} and is derived from Zheng et al. (2018). The details of the derivation are in the Appendix. We explore several different values of λDAG and their effects in our experimental setup. Suppression of longer-length cycles was not found to be worthwhile for the increased computational expense. 5 EXPERIMENTAL SETUP AND RESULTS We first evaluate the proposed method on a synthetic dataset where we have control over the number of variables and causal edges in the ground-truth SCM. This allows us to analyze the performance of the proposed method under various conditions. We then evaluate the proposed method on real-world datasets from the BnLearn dataset repository. We also consider the two variations of §4.2: Recovering only part of the graph (when the rest is known), and exploiting knowledge of the intervention target. The summary of our findings is: 1) We show strong results for graph recovery for all synthetic graphs in comparisons with other baselines, measured by Hamming distance. 2) The proposed method achieves high accuracy on partial graph recovery for large, real-world graphs. 3) The proposed method’s intervention target prediction heuristic closes the gap between the known- and unknowntarget intervention scenarios. 4) The proposed method generalizes well to unseen interventions. 5) The proposed method’s time-to-solution scaling appears to be driven by the number of edges in the groundtruth graph moreso than the number of variables. 5.1 MODEL DESCRIPTION Learner model. Without loss of generality, we let θi = {W0i,B0i,W1i,B1i} define a stack of M one-hidden-layer MLPs, one for each random variable Xi. A more appropriate model, such as a CNN, can be chosen using domainspecific knowledge; the primary advantage of using MLPs is that the hypothesized DAG configurations cij can be readily used to mask the inputs of MLP i, as shown in Figure 3. 
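As a complement to the learner-model description above, the acyclicity and sparsity penalties introduced in §4.3.4 can be sketched in a few lines. The default weights are those reported in §5.8; the function name and the exclusion of the diagonal are illustrative assumptions.

import torch

def regularizers(gamma, lambda_dag=0.5, lambda_sparse=0.1):
    """Acyclicity (length-2 cycle) and sparsity penalties on the structural parameters."""
    p = torch.sigmoid(gamma)                          # soft adjacency matrix sigma(gamma)
    off_diag = 1.0 - torch.eye(gamma.shape[0])
    # J_DAG = sum_{i != j} cosh(sigma(gamma_ij) * sigma(gamma_ji))
    j_dag = (torch.cosh(p * p.t()) * off_diag).sum()
    # L1 penalty on the off-diagonal edge beliefs encourages sparsity
    j_sparse = (p * off_diag).sum()
    return lambda_dag * j_dag + lambda_sparse * j_sparse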
To force the structural equation fi corresponding to Xi to rely exclusively on its direct ancestor set pa(i, C) under hypothesis adjacency matrix C (See Eqn. 1), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . An example of the multi-MLP architecture with M=4 categorical variables of N=3 categories is shown in Figure 3. For more details, refer to Appendix A.4. Ground-truth model. Ground-truth SCM models are parametrized either as CPTs with parameters from BnLearn (in the case of real-world graphs), or as a second stack of MLPs similar to the learner model, with randomly-initialized functional parameters θGT and the desired adjacency matrix γGT. Interventions. In all experiments, at most one (soft) intervention is concurrently performed. To simulate a soft intervention on variable Xi, we reinitialize its ground-truth conditional distribution’s MLP parameters or CPT table randomly, while leaving the other variables untouched. For more details about the interventions, please refer to Appendix A.2. 5.2 SYNTHETIC DATASETS EXPERIMENTS We first evaluate the model’s performance on several randomlyinitialized SCMs with specific, representative graph structures. Since the number of possible DAGs grows super-exponentially with the number of variables, for M=4 up to 13 a selection of representative and edge-case DAGs are chosen. chainM and fullM (M=3-13) are the minimallyand maximally-connected M -variable DAGs, while treeM and jungleM are tree-like intermediate graphs. colliderM is the (M−1)→ 1 collider graph. The details of the setup is in Appendix A.6. Results. The model can recover most synthetic DAGs with high accuracy, as measured by Structural Hamming Distance (SHD) between learned and ground-truth DAGs. Table 1 shows our proposed method outperforming all other baseline methods, and learns all graphs perfectly for 3 to 13 variables (excepting full). For DAGs ranging from 3 to 8 variables, the AUROCs all eventually reach 1.0 (indicating perfect classification into edge/non-edge; Refer to Figure 4). For both large (M > 10) and dense DAGs (e.g. full13) the model begins encountering difficulties, as shown in Table 1 and Appendix §A.6.1. Small graphs (M < 10) are less sensitive than larger ones to our hyperparameters, notably the sparsity and acyclic regularization (§4.3.4) terms. In §A.5, we perform an analysis of these hyperparameters. 5.3 REAL-WORLD DATASETS: BNLEARN The Bayesian Network Repository is a collection of commonly-used causal Bayesian networks from the literature, suitable for Bayesian and causal learning benchmarks. We evaluate the proposed method on the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010), Asia (Lauritzen & Spiegelhalter, 1988) and Sachs (Sachs et al., 2005) datasets (M =5, 5, 8 and 11-variables respectively, maximum in-degree 3) in the BnLearn dataset repository. Results. As shown in Table 1, the proposed method perfectly recovers the DAG of Asia, while making a small number of errors (SHD=6) for Sachs (11-variables). It thus significantly outperforms all other baselines models. Figures 8 & 9 visualize what the model has learned at several stages of learning. Results for Cancer and Asia can be found in the appendices, Figure 17 and 18. 
5.4 COMPARISONS WITH OTHER METHODS As shown in Table 1, we compared the proposed SDI method to ICP ((Peters et al., 2016)), non-linear ICP ((Heinze-Deml et al., 2018b)), and (Eaton & Murphy, 2007b; Zheng et al., 2018; Yu et al., 2019) on Asia (Lauritzen & Spiegelhalter, 1988), Sachs (Sachs et al., 2005) and representative synthetic graphs. Eaton & Murphy (2007b) handles uncertain interventions and Peters et al. (2016), Heinze-Deml et al. (2018b) handles unknown interventions. However, neither attempts to predict the intervention. As shown in Table 1, we significantly outperform ICP, non-linear ICP, and the methods in (Yu et al., 2019) and (Zheng et al., 2018). Furthermore, Eaton & Murphy (2007b) runs out of memory for graphs larger than M = 10 because modelling of uncertain interventions is done using “shadow” random variables (as suggested by the authors), and thus recovering the DAG internally requires solving a d = 2M -variable problem. Their method’s extremely poor time- and space-scaling of O(d2d) makes it unusable beyond d > 20. For SDIs, we threshold our edge beliefs at σ(γ) = 0.5 to derive a graph, but the continued decrease of the cross-entropy loss (Figure 4) hints at SDI’s convergence onto the correct causal model. Please refer to Appendix §A.8 for full details and results. 5.5 GENERALIZATION TO PREVIOUSLY UNSEEN INTERVENTIONS It is often argued that machine learning approaches based purely on capturing joint distributions do not necessarily yield models that generalize to unseen experiments, since they do not explicitly model changes through interventions. By way of contrast, causal models use the concept of interventions to explicitly model changing environments and thus hold the promise of robustness under distributional shifts (Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2017). To test the robustness of causal modelling to previously unseen interventions (new values for an intervened variable), we evaluate a well-trained causal model against a variant, non-causal model trained with cij = 1, i 6= j. An intervention is performed on the ground-truth SCM, fresh interventional data is drawn from it, and the models, with knowledge of the intervention target, are asked to predict the other variables given their parents. The average log-likelihoods of the data under both models are computed and contrasted. The intervention variable’s contribution to the loglikelihood is masked. For all 3-variable graphs (chain3, fork3, collider3, confounder3), the causal model attributes higher log-likelihood to the intervention distribution’s samples than the non-causal variant, thereby demonstrating causal models’ superior generalization ability in transfer tasks. Table 2 collects these results. 5.6 VARIANT: PREDICTING INTERVENTIONS In Phase 2 (§4.3.3), we use a simple heuristic to predict the intervention target variable. Experiments show that this heuristic functions well in practice, yielding correct predictions far more often than by chance alone (Table 3). Guessing the intervention variable randomly, or not guessing it at all, leads to a significant drop in the model performance, even for 3-variable graphs (Fig. 11 Left). Training SDI with intervention prediction closely tracks training with leaked knowledge of the ground-truth intervention on larger, 7-variable graphs (Fig. 11 Right). 5.7 VARIANT: PARTIAL GRAPH RECOVERY Instead of learning causal structures de novo, we may have partial information about the ground-truth SCM and may only need to fill in missing information (§4.2). 
An example is protein structure discovery in biology, where some causal relationships have been definitively established and others remain open hypotheses. This is an easier task compared to full graph recovery, since the model only has to search for the missing edges.

Table 4: Partial graph recovery on Alarm (Beinlich et al., 1989) and Barley (Kristensen & Rasmussen, 2002). The model is asked to predict 50 edges in Barley and 40 edges in Alarm. Accuracy is measured in Structural Hamming Distance (SHD). SDI achieved over 90% accuracy on both graphs.

Graph                 Alarm   Barley
Number of variables   37      48
Total edges           46      84
Edges to recover      40      50
Recovered edges       37      45
Errors (in SHD)       3       5

We evaluate the proposed method on Barley (Kristensen & Rasmussen, 2002) (M = 48) and Alarm (Beinlich et al., 1989) (M = 37) from the BnLearn repository. The model is asked to predict 50 edges from Barley and 40 edges from Alarm. The model reached ≥ 90% accuracy on both datasets, as shown in Table 4.

5.8 ABLATION AND ANALYSIS

As shown in Figure 12, larger graphs (such as M > 6) and denser graphs (such as full8) are progressively more difficult to learn. For denser graphs, the learned models have higher sample complexity, higher variance and slightly worse results. Refer to Appendix §A.9 for complete results on all graphs.

Hyperparameters. Hyperparameters were kept identical across all experiments unless otherwise stated. We study the effect of the DAG and sparsity penalties in the following paragraph. For more details, please refer to Appendix §A.5.

Importance of regularization. Valid configurations C for a causal model are expected to be (a) sparse and (b) acyclic. To promote such solutions, we use DAG and sparsity regularization with tunable hyperparameters. We set the DAG penalty to 0.5 and the sparsity penalty to 0.1. We run ablation studies on different values of the regularizers and study their effect. We find that smaller graphs are less sensitive to the regularizer values than larger graphs. For details, refer to Appendix §A.12.

Importance of dropout. To train the functional parameters on the observational distribution, adjacency matrices must be sampled. We "drop out" each edge (with a probability of σ(γ)) in our experiments during functional-parameter training of the conditional distributions of the SCM. Please refer to Appendix §A.14 for a more detailed analysis.

6 CONCLUSION

In this work, we introduced an experimentally successful method (SDI) for causal structure discovery using continuous optimization, combining information from both observational and interventional data. We show in experiments that it can recover the true causal structure, that it generalizes well to unseen interventions, that it compares very well against state-of-the-art causal discovery methods on real-world datasets, and that it scales even better on problems where only part of the graph is known.

Appendix Table of Contents

A Annexes
A.1 Training Algorithm
A.2 Preliminaries
A.3 Experimental setup
A.4 Model setup
A.5 Hyperparameters
A.6 Synthetic data
A.7 BnLearn data repository
A.8 Comparisons to other methods
A.9 Sparsity of Ground-Truth Graph
A.10 Predicting interventions
A.11 Sample complexity
A.12 Effect of regularization
A.13 Near-Optimum Performance of Gradient Estimator
A.14 Importance of dropout

A ANNEXES

A.1 TRAINING ALGORITHM

Algorithm 1 shows the pseudocode of the method described in §4. Typical values for the loop trip counts are found in §A.11.

Algorithm 1 Training Algorithm
 1: procedure TRAINING(SCM ground-truth entailed distribution D, with M nodes and N categories)
 2:   Let i be an index from 0 to M − 1
 3:   for I iterations, or until convergence, do
 4:     if I % reinitialization_period == 0 then
 5:       D ← reinitialize(D)
 6:     for F functional parameter training steps do                  ▷ Phase 1
 7:       X ∼ D
 8:       C ∼ Ber(σ(γ))
 9:       L = − log P(X | C; θ)
10:       θ_{t+1} ← Adam(θ_t, ∇θ L)
11:     for Q interventions do                                        ▷ Phase 2
12:       I_N ← randint(0, M − 1)                                     ▷ Uniform selection of target
13:       D_int := D with intervention on node I_N                    ▷ Apply intervention
14:       if predicting intervention then                             ▷ Phase 2 Prediction
15:         L_i ← 0 ∀i
16:         for N_P prediction steps do
17:           X ∼ D_int
18:           for C_P configurations do
19:             C ∼ Ber(σ(γ))
20:             L_i ← L_i − log P_i(X | C_i; θ_slow) ∀i
21:         I_N ← argmax_i(L_i)
22:       gammagrads, logregrets = [], []                             ▷ Phase 2 Scoring
23:       for N_S scoring steps do
24:         X ∼ D_int
25:         gammagrad, logregret = 0, 0
26:         for C_S configurations do
27:           C ∼ Ber(σ(γ))
28:           L_i = − log P_i(X | C_i; θ_slow) ∀i
29:           gammagrad += σ(γ) − C                                   ▷ collect σ(γ) − C for Equation 2
30:           logregret += Σ_{i ≠ I_N} L_i                            ▷ collect L_{C^(k),i}(X) for Equation 2
31:         gammagrads.append(gammagrad)
32:         logregrets.append(logregret)
33:       g_ij = [ Σ_k (σ(γ_ij) − c^(k)_ij) L_{C^(k),i}(X) ] / [ Σ_k L_{C^(k),i}(X) ]   ▷ Phase 3: Gradient Estimator, Equation 2
34:       g ← g + ∇γ (λ_sparse L_sparse(γ) + λ_DAG L_DAG(γ))          ▷ Regularizers
35:       γ_{t+1} ← Adam(γ_t, g)

A.2 PRELIMINARIES

Interventions. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph, interventional data is needed (Eberhardt et al., 2012). Several types of common interventions may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground-truth causal model. Hard/perfect: the value of a single variable or of several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions for several reasons: First, they include hard interventions as a limiting case and hence are more general. Second, in many real-world scenarios, it is more difficult to perform a hard intervention than a soft one. We also deal with a special case of uncertain interventions, where the variable selected for intervention is random and unknown. We call these unidentified or unknown interventions.

Intervention setup. For our experiments, the ground-truth models of the synthetic datasets are modeled by neural networks as described in Section A.6. Each neural network models the relationship between the causal parents and a variable. We perform our intervention by first randomly selecting which variable to intervene on, then soft-intervening on it. The selected variable is sampled from a uniform distribution. The soft intervention is a reinitialization of its neural network's parameters.

Causal sufficiency. The inability to distinguish which causal graph, within a Markov equivalence class, is the correct one in the purely-observational setting is called the identifiability problem. In our setting, all variables are observed (there are no latent confounders) and all interventions are random and independent. Hence, within our setting, if the interventions are known, then the true causal graph is always identifiable in principle (Eberhardt et al., 2012; Heinze-Deml et al., 2018a). We also consider situations where a single variable is randomly selected and intervened upon with a soft or imprecise intervention, and its identity is unknown and must be inferred. In this case, there is no theoretical guarantee that the causal graph is identifiable. However, existing work (Peters et al., 2016) handles this scenario, and the proposed method is also shown empirically to work.

Faithfulness. It is possible for causally-related variables to be probabilistically independent purely by happenstance, such as when causal effects along multiple paths cancel out. This is called unfaithfulness. We assume that faithfulness holds, since the γ gradient estimate is extracted from shifts in probability distributions. However, because of the "soft" nature of our interventions and their infinite variety, it would be exceedingly unlikely for cancellation-related unfaithfulness to persist throughout the causal-learning procedure.

A.3 EXPERIMENTAL SETUP

For all datasets, the weight parameters of the learned model are initialized randomly. In order not to bias the structural parameters, all σ(γ) are initialized to 0.5 at the beginning of training. Details of the hyperparameters of the learner model are described in Section A.5. The experimental setup for the ground-truth model for the synthetic data can be found in Section A.6, and the details for the real-world data are described in Section A.7.

A.4 MODEL SETUP

As discussed in Section 4, we model the M variables in the graph using M independent MLPs, each of which possesses an input layer of M × N neurons (for M one-hot vectors of length N each), a single hidden layer chosen arbitrarily to have max(4M, 4N) neurons with a LeakyReLU activation of slope 0.1, and a linear output layer of N neurons representing the unnormalized log-probabilities of each category (a softmax then recovers the conditional probabilities from these logits). To force fi to rely exclusively on the direct ancestor set pa(i, C) under adjacency matrix C (see Eqn. 1), the one-hot input vector Xj for variable Xi's MLP is masked by the Boolean element cij. The functional parameters of the MLP are the set θ = {W0_{ihjn}, B0_{ih}, W1_{inh}, B1_{in}}. An example of the multi-MLP architecture with M=3 categorical variables of N=2 categories is shown in Figure 3.

A.5 HYPERPARAMETERS

Learner model. All experiments on the synthetic graphs of size 3-8 use the same hyperparameters. Both the functional and structural parameters are optimized using the Adam optimizer (Kingma & Ba, 2014). We use a learning rate of 5e−2 with an alpha of 0.9 for the functional parameters, and a learning rate of 5e−3 with an alpha of 0.1 for the structural parameters.
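As a concrete illustration of the optimizer setup just described (together with the γ initialization of A.3 and the clamping of A.13), a short, hedged sketch follows. It reuses the `learners` list from the earlier sketch; since the paper does not state how its "alpha" maps onto Adam's betas, that detail is left at PyTorch defaults.

```python
# Sketch of the learner's optimizer setup, assuming PyTorch. gamma starts at 0 so
# that sigma(gamma) = 0.5 (A.3), and is clamped to +/-5 as in A.13. The mapping of
# the paper's "alpha" onto Adam's betas is not specified, so defaults are kept.
import torch

M = 4
gamma = torch.zeros(M, M, requires_grad=True)                  # unbiased edge beliefs
opt_theta = torch.optim.Adam(learners.parameters(), lr=5e-2)   # functional parameters
opt_gamma = torch.optim.Adam([gamma], lr=5e-3)                 # structural parameters

def clamp_gamma():
    with torch.no_grad():
        gamma.clamp_(-5.0, 5.0)   # keeps sigma(gamma) within about [0.0067, 0.9933]
```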
We perform 5 runs of each experiment with random seeds 1-5, and error bars are plotted for various graphs of size 3 to 8 in Figure 4. We use a batch size of 256. The L1 norm regularizer is set to 0.1 and the DAG regularizer is set to 0.5 for all experiments. For each γ update step, we sample 25 structural configurations from the current γ. In all experiments, we use 100 batches from the interventional distribution to predict the intervened node.

A.6 SYNTHETIC DATA

Synthetic datasets. The synthetic datasets in the paper are modeled by neural networks. All neural networks are 2-layer feedforward neural networks (MLPs) with Leaky ReLU activations between layers. The parameters of the neural networks are initialized orthogonally within the range of (−2.5, 2.5). This range was selected such that they output a non-trivial distribution. The biases are initialized uniformly between (−1.1, 1.1). An SCM with n variables is modeled by n feedforward neural networks (MLPs), as described in §5.1. We assume an acyclic causal graph so that we may easily sample from it. Hence, given any pair of random variables A and B, either A −→ B, B −→ A, or A and B are independent. The MLP representing the ground-truth SCM has its weights θ initialized using orthogonal initialization with gain 2.5 and its biases initialized using a uniform initialization between −1.1 and 1.1, which was empirically found to yield "interesting" yet learnable random SCMs. We study a variety of SCMs with different ground-truth edge structures γ. Our selection of synthetic graphs explores various extremes in the space of DAGs, stress-testing SDI. The chain graphs are the sparsest connected graphs possible, and are relatively easy to learn. The bidiag graphs are extensions of chain where there are 2-hops as well as single hops between nodes, doubling the number of edges and creating a meshed chain of forks and colliders. The jungle graphs are binary-tree-like graphs, but with each node connected directly to its grandparent in the tree as well. Half the nodes in a jungle graph are leaves, and the out-degree is up to 6. The collider graphs deliberately collide M − 1 independent ancestors into the last node; they stress the maximum in-degree. Lastly, the full graphs are the maximally dense DAGs: all nodes are direct parents of all nodes below them in the topological order, and the maximum in- and out-degree are both M − 1. These graphs are depicted in Figure 6.

A.6.1 SYNTHETIC DATA RESULTS

The model can correctly recover all synthetic graphs with 10 variables or fewer, as shown in Figure 10 and Table 1. For graphs larger than 10 variables, the model found it more challenging to recover the denser graphs (e.g. fullM), as shown in Table 1. Plots of the training curves showing the average cross-entropy (CE) and Area Under the ROC Curve (AUROC) of the learned graph's edge probabilities against the ground-truth graph, for synthetic SCMs with 3-13 variables, are available in Figure 10.

A.7 BNLEARN DATA REPOSITORY

The repository contains many datasets of various sizes and structures modeling different variables. We evaluate the proposed method on 3 of the datasets in the repository, namely the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010) and Asia (Lauritzen & Spiegelhalter, 1988) datasets. The ground-truth model structures for the Cancer (Korb & Nicholson, 2010) and Earthquake (Korb & Nicholson, 2010) datasets are shown in Figure 7.
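To make the graph families of A.6 above concrete, here is a small, hedged sketch of adjacency-matrix generators for a few of them. The orientation convention (adj[i, j] = 1 meaning "Xj is a direct parent of Xi", with 0..M−1 a topological order) is our assumption, kept consistent with the earlier snippets.

```python
# Sketch of generators for some of the synthetic graph families of A.6.
# Convention (an assumption): adj[i, j] = 1 iff X_j is a direct parent of X_i.
import numpy as np

def chain(M):      # X0 -> X1 -> ... -> X_{M-1}: sparsest connected DAG
    adj = np.zeros((M, M), dtype=int)
    for i in range(1, M):
        adj[i, i - 1] = 1
    return adj

def bidiag(M):     # chain plus 2-hop edges X_{i-2} -> X_i
    adj = chain(M)
    for i in range(2, M):
        adj[i, i - 2] = 1
    return adj

def collider(M):   # M-1 independent ancestors all pointing at the last node
    adj = np.zeros((M, M), dtype=int)
    adj[M - 1, :M - 1] = 1
    return adj

def full(M):       # maximally dense DAG: every earlier node parents every later one
    return np.tril(np.ones((M, M), dtype=int), k=-1)

assert full(8).sum() == 8 * 7 // 2   # M(M-1)/2 edges, max in-/out-degree M-1
```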
Note that even though the graph structures of the two datasets (Cancer and Earthquake) appear to be the same, their conditional probability tables (CPTs) are very different and hence result in a different structural causal model (SCM) for each.

A.8 COMPARISONS TO OTHER METHODS

As described in Section 5.4, we compare to 5 other methods. The full comparison between SDI and the other methods on various graphs can be found in Table 1. One of these methods, DAG-GNN (Yu et al., 2019), outputs 3 graphs based on different criteria: best mean squared error (MSE), best negative log-likelihood (NLL) and best evidence lower bound (ELBO). We report the performance of all outputs of DAG-GNN (Yu et al., 2019) in Table 6, and the best one is selected for Table 1.

A.9 SPARSITY OF GROUND-TRUTH GRAPH

We evaluated the performance of SDI on graphs of various sizes and sparsities to better understand the behaviour of the model. We evaluated the proposed model on 4 representative types of graphs in increasing order of density: the chain, jungle, bidiag and full graphs. As shown in the results in Figure 12, for graphs of size 5 or smaller, there is almost no difference in the final results in terms of variance and sample complexity. However, as the graphs get larger (than 6), the denser graphs (full graphs) become progressively more difficult to learn compared to the sparser graphs (chain, jungle and bidiag). The models learned for denser graphs have higher sample complexity, higher variance and slightly worse results.

A.10 PREDICTING INTERVENTIONS

In Phase 2, we score graph configurations based on how well they fit the interventional data. We find that it is necessary to avoid disturbing the learned parameters of the intervened variable, and to ignore its contribution to the total negative log-likelihood of the sample. Intuitively, this is because, having been intervened upon, that variable should be taken as a given. It should especially not be interpreted as a poorly-learned variable requiring a tuning of its functional parameters, because those functional parameters were not responsible for the value of that variable; the extrinsic intervention was. Since an intervened variable is likely to be unusually poorly predicted, we heuristically determine that the most poorly predicted variable is the intervention variable. We then zero out its contribution to the log-likelihood of the sample and block gradients into its functional parameters. Figure 11 illustrates the necessity of this process. When using the prediction heuristic, the training curve closely tracks training with ground-truth knowledge of the identity of the intervention. If no prediction is made, or a random prediction is made, training proceeds much more slowly, or fails entirely.

A.11 SAMPLE COMPLEXITY

Our method relies heavily on sampling of configurations and data in Phases 1 and 2. We present here the breakdown of the sample complexity. Let
• I be the number of iterations of the method (typical: 500-2000),
• B the number of samples per batch (typical: 256),
• F the number of functional parameter training iterations in Phase 1 (typical: 10000),
• Q the number of interventions performed in Phase 2 (typical: 100),
• NP the number of data batches for prediction (typical: 100),
• CP the number of graph configurations drawn per prediction data batch (typical: 10),
• NS the number of data batches for scoring (typical: 10),
• CS the number of graph configurations drawn per scoring data batch
(typical: 20-30).

Then the total number of interventions performed, and of configurations and samples drawn, over an entire run is:

Interventions = I · Q = number of γ updates    (3)
Samples = I · (F + Q · (NP + NS)) · B    (4)
Configurations = I · (F + Q · (CP · NP + CS · NS))    (5)

where in Equations (4) and (5) the F term accounts for Phase 1 and the Q(·) term for Phase 2. Because of the multiplicative effect of these factors, the number of data samples required can quickly spiral out of control. For typical values, as many as 500 × 10000 × 256 = 1.28e9 observational and 500 × 100 × (100 + 10) × 256 = 1.408e9 interventional samples are required. To alleviate this problem slightly, we limit the number of samples generated for each intervention; this limit is usually 500-2000.

A.12 EFFECT OF REGULARIZATION

Importance of sparsity regularizer. We use an L1 regularizer on the structural parameters γ to encourage a sparse representation of edges in the causal graph. In order to better understand its effect, we conducted ablation studies on the L1 regularizer. The regularizer appears to have a small effect on the rate of convergence: the model converges faster with the regularizer, as shown in Figure 13. However, this does not seem to affect the final value the model converges to, as shown in Table 7.

Importance of DAG regularizer. We use an acyclicity regularizer to discourage length-2 cycles in the learned model. We found that for small models (≤ 5 variables), the acyclicity regularizer helps with faster convergence, without significantly improving the final cross-entropy. This is illustrated for the 3-variable graphs in Figure 14. However, for graphs larger than 5 variables, the acyclicity regularizer starts playing an important role in encouraging the model to learn the correct structure. This is shown in the ablation study in Table 7.

A.13 NEAR-OPTIMUM PERFORMANCE OF GRADIENT ESTIMATOR

The gradient estimator gij that we use to minimize the empirical risk with respect to the structural parameters γ, defined in Eq. 2, is adapted from Bengio et al. (2019). We verify that the estimator samples the correct gradient with an experiment that tests convergence near the optimum. To do this, we pre-initialize the structural and functional parameters near the global minimum, and verify that γ converges. Specifically, the ground-truth functional parameters θ are copied and perturbed by a small Gaussian noise, while the ground-truth structural parameters γ are copied, but the confidences in an edge or non-edge are set to 88% and 12% rather than 100% and 0%. The experiment is then expected to quickly converge to the global minimum. As shown in Figure 16, the gradient estimator correctly enables stochastic gradient descent towards the minimum, for the chain and jungle graphs of size 15, 20 and 25. The average cross-entropy rapidly approaches its floor of 0.01, a consequence of our clamping of all γij to the range ±5 (equivalently, clamping σ(γij) to the range [0.0067, 0.9933]).

A.14 IMPORTANCE OF DROPOUT

To train the functional parameters on an observational distribution, one needs to sample adjacency matrices. One may be tempted to make these the "complete directed graph" (all ones except for a zero diagonal), to give the MLPs maximum freedom to learn any potential causal relations themselves. We demonstrate that functional parameter training cannot be carried out this way, and that it is necessary to "drop out" each edge (with probability given by the current σ(γ) in our experiments) during pretraining of the conditional distributions of the SCM.
We attempt to recover the previously-recoverable graphs chain3, fork3 and confounder3 without dropout, but fail to do so, as shown in Figure 15. Figure 17: Cross-entropy for edge probability between learned and ground-truth SCM for Cancer at varying temperatures. Figure 18: Cross-entropy for edge probability between learned and ground-truth SCM. Left: The Earthquake dataset with 6 variables. Right: The Asia dataset with 8 variables
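As a quick check on the sample-complexity accounting of A.11, the short sketch below recomputes the totals of Equations (3)-(5) from the "typical" values listed there; the value used for CS (25) is simply a point inside the stated 20-30 range.

```python
# Back-of-the-envelope totals from Equations (3)-(5) of A.11, using the typical
# values quoted there (C_S = 25 is an arbitrary point in the stated 20-30 range).
I, B, F, Q = 500, 256, 10_000, 100
N_P, C_P, N_S, C_S = 100, 10, 10, 25

interventions  = I * Q                                    # Eq. (3): number of gamma updates
samples        = I * (F + Q * (N_P + N_S)) * B            # Eq. (4)
configurations = I * (F + Q * (C_P * N_P + C_S * N_S))    # Eq. (5)

observational  = I * F * B                  # 500 * 10000 * 256    = 1.28e9, as in the text
interventional = I * Q * (N_P + N_S) * B    # 500 * 100 * 110 * 256 = 1.408e9, as in the text
print(interventions, samples, configurations, observational, interventional)
```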
1. What is the focus and contribution of the paper on causal discovery? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of its empirical performance and lack of theoretical guarantees? 3. Do you have any concerns regarding the heuristic used for predicting an unknown intervention target? 4. How clear is the description of the proposed method, and how can it be improved? 5. Are there any comparisons or discussions of related works, such as the paper mentioned in the review (https://arxiv.org/pdf/2007.01754.pdf)?
Review
Review This paper aims to extend the continuous optimization approach to causal discovery to handle interventional data as well as observational data. It describes a method for learning the causal structure over a set of categorical variables and reports strong empirical performance. However, no theoretical guarantee or analysis is provided, which is a significant weakness in my view. It also makes no comment on or comparison to a paper that has essentially the same goal, https://arxiv.org/pdf/2007.01754.pdf. The latter paper seems to me more principled and convincing. The heuristic for predicting an unknown intervention target looks very dubious to me. I would appreciate some explanation of why the target should be expected to have the biggest drop of log-likelihood. The description of the proposed method could be clearer; for example, it helps to provide an explicit formulation of the SGD used in the method.
ICLR
Title Dependency Structure Discovery from Interventions Abstract Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is applicable even in the challenging and realistic case that the identity of the intervened upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository. 1 INTRODUCTION Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal structure learning. Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks (a Markov equivalence class), but only one is the most robust to distributional shifts: The one with the correct graph. A more powerful tool than BNs is thus needed to model causal relationships. Structural Causal Models (SCMs) are that tool. An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them (Peters et al., 2017, §6.2). Each assignment is a function of only the direct causes of a variable, plus an independent noise source. An SCM entails precisely one (observational) data distribution. Interventions on an SCM’s assignments, such as setting a random variable to a fixed value (a hard intervention), entail new interventional data distributions (Peters et al., 2017, §6.3). SCMs can be used to answer higher-order questions of cause-and-effect, up the ladder of causation (Pearl & Mackenzie, 2018). Causal structure learning using SCMs has been attempted in several disciplines including biology (Sachs et al., 2005; Hill et al., 2016), weather forecasting (Abramson et al., 1996) and medicine (Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010). Causal structure is most frequently learned from data drawn from observational distributions. Structure learning methods generally cannot do more than identify the causal graph up to a Markov equivalence class (Spirtes et al., 2000). In order to fully identify the true causal graph, a method must either make restrictive assumptions about the underlying data-generating process, such as linear but non-Gaussian data (Shimizu et al., 2006), or must access enough data from outside the observational distribution (i.e., from interventions). 
Under certain assumptions about the number, diversity, and nature of the interventions, the true underlying causal graph is always identifiable, given that the method knows the intervention performed (Heckerman et al., 1995). In much of the prior work on causal model induction it is assumed that there is an experimenter and this experimenter performs interventions. However, in the real world, interventions can also be performed by other agents, which could lead to unknown interventions (interventions with unknown target variables). A few works have attempted to learn structures from unknown-intervention data (Eaton & Murphy, 2007a; Squires et al., 2020; Huang et al., 2020). A notable such work (Mooij et al., 2016) has been extended in (Kocaoglu et al., 2019; Jaber et al., 2020). Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case.

Another common setting is when the graph structure is partially provided, but must be completed. An example is protein structure learning in biology, where we may have definitive knowledge of some causal edges in the protein-protein interactome, but the remaining causal edges must be discovered. We will call this setting "partial graph completion". This is an easier task compared to learning the entire graph, since it limits the number of edges that have to be learned.

Recently, a flurry of work on structure learning using continuous optimization methods has appeared (Zheng et al., 2018; Yu et al., 2019). These methods operate on observational data and are competitive with other methods. Because of the theoretical limitations on identification from purely observational data cited above, it would be interesting to extend these methods to interventional data. However, it is not straightforward to apply continuous optimization methods to structure learning from interventional data. Our key contributions are to answer the following questions experimentally:
1. Can the proposed model recover the true causal structure? Yes; see Figure 4.
2. How does the proposed model compare against state-of-the-art causal methods on real-world datasets? Favourably; see §5.4 and Table 1.
3. Does the proposed model generalize well to unseen interventions? Yes; see §5.5.
4. How does the proposed model perform on partial graph recovery? It scales to ~50 variables, while the other baselines cannot; see §5.7.

2 PRELIMINARIES

Causal modeling. A Structural Causal Model (SCM) (Peters et al., 2017) over a finite number M of random variables Xi is a set of structural assignments

Xi := fi(Xpa(i,C), Ni), ∀i ∈ {0, . . . , M − 1}    (1)

where Xpa(i,C) denotes the direct causes (parents) of Xi under the causal graph C and Ni is an independent noise variable.

Identifiability. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph structure, interventional data is needed (Eberhardt et al., 2012).

Interventions. There are several types of common interventions which may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground-truth model. Hard/perfect: the value of a single variable or of several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly.
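Since Eq. (1), and the hard-intervention case above, rely on ancestral sampling, a small hedged sketch of that procedure for categorical variables follows; the `conditionals[i](parents)` callable returning a length-N probability vector is an assumption made purely for illustration. The discussion of intervention types continues below.

```python
# Hedged sketch of ancestral sampling from a categorical SCM of the form of Eq. (1):
# parents are sampled before children, and each conditional reads only its parents.
import numpy as np

def ancestral_sample(adj, conditionals, topo_order, rng=None):
    """adj[i, j] = 1 iff X_j is a direct parent of X_i (our convention)."""
    rng = rng or np.random.default_rng()
    x = np.zeros(adj.shape[0], dtype=int)
    for i in topo_order:
        parents = {j: x[j] for j in np.flatnonzero(adj[i])}
        probs = conditionals[i](parents)          # P(X_i = k | parents), length N
        x[i] = rng.choice(len(probs), p=probs)
    return x
```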
Here we make use of soft interventions because they include hard interventions as a limiting case and hence are more general.

Structure discovery using continuous optimization. Structure discovery is a super-exponential search problem that searches through all possible directed acyclic graphs (DAGs). Previous continuous-optimization structure learning works (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates "DAG-ness" as an additional penalty to be optimized. If there are M such variables, the strategy of considering all the possible structural graphs as separate hypotheses is not feasible because it would require maintaining O(2^(M^2)) models of the data.

3 RELATED WORK

The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem (Pearl, 1995; 2009; Spirtes et al., 2000). Different approaches have been studied: score-based, constraint-based, asymmetry-based and continuous-optimization methods. Score-based methods search through the space of all possible directed acyclic graphs (DAGs) representing the causal structure based on some form of scoring function for network structures (Heckerman et al., 1995; Chickering, 2002; Tsamardinos et al., 2006; Hauser & Bühlmann, 2012; Goudet et al., 2017; Cooper & Yoo, 1999; Zhu & Chen, 2019). Constraint-based methods (Spirtes et al., 2000; Sun et al., 2007; Zhang et al., 2012; Monti et al., 2019; Zhu & Chen, 2019) infer the DAG by analyzing conditional independences in the data. Eaton & Murphy (2007c) use dynamic programming techniques to accelerate Markov Chain Monte Carlo (MCMC) sampling in a Bayesian approach to structure learning for discrete-variable DAGs. Peters et al. (2016); Ghassami et al. (2017); Rojas-Carulla et al. (2018) exploit invariance across environments to infer causal structure, which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs. Recently, Zheng et al. (2018), Yu et al. (2019) and Lachapelle et al. (2019) framed the structure search as a continuous optimization problem; however, these methods only use observational data and are non-trivial to extend to interventional data. In our paper, we present a continuous-optimization method that works on both observational and interventional data.

For interventional data, it is often assumed that the models have access to full intervention information, which is rare in the real world. Rothenhäusler et al. (2015) have investigated the case of additive shift interventions, while Eaton & Murphy (2007b) have examined the situation where the targets of experimental interventions are imperfect or uncertain. This is different from our setting, where the intervention is unknown to start with and is assumed to arise from other agents and the environment. Learning-based methods have been proposed (Guyon, 2013; 2014; Lopez-Paz et al., 2015) and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data (Kalainathan et al., 2018; Goudet et al., 2018). Neural network methods equipped with learned masks, such as (Ivanov et al., 2018; Li et al., 2019; Yoon et al., 2018; Douglas et al., 2017), exist in the literature, but only a few (Kalainathan et al., 2018) have been adapted to causal inference.
This last work is, however, tailored for causal inference on continuous variables and from observations only. Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network (GAN) Goodfellow et al. (2014) framework. 4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD Scope of Applicability and Objective. The proposed method, like any structure learning algorithm, assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables, which can be queried for samples. The SCM supports applying and retracting known or unknown interventions. The method can support infinite- or finite-data as well as infiniteor finite-intervention regimes. The objective is, then, to learn the SCM’s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM. 4.1 PROBLEM SETTING AND ASSUMPTIONS In this paper, we restrict the problem setting to specific, but still broad classes of SCMs and interventions. In particular, we assume that: Data is discrete-valued. The SCM’s random variables are all categorical. Causal sufficiency. For every data sample, the value of all variables are available; There are no latent confounders. Interventions are localized. They affect only a single variable (but which one may not be known). Interventions are soft. An intervention does not necessarily pin its target random variable to a fixed value (though it may, as a special case). It changes the relationship of a variable with its parents. Interventions do not stack. Before a new intervention is made, the previous one is fully retracted. This stops the SCM from wandering away from its initial, observational configuration after a long series of interventions. No control over interventions. The structure learning algorithm has control neither of the target, nor the nature of the next intervention on the SCM. For a detailed description of the interventions, refer to §A.2. 4.2 VARIATIONS AND PRIOR KNOWLEDGE In the problem setting above, the ground-truth SCM is completely opaque to us. However, we consider two interesting relaxations of this formulation: Complete or partial graph recovery. We may already know the existence of certain cause-effect edges and non-edges within the ground-truth SCM. If such prior information is available, it turns a complete graph recovery problem into one of partial graph recovery. Larger SCMs can be tackled if only parts of the graph need to be recovered. Known or unknown interventions: The interventions can either be known or unknown to the learned model. We demonstrate that the proposed method can naturally incorporate this prior information to improve its performance. 4.3 METHOD OVERVIEW The proposed method is a score-based, iterative, continuousoptimization method consisting of three phases that flow into one other (See Figure 2). During the three-phase procedure, a structural representation of a DAG and a functional representation of a set of independent causal mechanisms are trained jointly until convergence. Because the structural and functional parameters are not independent and do influence each other, we train them in alternating phases, a form of block coordinate descent optimization. 4.3.1 PARAMETRIZATION We distinguish two sets of parameters: The structural parameters γ and the functional parameters θ. 
Given a graph of M variables, we parametrize the structure γ as a matrix RM×M such that σ(γij) is our belief in random variable Xj being a direct cause of Xi, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. The matrix σ(γ) is thus a soft adjacency matrix. The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set Xpa(i,C), with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG. 4.3.2 PHASE 1: GRAPH FITTING ON OBSERVATIONAL DATA During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure. We draw graph configurations Cij ∼ Ber(σ(γij)) and batches of observational data from the unintervened ground-truth SCM, then maximize the log-likelihood of the batch under that configuration using SGD. The use of graph configurations sampling from Bernoulli distributions is analogous to dropout on the inputs of the functional models (in our implementation, MLPs), giving us an ensemble of neural networks that can model the observational data. 4.3.3 PHASE 2: GRAPH SCORING ON INTERVENTIONAL DATA During Phase 2, a number of graph configurations are sampled from the current edge beliefs parametrized by γ, and scored on data samples drawn from the intervention SCM. Intervention applied: At the beginning of Phase 2, an intervention is applied to the ground-truth SCM. This intervention is not under the control of the method. In our implementation, and unbeknownst to the model, the target variable is chosen uniformly randomly from all M variables throughout the optimization process. Intervention predicted: If the target of the intervention is not known, it is predicted using a simple heuristic. A small number of interventional data samples are drawn from the SCM and more graphs are sampled from our current edge beliefs. The average log-likelihood of each individual variable Xi across the samples is then computed using the functional model parameters θ fine-tuned on observational data in Phase 1. The variable Xi showing the greatest deterioration in log-likelihood is assumed to be the target because the observational distribution most poorly predicts that variable. If the target of the intervention is known, then this is taken as ground-truth knowledge for the purpose of subsequent steps, and no prediction needs to be done. Graphs Sampled and Scored: A new set of interventional data samples and graph configurations are now drawn from the intervention SCM and edge beliefs respectively. The log-likelihood of the data batches under the hypothesized configurations is computed, with one modification: The contribution to the total log-likelihood of a sample X coming from the target (or predicted-target) intervention variable Xi is masked. Because Xi was intervened upon (in the manner of a Pearl do-operation, soft or hard), the values one gets for that variable should be taken as givens, not as contributors to the total log-likelihood of the sample. As well, no gradient should be allowed to propagate into the variable’s learned functional parametrization θi, because it was not actually responsible for the outcome. Intervention retracted: After Phase 2, the intervention is retracted, per our modelling assumptions. 
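A minimal sketch of one Phase-1 update may make the above concrete: draw an observational batch, sample a configuration C ∼ Ber(σ(γ)), and take an Adam step on θ under that configuration. It reuses the `learners` and `opt_theta` names from the earlier sketches; batch shapes and data encodings are assumptions.

```python
# Sketch of one Phase-1 step (graph fitting on observational data), assuming the
# MaskedConditional learners and opt_theta from the earlier sketches.
import torch
import torch.nn.functional as F

def phase1_step(learners, opt_theta, gamma, x_onehot, labels):
    # gamma: (M, M); x_onehot: (batch, M, N); labels: (batch, M) integer categories
    C = torch.bernoulli(torch.sigmoid(gamma.detach()))     # one hypothesis graph
    loss = 0.0
    for i, learner in enumerate(learners):
        logits = learner(x_onehot, C[i])                   # P(X_i | parents under C)
        loss = loss + F.cross_entropy(logits, labels[:, i])
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()                                        # updates theta only
    return float(loss)
```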
4.3.4 PHASE 3: CREDIT ASSIGNMENT TO STRUCTURAL PARAMETERS

During Phase 3, the scores of the interventional data batches over various graph configurations are aggregated into a gradient for the structural parameters γ. Because a discrete Bernoulli random sampling process was used to sample the graph configurations under which the log-likelihoods were computed, we require a gradient estimator to propagate gradient through to the γ structural parameters. Several alternatives exist, but we adopt for this purpose the REINFORCE-like gradient estimator gij proposed by Bengio et al. (2019):

gij = [ Σk (σ(γij) − c^(k)_ij) L_{C^(k),i}(X) ] / [ Σk L_{C^(k),i}(X) ],  ∀i, j ∈ {0, . . . , M−1}    (2)

where the (k) superscript indicates the values obtained for the k-th draw of C under the current edge beliefs parametrized by γ. Therefore, L_{C^(k),i}(X) can be read as the log-likelihood of variable Xi in the data sample X under the k'th configuration, C^(k), drawn from our edge beliefs. Using the estimated gradient, we then update γ with SGD, and return to Phase 1 of the continuous optimization process. The gradient estimator gij minimizes an implicit empirical risk objective with respect to γij. When the functional and structural parameters θ and γ are "sufficiently close" to their minima, the estimator gij empirically converges quickly towards that minimum γ*, as shown in Figure 16 of Appendix A.13.

Acyclic Constraint: We include a regularization term JDAG(γ) that penalizes length-2 cycles in the learned adjacency matrix σ(γ), with a tunable strength λDAG. The regularization term is JDAG(γ) = Σ_{i≠j} cosh(σ(γij) σ(γji)), with i, j ∈ {0, . . . , M−1}, and is derived from Zheng et al. (2018). The details of the derivation are in the Appendix. We explore several different values of λDAG and their effects in our experimental setup. Suppression of longer-length cycles was not found to be worthwhile for the increased computational expense.

5 EXPERIMENTAL SETUP AND RESULTS

We first evaluate the proposed method on a synthetic dataset where we have control over the number of variables and causal edges in the ground-truth SCM. This allows us to analyze the performance of the proposed method under various conditions. We then evaluate the proposed method on real-world datasets from the BnLearn dataset repository. We also consider the two variations of §4.2: recovering only part of the graph (when the rest is known), and exploiting knowledge of the intervention target. The summary of our findings is:
1) We show strong results for graph recovery on all synthetic graphs in comparison with other baselines, measured by Hamming distance.
2) The proposed method achieves high accuracy on partial graph recovery for large, real-world graphs.
3) The proposed method's intervention-target prediction heuristic closes the gap between the known-target and unknown-target intervention scenarios.
4) The proposed method generalizes well to unseen interventions.
5) The proposed method's time-to-solution scaling appears to be driven by the number of edges in the ground-truth graph more so than by the number of variables.

5.1 MODEL DESCRIPTION

Learner model. Without loss of generality, we let θi = {W0_i, B0_i, W1_i, B1_i} define a stack of M one-hidden-layer MLPs, one for each random variable Xi. A more appropriate model, such as a CNN, can be chosen using domain-specific knowledge; the primary advantage of using MLPs is that the hypothesized DAG configurations cij can be readily used to mask the inputs of MLP i, as shown in Figure 3.
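Closing the loop on Phase 3 (§4.3.4), the hedged sketch below computes the estimator of Eq. (2) from per-configuration quantities gathered during scoring, together with the sparsity and length-2-cycle penalties. Array names and shapes are assumptions, and in a full implementation the regularizer term would typically be differentiated by autograd, as in line 34 of Algorithm 1.

```python
# Sketch of the Phase-3 structural gradient of Eq. (2) and of the penalties.
# sigma_minus_c[k] holds sigma(gamma) - C^(k); loglik[k, i] holds L_{C^(k), i}(X).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def structural_gradient(sigma_minus_c, loglik):
    # sigma_minus_c: (K, M, M); loglik: (K, M)
    num = np.einsum('kij,ki->ij', sigma_minus_c, loglik)   # sum_k (sigma - c^(k)) * L
    den = loglik.sum(axis=0)[:, None] + 1e-12              # sum_k L, broadcast over j
    return num / den                                       # g_ij of Equation (2)

def dag_sparse_penalty(gamma, lam_sparse=0.1, lam_dag=0.5):
    # lam_sparse * |sigma(gamma)|_1 + lam_dag * sum_{i != j} cosh(sigma_ij * sigma_ji)
    s = sigmoid(gamma)
    off = ~np.eye(gamma.shape[0], dtype=bool)
    return lam_sparse * np.abs(s[off]).sum() + lam_dag * np.cosh((s * s.T)[off]).sum()
```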
To force the structural equation fi corresponding to Xi to rely exclusively on its direct ancestor set pa(i, C) under hypothesis adjacency matrix C (See Eqn. 1), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . An example of the multi-MLP architecture with M=4 categorical variables of N=3 categories is shown in Figure 3. For more details, refer to Appendix A.4. Ground-truth model. Ground-truth SCM models are parametrized either as CPTs with parameters from BnLearn (in the case of real-world graphs), or as a second stack of MLPs similar to the learner model, with randomly-initialized functional parameters θGT and the desired adjacency matrix γGT. Interventions. In all experiments, at most one (soft) intervention is concurrently performed. To simulate a soft intervention on variable Xi, we reinitialize its ground-truth conditional distribution’s MLP parameters or CPT table randomly, while leaving the other variables untouched. For more details about the interventions, please refer to Appendix A.2. 5.2 SYNTHETIC DATASETS EXPERIMENTS We first evaluate the model’s performance on several randomlyinitialized SCMs with specific, representative graph structures. Since the number of possible DAGs grows super-exponentially with the number of variables, for M=4 up to 13 a selection of representative and edge-case DAGs are chosen. chainM and fullM (M=3-13) are the minimallyand maximally-connected M -variable DAGs, while treeM and jungleM are tree-like intermediate graphs. colliderM is the (M−1)→ 1 collider graph. The details of the setup is in Appendix A.6. Results. The model can recover most synthetic DAGs with high accuracy, as measured by Structural Hamming Distance (SHD) between learned and ground-truth DAGs. Table 1 shows our proposed method outperforming all other baseline methods, and learns all graphs perfectly for 3 to 13 variables (excepting full). For DAGs ranging from 3 to 8 variables, the AUROCs all eventually reach 1.0 (indicating perfect classification into edge/non-edge; Refer to Figure 4). For both large (M > 10) and dense DAGs (e.g. full13) the model begins encountering difficulties, as shown in Table 1 and Appendix §A.6.1. Small graphs (M < 10) are less sensitive than larger ones to our hyperparameters, notably the sparsity and acyclic regularization (§4.3.4) terms. In §A.5, we perform an analysis of these hyperparameters. 5.3 REAL-WORLD DATASETS: BNLEARN The Bayesian Network Repository is a collection of commonly-used causal Bayesian networks from the literature, suitable for Bayesian and causal learning benchmarks. We evaluate the proposed method on the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010), Asia (Lauritzen & Spiegelhalter, 1988) and Sachs (Sachs et al., 2005) datasets (M =5, 5, 8 and 11-variables respectively, maximum in-degree 3) in the BnLearn dataset repository. Results. As shown in Table 1, the proposed method perfectly recovers the DAG of Asia, while making a small number of errors (SHD=6) for Sachs (11-variables). It thus significantly outperforms all other baselines models. Figures 8 & 9 visualize what the model has learned at several stages of learning. Results for Cancer and Asia can be found in the appendices, Figure 17 and 18. 
5.4 COMPARISONS WITH OTHER METHODS As shown in Table 1, we compared the proposed SDI method to ICP ((Peters et al., 2016)), non-linear ICP ((Heinze-Deml et al., 2018b)), and (Eaton & Murphy, 2007b; Zheng et al., 2018; Yu et al., 2019) on Asia (Lauritzen & Spiegelhalter, 1988), Sachs (Sachs et al., 2005) and representative synthetic graphs. Eaton & Murphy (2007b) handles uncertain interventions and Peters et al. (2016), Heinze-Deml et al. (2018b) handles unknown interventions. However, neither attempts to predict the intervention. As shown in Table 1, we significantly outperform ICP, non-linear ICP, and the methods in (Yu et al., 2019) and (Zheng et al., 2018). Furthermore, Eaton & Murphy (2007b) runs out of memory for graphs larger than M = 10 because modelling of uncertain interventions is done using “shadow” random variables (as suggested by the authors), and thus recovering the DAG internally requires solving a d = 2M -variable problem. Their method’s extremely poor time- and space-scaling of O(d2d) makes it unusable beyond d > 20. For SDIs, we threshold our edge beliefs at σ(γ) = 0.5 to derive a graph, but the continued decrease of the cross-entropy loss (Figure 4) hints at SDI’s convergence onto the correct causal model. Please refer to Appendix §A.8 for full details and results. 5.5 GENERALIZATION TO PREVIOUSLY UNSEEN INTERVENTIONS It is often argued that machine learning approaches based purely on capturing joint distributions do not necessarily yield models that generalize to unseen experiments, since they do not explicitly model changes through interventions. By way of contrast, causal models use the concept of interventions to explicitly model changing environments and thus hold the promise of robustness under distributional shifts (Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2017). To test the robustness of causal modelling to previously unseen interventions (new values for an intervened variable), we evaluate a well-trained causal model against a variant, non-causal model trained with cij = 1, i 6= j. An intervention is performed on the ground-truth SCM, fresh interventional data is drawn from it, and the models, with knowledge of the intervention target, are asked to predict the other variables given their parents. The average log-likelihoods of the data under both models are computed and contrasted. The intervention variable’s contribution to the loglikelihood is masked. For all 3-variable graphs (chain3, fork3, collider3, confounder3), the causal model attributes higher log-likelihood to the intervention distribution’s samples than the non-causal variant, thereby demonstrating causal models’ superior generalization ability in transfer tasks. Table 2 collects these results. 5.6 VARIANT: PREDICTING INTERVENTIONS In Phase 2 (§4.3.3), we use a simple heuristic to predict the intervention target variable. Experiments show that this heuristic functions well in practice, yielding correct predictions far more often than by chance alone (Table 3). Guessing the intervention variable randomly, or not guessing it at all, leads to a significant drop in the model performance, even for 3-variable graphs (Fig. 11 Left). Training SDI with intervention prediction closely tracks training with leaked knowledge of the ground-truth intervention on larger, 7-variable graphs (Fig. 11 Right). 5.7 VARIANT: PARTIAL GRAPH RECOVERY Instead of learning causal structures de novo, we may have partial information about the ground-truth SCM and may only need to fill in missing information (§4.2). 
An example is protein structure discovery in biology, where some causal relationships have been definitely established and others remain open hypotheses. This is an easier task compared to full graph recovery, since the model only has to search for missing edges. Table 4: Partial Graph Recovery on Alarm (Beinlich et al., 1989) and Barley (Kristensen & Rasmussen, 2002). The model is asked to predict 50 edges in Barley and 40 edges in Alarm. The accuracy is measured in Structural Hamming Distance (SHD). SDI achieved over 90% accuracy on both graphs. Graph Alarm Barley Number of variables 37 48 Total Edges 46 84 Edges to recover 40 50 Recovered Edges 37 45 Errors (in SHD) 3 5 We evaluate the proposed method on Barley (Kristensen & Rasmussen, 2002) (M = 48) and Alarm (Beinlich et al., 1989) (M = 37) from the BnLearn repository. The model is asked to predict 50 edges from Barley and 40 edges from Alarm. The model reached ≥ 90% accuracy on both datasets, as shown in Table 4. 5.8 ABLATION AND ANALYSIS As shown in Figure 12, larger graphs (such as M > 6) and denser graphs (such as full8) are progressively more difficult to learn. For denser graphs, the learned models have higher sample complexity, higher variance and slightly worse results. Refer to Appendix §A.9 for complete results on all graphs. Hyperparameters. Hyperparameters for all experiments were kept identical unless otherwise stated. We study the effect of DAG and sparsity penalties in the following paragraph. For more details, please refer to Appendix §A.5 . Importance of regularization. Valid configurations C for a causal model are expected to be a) sparse and b) acyclic. To promote such solutions, we use DAG and sparsity regularization with tunable hyperparameters. We set the DAG penalty to 0.5 and sparsity penalty to 0.1. We run ablation studies on different values of the regularizers and study their effect. We find that smaller graphs are less sensitive to different values of regularizer than larger graphs. For details, refer to Appendix §A.12. Importance of dropout. To train functional parameter for an observational distribution, sampling adjacency matrices is required. We "drop out" each edge (with a probability of σ(γ)) in our experiments during functional parameter training of the conditional distributions of the SCM. Please refer to Appendix §A.14 for a more detailed analysis. 6 CONCLUSION In this work, we introduced an experimentally successful method (SDI) for causal structure discovery using continuous optimization, combining information from both observational and interventional data. We show in experiments that it can recover true causal structure, that it generalizes well to unseen interventions, that it compares very well against start-of-the-art causal discovery methods on real world datasets, and that it scales even better on problems where only part of the graph is known. Appendix Table of Contents A Annexes 13 A.1 Training Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 A.3 Experimental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.4 Model setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.5 Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.6 Synthetic data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.7 BnLearn data repository . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 16 A.8 Comparisons to other methods . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.9 Sparsity of Ground-Truth Graph . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.10 Predicting interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.11 Sample complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.12 Effect of regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.13 Near-Optimum Performance of Gradient Estimator . . . . . . . . . . . . . . . . 20 A.14 Importance of dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 A ANNEXES A.1 TRAINING ALGORITHM Algorithm 1 shows the pseudocode of the method described in §4. Typical values for the loop trip counts are found in §A.11. A.2 PRELIMINARIES Interventions. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph intervention data is needed (Eberhardt et al., 2012). Several types of common interventions may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth causal model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions for several reasons: First, they include hard interventions as a limiting case and hence are more general. Second, in many real-world scenarios, it is more difficult to perform a hard intervention compared to a soft one. We also deal with a special case of uncertain interventions, where the variable selected for intervention is random and unknown. We call these unidentified or unknown interventions. Intervention setup. For our experiments, the groundtruth models of the synthetic datasets are modeled by neural networks as described in section A.6. Each neural network models the relationship of the causal parents and a variable. We perform our intervention by first randomly selecting which variable to intervene on, then soft-intervening on it. The selected variable is sampled from a uniform distribution. The soft intervention is a reinitialization of its neural network’s parameters. Causal sufficiency. The inability to distinguish which causal graph, within a Markov equivalence class, is the correct one in the purely-observational setting is called the identifiability problem. In our setting, all variables are observed (there are no latent confounders) and all interventions are random and independent. Hence, within our setting, if the interventions are known, then the true causal Algorithm 1 Training Algorithm 1: procedure TRAINING(SCM Ground-Truth Entailed Distribution D, with M nodes and N categories) 2: Let i an index from 0 to M − 1 3: for I iterations, or until convergence, do 4: if I % reinitialization_period == 0 then 5: D ← reinitialize(D) 6: for F functional parameter training steps do . Phase 1 7: X ∼ D 8: C ∼ Ber(σ(γ)) 9: L = − logP (X|C ; θ) 10: θt+1 ← Adam(θt,∇θL) 11: for Q interventions do . Phase 2 12: I_N← randint(0, M − 1) . Uniform selection of target 13: Dint :=D with intervention on node I_N . Apply intervention 14: if predicting intervention then . 
Phase 2 Prediction 15: Li ← 0 ∀i 16: for NP prediction steps do 17: X ∼ Dint 18: for CP configurations do 19: C ∼ Ber(σ(γ)) 20: Li ← Li − logPi(X|Ci; θslow) ∀i 21: I_N← argmax(Li) 22: gammagrads, logregrets = [], [] . Phase 2 Scoring 23: for NS scoring steps do 24: X ∼ Dint 25: gammagrad, logregret = 0, 0 26: for CS configurations do 27: C ∼ Ber(σ(γ)) 28: Li = − logPi(X|Ci; θslow) ∀i 29: gammagrad += σ(γ)− C . Collect σ(γ)− C for Equation 2 30: logregret += ∑ i6=I_N Li . Collect LC(k),i (X) for Equation 2 31: gammagrads.append(gammagrad) 32: logregrets.append(logregret) . Phase 3 33: gij = ∑ k(σ(γij)− c (k) ij )LC (k) ,i (X)∑ k LC (k) ,i (X) . Gradient Estimator, Equation 2 34: g ← g +∇γ (λsparse Lsparse(γ) + λDAG LDAG(γ)) . Regularizers 35: γt+1 ← Adam(γt, g) graph is always identifiable in principle (Eberhardt et al., 2012; Heinze-Deml et al., 2018a). We also consider here situations where a single variable is randomly selected and intervened upon with a soft or imprecise intervention, its identity is unknown and must be inferred. In this case, there is no theoretical guarantee that the causal graph is identifiable. However, there is existing work Peters et al. (2016) that handles this scenario and the proposed method is also proven to work empirically. Faithfulness. It is possible for causally-related variables to be probabilistically independent purely by happenstance, such as when causal effects along multiple paths cancel out. This is called unfaithfulness. We assume that faithfulness holds, since the γ gradient estimate is extracted from shifts in probability distributions. However, because of the “soft” nature of our interventions and their infinite variety, it would be exceedingly unlikely for cancellation-related unfaithfulness to persist throughout the causal-learning procedure. A.3 EXPERIMENTAL SETUP For all datasets, the weight parameters for the learned model is initialized randomly. In order to not bias the structural parameters, all σ(γ) are initialized to 0.5 in the beginning of training. Details of hyperparameters of the learner model are described in Section A.5. The experimental setup for the groundtruth model for the synthetic data can be found in Section A.6 and the details for the real world data are described in Section A.7. A.4 MODEL SETUP As discussed in section 4, we model the M variables in the graph using M independent MLPs, each possesses an input layer of M × N neurons (for M one-hot vectors of length N each), a single hidden layer chosen arbitrarily to have max(4M, 4N) neurons with a LeakyReLU activation of slope 0.1, and a linear output layer of N neurons representing the unnormalized log-probabilities of each category (a softmax then recovers the conditional probabilities from these logits). To force fi to rely exclusively on the direct ancestor set pa(i, C) under adjacency matrix C (See Eqn. 2), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . The functional parameters of the MLP are the set θ = {W0ihjn,B0ih,W1inh,B1in}.An example of the multi-MLP architecture with M=3 categorical variables of N=2 categories is shown in Figure 3. A.5 HYPERPARAMETERS Learner model. All experiments on the synthetic graphs of size 3-8 use the same hyperparameters. Both the functional and structural parameters are optimized using the Adam optimizer Kingma & Ba (2014). We use a learning rate of 5e− 2 with alpha of 0.9 for the functional parameters, and we use a learning rate of 5e− 3 with alpha of 0.1 for the structural parameters. 
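As a concrete reading of these optimizer settings, the sketch below sets up the two Adam parameter groups with the learning rates quoted above. It is an illustrative PyTorch-style snippet, not the authors' code: the tensor shapes are arbitrary, and because the mapping of the quoted "alpha" values onto Adam's coefficients is not spelled out here, the betas are left at their defaults and should be treated as placeholders.

import torch

M, N = 8, 10                                    # illustrative problem sizes
H = max(4 * M, 4 * N)                           # hidden width from Appendix A.4

# Functional parameters: one small MLP per variable (first-layer weights only, schematically).
theta = [torch.nn.Parameter(torch.randn(M * N, H)) for _ in range(M)]
# Structural parameters: one logit per potential edge; sigma(0) = 0.5 at initialization.
gamma = torch.nn.Parameter(torch.zeros(M, M))

opt_theta = torch.optim.Adam(theta, lr=5e-2)    # functional parameters
opt_gamma = torch.optim.Adam([gamma], lr=5e-3)  # structural parameters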
We perform 5 runs of each experiment with random seeds 1− 5 and error bars are plotted for various graphs from size 3 to 8 in Figure 4. We use a batch size of 256. The L1 norm regularizer is set to 0.1 and the DAG regularizer is set to 0.5 for all experiments. For each γ update step, we sample 25 structural configurations from the current γ. In all experiments, we use 100 batches from the interventional distribution to predict the intervened node. A.6 SYNTHETIC DATA Synthetic datasets. The synthetic datasets in the paper are modeled by neural networks. All neural networks are 2 layered feed forward neural networks (MLPs) with Leaky ReLU activations between layers. The parameters of the neural network are initialized orthogonally within the range of (−2.5, 2.5). This range was selected such that they output a non-trivial distribution. The biases are initialized uniformly between (−1.1, 1.1). SCM with n variables are modeled by n feedforward neural networks (MLPs) as described in §5.1. We assume an acyclic causal graph so that we may easily sample from them. Hence, given any pair of random variables A and B, either A −→ B, B −→ A or A and B are independent. The MLP representing the ground-truth SCM has its weights θ initialized use orthogonal initialization with gain 2.5 and the biases are initialized using a uniform initialization between−1.1 and 1.1, which was empirically found to yield "interesting" yet learnable random SCMs. We study a variety of SCMs with different ground-truth edge structures γ. Our selection of synthetic graphs explores various extremes in the space of DAGs, stress-testing SDI. The chain graphs are the sparsest connected graphs possible, and are relatively easy to learn. The bidiag graphs are extensions of chain where there are 2-hops as well as single hops between nodes, doubling the number of edges and creating a meshed chain of forks and colliders. The jungle graphs are binary-tree-like graphs, but with each node connected directly to its grandparent in the tree as well. Half the nodes in a jungle graph are leaves, and the out-degree is up to 6. The collider graphs deliberately collide independent M − 1 ancestors into the last node; They stress maximum in-degree. Lastly, the full graphs are the maximally dense DAGs. All nodes are direct parents of all nodes below them in the topological order. The maximum in- and out-degree are both M − 1. These graphs are depicted in Figure 6. A.6.1 SYNTHETIC DATA RESULTS The model can recover correctly all synthetic graphs with 10 variables or less, as shown in Figure 10 and Table 1. For graphs larger than 10 variables, the model found it more challenging to recover the denser graphs (e.g. fullM), as shown in Table 1. Plots of the training curves showing average cross entropy (CE) and Area-Under-Curve(AUC/AUCROC) for edge probabilities of the learned graph against the ground-truth graph for synthetic SCMs with 3-13 variables are available in Figure 10. A.7 BNLEARN DATA REPOSITORY The repo contains many datasets with various sizes and structures modeling different variables. We evaluate the proposed method on 3 of the datasets in the repo, namely the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010) and Asia (Lauritzen & Spiegelhalter, 1988) datasets. The ground-truth model structure for the Cancer (Korb & Nicholson, 2010) and Earthquake (Korb & Nicholson, 2010) datasets are shown in Figure 7. 
Note that even though the structure for the two datasets seems to be the same, the conditional probability tables (CPTs) for these datasets are very different and hence results in different structured causal models (SCMs) for each. A.8 COMPARISONS TO OTHER METHODS As described in section 5.4, we compare to 5 other methods. The full comparison between SDIs and other methods on various graphs can be found in Table 1. One of these methods, DAG-GNN Yu et al. (2019), outputs 3 graphs based on different criteria: best mean square error (MSE), best negative loglikelihood (NLL) and best evidence lower bound (ELBO). We report performance of all outputs of DAG-GNN Yu et al. (2019) in Table 6, and the best one is selected for Table 1. A.9 SPARSITY OF GROUND-TRUTH GRAPH We evaluated the performance of SDI on graphs of various size and sparsity to better understand the performance of the model. We evaluated the proposed model on 4 representative types of graphs in increasing order of density. They are the chain, jungle, bidiag and full graphs. As shown in the results in figure 12, for graphs of size 5 or smaller, there is almost no difference in the final results in terms of variance and sample complexity. However, as the graphs gets larger (than 6), the denser graphs (full graphs) gets progressively more difficult to learn compared to the sparser graphs (chain, jungle and bidiag). The models learned for denser graphs have higher complexity, higher variance and slightly worse results. A.10 PREDICTING INTERVENTIONS In Phase 2, we score graph configurations based on how well they fit the interventional data. We find that it is necessary to avoid disturbing the learned parameters of intervened variables, and to ignore its contribution to the total negative log-likelihood of the sample. Intuitively, this is because, having been intervened upon, that variable should be taken as a given. It should especially not be interpreted as a poorly-learned variable requiring a tuning of its functional parameters, because those functional parameters were not responsible for the value of that variable; The extrinsic intervention was. Since an intervened variable is likely to be unusually poorly predicted, we heuristically determine that the most poorly predicted variable is the intervention variable. We then zero out its contribution to the log-likelihood of the sample and block gradient into its functional parameters. Figure 11 illustrates the necessity of this process. When using the prediction heuristic, the training curve closely tracks training with ground-truth knowledge of the identity of the intervention. If no prediction is made, or a random prediction is made, training proceeds much more slowly, or fails entirely. A.11 SAMPLE COMPLEXITY Our method is heavily reliant on sampling of configurations and data in Phases 1 and 2. We present here the breakdown of the sample complexity. Let • I be the number of iterations of the method, (typical: 500-2000) • B the number of samples per batch, (typical: 256) • F the number of functional parameter training iterations in Phase 1, (typical: 10000) • Q the number of interventions performed in Phase 2, (typical: 100) • NP the number of data batches for prediction, (typical: 100) • CP the number of graph configurations drawn per prediction data batch, (typical: 10) • NS the number of data batches for scoring, (typical: 10) • CS the number of graph configurations drawn per scoring data batch. 
(typical: 20-30)

Then the total number of interventions performed, and configurations and samples drawn, over an entire run are:

Interventions = I·Q (= number of γ updates)   (3)
Samples = I·(F + Q·(NP + NS))·B   (4)
Configurations = I·(F + Q·(CP·NP + CS·NS))   (5)

where, in Eqns. (4) and (5), the F term accounts for Phase 1 and the Q·(...) term accounts for Phase 2. Because of the multiplicative effect of these factors, the number of data samples required can quickly spiral out of control. For typical values, as many as 500 × 10000 × 256 = 1.28e9 observational and 500 × 100 × (100 + 10) × 256 = 1.408e9 interventional samples are required. To alleviate this problem slightly, we limit the number of samples generated for each intervention; this limit is usually 500-2000.

A.12 EFFECT OF REGULARIZATION
Importance of sparsity regularizer. We use an L1 regularizer on the structure parameters γ to encourage a sparse representation of edges in the causal graph. In order to better understand its effect, we conducted ablation studies on the L1 regularizer. The regularizer has a small effect on the rate of convergence: the model converges somewhat faster with the regularizer, as shown in Figure 13. However, it does not seem to affect the final value the model converges to, as shown in Table 7.
Importance of DAG regularizer. We use an acyclic regularizer to discourage length-2 cycles in the learned model. We found that for small models (≤ 5 variables), the acyclic regularizer helps with faster convergence, without significantly improving the final cross-entropy. This is illustrated for the 3-variable graphs in Figure 14. However, for graphs larger than 5 variables, the acyclic regularizer starts playing an important role in encouraging the model to learn the correct structure. This is shown in the ablation study in Table 7.

A.13 NEAR-OPTIMUM PERFORMANCE OF GRADIENT ESTIMATOR
The gradient estimator gij that we use to minimize the empirical risk w.r.t. the structural parameters γ, defined in Eq. 2, is adapted from Bengio et al. (2019). We verify that the estimator samples the correct gradient with an experiment that tests convergence near the optimum. To do this, we pre-initialize the structural and functional parameters near the global minimum, and verify that γ converges. Specifically, the ground-truth functional parameters θ are copied and perturbed by a small Gaussian noise, while the ground-truth structural parameters γ are copied, but the confidences in an edge or non-edge are set to 88% and 12% rather than 100% and 0%. The experiment is then expected to quickly converge to the global minimum. As shown in Figure 16, the gradient estimator correctly enables Stochastic Gradient Descent towards the minimum, for the chain and jungle graphs of size 15, 20 and 25. The average cross-entropy rapidly approaches its floor of 0.01, a consequence of our clamping of all γij to the range ±5 (equivalently, clamping σ(γij) to the range [0.0067, 0.9933]).

A.14 IMPORTANCE OF DROPOUT
To train the functional parameters on an observational distribution, one needs to sample adjacency matrices. One may be tempted to make these the complete directed graph (all ones except for a zero diagonal), to give the MLPs maximum freedom to learn any potential causal relations themselves. We demonstrate that functional parameter training cannot be carried out this way, and that it is necessary to "drop out" edges (sampling each edge with probability given by the current σ(γ) in our experiments) during pretraining of the conditional distributions of the SCM.
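The contrast between the two masking schemes can be written down in a few lines. This is an illustrative NumPy sketch with an arbitrary graph size, not the training code itself:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
M = 4
gamma = np.zeros((M, M))                       # sigma(gamma) = 0.5 everywhere at initialization

# Edge dropout: each hypothesized edge is kept with probability sigma(gamma_ij),
# i.e. C ~ Ber(sigma(gamma)), exactly as configurations are drawn in Phase 1.
C_dropout = (rng.random((M, M)) < sigmoid(gamma)).astype(float)
np.fill_diagonal(C_dropout, 0.0)               # a variable never conditions on itself

# The tempting alternative: a fixed complete directed graph (all ones, zero diagonal).
C_all_ones = np.ones((M, M)) - np.eye(M)

# Only the first scheme exposes each conditional to the varying parent sets it will
# later be scored under; pretraining with C_all_ones fails to recover the graphs below.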
We attempt to recover the previously-recoverable graphs chain3, fork3 and confounder3 without dropout, but fail to do so, as shown in Figure 15. Figure 17: Cross-entropy for edge probability between learned and ground-truth SCM for Cancer at varying temperatures. Figure 18: Cross-entropy for edge probability between learned and ground-truth SCM. Left: The Earthquake dataset with 6 variables. Right: The Asia dataset with 8 variables
1. What is the focus and contribution of the paper on structure learning for causal Bayesian networks? 2. What are the strengths of the proposed approach, particularly in its iterative method and use of the do-formalism? 3. What are the weaknesses of the paper, especially regarding the brief definition of interventions and lack of clarity on certain assumptions? 4. Do you have any concerns about the choice of definitions used in the paper, such as the concept of "infinite intervention regimes"? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review
Recommendation: Accept
##########################################################################
Summary: The paper provides a novel approach in the area of structure learning for causal Bayesian networks. The authors suggest an iterative method that builds on the widely accepted do-formalism. The approach suggested fits the network before interventions, simulates the intervention on the fitted network, and then again assigns a likelihood score to the network parameters.
The paper concisely describes a novel algorithm for the notoriously difficult problem of causal structure learning. The contributions are clearly stated. The accompanying experimental results suggest competitive performance, especially regarding scaling in the number of variables. I recommend acceptance, even though a few details could have been described more precisely.
The definition of interventions is done extremely briefly (sec. 2); in my opinion, the choice of definitions used here would justify some accompanying examples for clarification (this would help especially to understand what is meant by "infinite intervention regimes" (sec. 4)). The assumption "no control over interventions" is not clear per se; here it would help to understand what omitting this assumption would imply. A clarification of why "the interventions can either be known or unknown" provides a relaxation of the formulation used (sec. 4.2) would be useful.
ICLR
Title Dependency Structure Discovery from Interventions Abstract Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is applicable even in the challenging and realistic case that the identity of the intervened upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository. 1 INTRODUCTION Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal structure learning. Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks (a Markov equivalence class), but only one is the most robust to distributional shifts: The one with the correct graph. A more powerful tool than BNs is thus needed to model causal relationships. Structural Causal Models (SCMs) are that tool. An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them (Peters et al., 2017, §6.2). Each assignment is a function of only the direct causes of a variable, plus an independent noise source. An SCM entails precisely one (observational) data distribution. Interventions on an SCM’s assignments, such as setting a random variable to a fixed value (a hard intervention), entail new interventional data distributions (Peters et al., 2017, §6.3). SCMs can be used to answer higher-order questions of cause-and-effect, up the ladder of causation (Pearl & Mackenzie, 2018). Causal structure learning using SCMs has been attempted in several disciplines including biology (Sachs et al., 2005; Hill et al., 2016), weather forecasting (Abramson et al., 1996) and medicine (Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010). Causal structure is most frequently learned from data drawn from observational distributions. Structure learning methods generally cannot do more than identify the causal graph up to a Markov equivalence class (Spirtes et al., 2000). In order to fully identify the true causal graph, a method must either make restrictive assumptions about the underlying data-generating process, such as linear but non-Gaussian data (Shimizu et al., 2006), or must access enough data from outside the observational distribution (i.e., from interventions). 
Under certain assumptions about the number, diversity, and nature of the interventions, the true underlying causal graph is always identifiable, given that the method knows which intervention was performed (Heckerman et al., 1995). In much of the prior work on causal model induction it is assumed that there is an experimenter and this experimenter performs interventions. However, in the real world, interventions can also be performed by other agents, which could lead to unknown interventions (interventions with unknown target variables). A few works have attempted to learn structures from unknown-intervention data (Eaton & Murphy, 2007a; Squires et al., 2020; Huang et al., 2020). A notable such work, (Mooij et al., 2016), has been extended in (Kocaoglu et al., 2019; Jaber et al., 2020). Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case.

Another common setting is when the graph structure is partially provided, but must be completed. An example is protein structure learning in biology, where we may have definitive knowledge of some causal edges in the protein-protein interactome, but the remaining causal edges must be discovered. We will call this setting "partial graph completion". This is an easier task compared to learning the entire graph, since it limits the number of edges that have to be learned.

Recently, a flurry of work on structure learning using continuous optimization methods has appeared (Zheng et al., 2018; Yu et al., 2019). These methods operate on observational data and are competitive with other methods. Because of the theoretical limitations on identification from purely observational data cited above, it would be interesting to extend these methods to interventional data. However, it is not straightforward to apply continuous optimization methods to structure learning from interventional data.

Our key contributions are to answer the following questions experimentally:
1. Can the proposed model recover the true causal structure? Yes, see Figure 4.
2. How does the proposed model compare against state-of-the-art causal methods on real-world datasets? Favourably; see §5.4 and Table 1.
3. Does the proposed model generalize well to unseen interventions? Yes, see §5.5.
4. How does the proposed model perform on partial graph recovery? It scales to ∼50 variables, while the other baselines cannot; see §5.7.

2 PRELIMINARIES
Causal modeling. A Structural Causal Model (SCM) (Peters et al., 2017) over a finite number M of random variables Xi is a set of structural assignments

Xi := fi(X_pa(i,C), Ni),   ∀i ∈ {0, . . . , M − 1}   (1)

Identifiability. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph structure, interventional data is needed (Eberhardt et al., 2012).
Interventions. There are several types of common interventions which may be available (Eaton & Murphy, 2007b). These are:
No intervention: only observational data is obtained from the ground truth model.
Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables.
Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed (see the sketch after this list).
Uncertain: the learner is not sure of which variable exactly the intervention affected directly.
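To make these notions concrete, the following minimal NumPy sketch builds a small categorical SCM as conditional probability tables, draws observational data by ancestral sampling, and applies a soft intervention by re-randomizing one variable's conditional distribution. The graph, the table parametrization and the sizes are illustrative only; the experiments in §5 use MLP-parametrized mechanisms instead.

import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 2                                  # 3 categorical variables with 2 categories each
parents = {0: [], 1: [0], 2: [0, 1]}         # a full3-style DAG: X0 -> X1, X0 -> X2, X1 -> X2

def random_cpt(n_parents):
    """One categorical distribution over N values per parent configuration."""
    table = rng.random((N,) * n_parents + (N,))
    return table / table.sum(axis=-1, keepdims=True)

cpts = {i: random_cpt(len(parents[i])) for i in range(M)}

def ancestral_sample(tables):
    """Sample all variables in topological order 0, 1, ..., M-1."""
    x = np.zeros(M, dtype=int)
    for i in range(M):
        p = tables[i][tuple(x[j] for j in parents[i])]
        x[i] = rng.choice(N, p=p)
    return x

observational = np.stack([ancestral_sample(cpts) for _ in range(1000)])

# Soft intervention on a random, unidentified target: its conditional distribution is
# replaced, while its parent set and every other mechanism stay untouched.
target = rng.integers(M)
soft = dict(cpts)
soft[target] = random_cpt(len(parents[target]))
interventional = np.stack([ancestral_sample(soft) for _ in range(1000)])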
Here we make use of soft interventions because they include hard interventions as a limiting case and hence are more general.
Structure discovery using continuous optimization. Structure discovery is a super-exponential search problem over all possible directed acyclic graphs (DAGs). Previous continuous-optimization structure learning works (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates "DAG-ness" as an additional penalty to be optimized. If there are M variables, the strategy of considering all the possible structural graphs as separate hypotheses is not feasible because it would require maintaining O(2^(M^2)) models of the data.

3 RELATED WORK
The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem (Pearl, 1995; 2009; Spirtes et al., 2000). Different approaches have been studied: score-based, constraint-based, asymmetry-based and continuous-optimization methods. Score-based methods search through the space of all possible directed acyclic graphs (DAGs) representing the causal structure based on some form of scoring function for network structures (Heckerman et al., 1995; Chickering, 2002; Tsamardinos et al., 2006; Hauser & Bühlmann, 2012; Goudet et al., 2017; Cooper & Yoo, 1999; Zhu & Chen, 2019). Constraint-based methods (Spirtes et al., 2000; Sun et al., 2007; Zhang et al., 2012; Monti et al., 2019; Zhu & Chen, 2019) infer the DAG by analyzing conditional independences in the data. Eaton & Murphy (2007c) use dynamic programming techniques to accelerate Markov Chain Monte Carlo (MCMC) sampling in a Bayesian approach to structure learning for discrete-variable DAGs. Peters et al. (2016); Ghassami et al. (2017); Rojas-Carulla et al. (2018) exploit invariance across environments to infer causal structure, which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs. Recently, (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) framed the structure search as a continuous optimization problem; however, these methods only use observational data and are non-trivial to extend to interventional data. In this paper, we present a continuous-optimization method that works on both observational and interventional data.
For interventional data, it is often assumed that the model has access to full intervention information, which is rare in the real world. Rothenhäusler et al. (2015) have investigated the case of additive shift interventions, while Eaton & Murphy (2007b) have examined the situation where the targets of experimental interventions are imperfect or uncertain. This is different from our setting, where the intervention is unknown to start with and is assumed to arise from other agents and the environment.
Learning-based methods have been proposed (Guyon, 2013; 2014; Lopez-Paz et al., 2015) and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data (Kalainathan et al., 2018; Goudet et al., 2018). Neural network methods equipped with learned masks, such as (Ivanov et al., 2018; Li et al., 2019; Yoon et al., 2018; Douglas et al., 2017), exist in the literature, but only a few (Kalainathan et al., 2018) have been adapted to causal inference.
This last work is, however, tailored for causal inference on continuous variables and from observations only. Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network (GAN) Goodfellow et al. (2014) framework. 4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD Scope of Applicability and Objective. The proposed method, like any structure learning algorithm, assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables, which can be queried for samples. The SCM supports applying and retracting known or unknown interventions. The method can support infinite- or finite-data as well as infiniteor finite-intervention regimes. The objective is, then, to learn the SCM’s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM. 4.1 PROBLEM SETTING AND ASSUMPTIONS In this paper, we restrict the problem setting to specific, but still broad classes of SCMs and interventions. In particular, we assume that: Data is discrete-valued. The SCM’s random variables are all categorical. Causal sufficiency. For every data sample, the value of all variables are available; There are no latent confounders. Interventions are localized. They affect only a single variable (but which one may not be known). Interventions are soft. An intervention does not necessarily pin its target random variable to a fixed value (though it may, as a special case). It changes the relationship of a variable with its parents. Interventions do not stack. Before a new intervention is made, the previous one is fully retracted. This stops the SCM from wandering away from its initial, observational configuration after a long series of interventions. No control over interventions. The structure learning algorithm has control neither of the target, nor the nature of the next intervention on the SCM. For a detailed description of the interventions, refer to §A.2. 4.2 VARIATIONS AND PRIOR KNOWLEDGE In the problem setting above, the ground-truth SCM is completely opaque to us. However, we consider two interesting relaxations of this formulation: Complete or partial graph recovery. We may already know the existence of certain cause-effect edges and non-edges within the ground-truth SCM. If such prior information is available, it turns a complete graph recovery problem into one of partial graph recovery. Larger SCMs can be tackled if only parts of the graph need to be recovered. Known or unknown interventions: The interventions can either be known or unknown to the learned model. We demonstrate that the proposed method can naturally incorporate this prior information to improve its performance. 4.3 METHOD OVERVIEW The proposed method is a score-based, iterative, continuousoptimization method consisting of three phases that flow into one other (See Figure 2). During the three-phase procedure, a structural representation of a DAG and a functional representation of a set of independent causal mechanisms are trained jointly until convergence. Because the structural and functional parameters are not independent and do influence each other, we train them in alternating phases, a form of block coordinate descent optimization. 4.3.1 PARAMETRIZATION We distinguish two sets of parameters: The structural parameters γ and the functional parameters θ. 
Given a graph of M variables, we parametrize the structure γ as a matrix RM×M such that σ(γij) is our belief in random variable Xj being a direct cause of Xi, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. The matrix σ(γ) is thus a soft adjacency matrix. The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set Xpa(i,C), with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG. 4.3.2 PHASE 1: GRAPH FITTING ON OBSERVATIONAL DATA During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure. We draw graph configurations Cij ∼ Ber(σ(γij)) and batches of observational data from the unintervened ground-truth SCM, then maximize the log-likelihood of the batch under that configuration using SGD. The use of graph configurations sampling from Bernoulli distributions is analogous to dropout on the inputs of the functional models (in our implementation, MLPs), giving us an ensemble of neural networks that can model the observational data. 4.3.3 PHASE 2: GRAPH SCORING ON INTERVENTIONAL DATA During Phase 2, a number of graph configurations are sampled from the current edge beliefs parametrized by γ, and scored on data samples drawn from the intervention SCM. Intervention applied: At the beginning of Phase 2, an intervention is applied to the ground-truth SCM. This intervention is not under the control of the method. In our implementation, and unbeknownst to the model, the target variable is chosen uniformly randomly from all M variables throughout the optimization process. Intervention predicted: If the target of the intervention is not known, it is predicted using a simple heuristic. A small number of interventional data samples are drawn from the SCM and more graphs are sampled from our current edge beliefs. The average log-likelihood of each individual variable Xi across the samples is then computed using the functional model parameters θ fine-tuned on observational data in Phase 1. The variable Xi showing the greatest deterioration in log-likelihood is assumed to be the target because the observational distribution most poorly predicts that variable. If the target of the intervention is known, then this is taken as ground-truth knowledge for the purpose of subsequent steps, and no prediction needs to be done. Graphs Sampled and Scored: A new set of interventional data samples and graph configurations are now drawn from the intervention SCM and edge beliefs respectively. The log-likelihood of the data batches under the hypothesized configurations is computed, with one modification: The contribution to the total log-likelihood of a sample X coming from the target (or predicted-target) intervention variable Xi is masked. Because Xi was intervened upon (in the manner of a Pearl do-operation, soft or hard), the values one gets for that variable should be taken as givens, not as contributors to the total log-likelihood of the sample. As well, no gradient should be allowed to propagate into the variable’s learned functional parametrization θi, because it was not actually responsible for the outcome. Intervention retracted: After Phase 2, the intervention is retracted, per our modelling assumptions. 
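A schematic sketch of this Phase 2 logic, together with the Phase 3 aggregation described next, is given below in NumPy. The helper log_lik_per_var(X, C), which should return the per-variable log-likelihood of a data batch X under configuration C with the current functional parameters, is an assumed stand-in for the learner model; the function names and sample counts are ours, not the authors'.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_config(gamma, rng):
    """Draw a hypothesis adjacency matrix C ~ Ber(sigma(gamma))."""
    return (rng.random(gamma.shape) < sigmoid(gamma)).astype(float)

def predict_target(batches, gamma, log_lik_per_var, rng, n_cfg=10):
    """Prediction heuristic: the variable whose log-likelihood deteriorates most
    under the observationally fit model is taken to be the intervention target."""
    nll = np.zeros(gamma.shape[0])
    for X in batches:                                  # interventional data batches
        for _ in range(n_cfg):
            nll -= log_lik_per_var(X, sample_config(gamma, rng))
    return int(np.argmax(nll))

def score_configs(batches, gamma, log_lik_per_var, target, rng, n_cfg=20):
    """Scoring: per-configuration, per-variable negative log-likelihoods with the
    intervened variable masked out, plus the (sigma(gamma) - C) terms for Phase 3."""
    sig_minus_c, regrets = [], []
    for X in batches:
        for _ in range(n_cfg):
            C = sample_config(gamma, rng)
            reg = -log_lik_per_var(X, C)               # shape (M,)
            reg[target] = 0.0                          # the intervened variable is taken as a given
            sig_minus_c.append(sigmoid(gamma) - C)
            regrets.append(reg)
    return np.stack(sig_minus_c), np.stack(regrets)    # shapes (K, M, M) and (K, M)

def gamma_gradient(sig_minus_c, regrets, eps=1e-8):
    """Phase 3 aggregation in the spirit of Eq. 2:
    g_ij = sum_k (sigma(gamma_ij) - c_ij^(k)) L_{C^(k),i} / sum_k L_{C^(k),i}."""
    num = (sig_minus_c * regrets[:, :, None]).sum(axis=0)
    den = regrets.sum(axis=0)[:, None] + eps
    return num / den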
4.3.4 PHASE 3: CREDIT ASSIGNMENT TO STRUCTURAL PARAMETERS
During Phase 3, the scores of the interventional data batches over various graph configurations are aggregated into a gradient for the structural parameters γ. Because a discrete Bernoulli random sampling process was used to sample the graph configurations under which the log-likelihoods were computed, we require a gradient estimator to propagate gradients through to the γ structural parameters. Several alternatives exist, but we adopt for this purpose the REINFORCE-like gradient estimator gij proposed by Bengio et al. (2019):

g_ij = [ Σ_k (σ(γ_ij) − c_ij^(k)) · L_{C^(k),i}(X) ] / [ Σ_k L_{C^(k),i}(X) ],   ∀ i, j ∈ {0, . . . , M−1}   (2)

where the (k) superscript indicates the values obtained for the k-th draw of C under the current edge beliefs parametrized by γ. Therefore, L_{C^(k),i}(X) can be read as the log-likelihood of variable Xi in the data sample X under the k-th configuration, C^(k), drawn from our edge beliefs. Using the estimated gradient, we then update γ with SGD, and return to Phase 1 of the continuous optimization process.
The gradient estimator gij minimizes an implicit empirical risk objective with respect to γij. When the functional and structural parameters θ and γ are "sufficiently close" to their minima, the estimator gij empirically converges quickly towards that minimum γ∗, as shown in Figure 16 of Appendix A.13.
Acyclic Constraint: We include a regularization term JDAG(γ) that penalizes length-2 cycles in the learned adjacency matrix σ(γ), with a tunable strength λDAG. The regularization term is

J_DAG(γ) = Σ_{i≠j} cosh(σ(γ_ij) σ(γ_ji)),   i, j ∈ {0, . . . , M−1}

and is derived from Zheng et al. (2018). The details of the derivation are in the Appendix. We explore several different values of λDAG and their effects in our experimental setup. Suppression of longer-length cycles was not found to be worth the increased computational expense.

5 EXPERIMENTAL SETUP AND RESULTS
We first evaluate the proposed method on a synthetic dataset where we have control over the number of variables and causal edges in the ground-truth SCM. This allows us to analyze the performance of the proposed method under various conditions. We then evaluate the proposed method on real-world datasets from the BnLearn dataset repository. We also consider the two variations of §4.2: recovering only part of the graph (when the rest is known), and exploiting knowledge of the intervention target. The summary of our findings is:
1) We show strong results for graph recovery for all synthetic graphs in comparison with other baselines, measured by Hamming distance.
2) The proposed method achieves high accuracy on partial graph recovery for large, real-world graphs.
3) The proposed method's intervention target prediction heuristic closes the gap between the known-target and unknown-target intervention scenarios.
4) The proposed method generalizes well to unseen interventions.
5) The proposed method's time-to-solution scaling appears to be driven by the number of edges in the ground-truth graph more so than by the number of variables.

5.1 MODEL DESCRIPTION
Learner model. Without loss of generality, we let θi = {W0i, B0i, W1i, B1i} define a stack of M one-hidden-layer MLPs, one for each random variable Xi. A more appropriate model, such as a CNN, can be chosen using domain-specific knowledge; the primary advantage of using MLPs is that the hypothesized DAG configurations cij can be readily used to mask the inputs of MLP i, as shown in Figure 3.
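The masking just described can be sketched in a few lines. The snippet below is a NumPy illustration of a single masked MLP f_i following the shape conventions of §A.4 (an M×N block of one-hot inputs, a hidden layer of width max(4M, 4N) with LeakyReLU slope 0.1, and N output logits); the parameter names and random inputs are placeholders rather than the trained model.

import numpy as np

def leaky_relu(x, slope=0.1):
    return np.where(x > 0, x, slope * x)

def masked_mlp_logits(X_onehot, c_row, W0, B0, W1, B1):
    """Unnormalized log-probabilities for one variable X_i.
    X_onehot: (batch, M, N) one-hot encodings of all M variables;
    c_row:    row i of the hypothesis adjacency matrix C, so f_i only ever sees its
              hypothesized direct parents (the diagonal of C is zero)."""
    masked = X_onehot * c_row[None, :, None]           # zero out non-parents
    h = leaky_relu(masked.reshape(len(X_onehot), -1) @ W0 + B0)
    return h @ W1 + B1                                 # (batch, N) logits

M, N = 4, 3
H = max(4 * M, 4 * N)
rng = np.random.default_rng(0)
W0, B0 = 0.1 * rng.standard_normal((M * N, H)), np.zeros(H)
W1, B1 = 0.1 * rng.standard_normal((H, N)), np.zeros(N)
X = np.eye(N)[rng.integers(N, size=(8, M))]            # a toy one-hot batch of 8 samples
logits = masked_mlp_logits(X, np.array([0.0, 1.0, 1.0, 0.0]), W0, B0, W1, B1)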
To force the structural equation fi corresponding to Xi to rely exclusively on its direct ancestor set pa(i, C) under hypothesis adjacency matrix C (See Eqn. 1), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . An example of the multi-MLP architecture with M=4 categorical variables of N=3 categories is shown in Figure 3. For more details, refer to Appendix A.4. Ground-truth model. Ground-truth SCM models are parametrized either as CPTs with parameters from BnLearn (in the case of real-world graphs), or as a second stack of MLPs similar to the learner model, with randomly-initialized functional parameters θGT and the desired adjacency matrix γGT. Interventions. In all experiments, at most one (soft) intervention is concurrently performed. To simulate a soft intervention on variable Xi, we reinitialize its ground-truth conditional distribution’s MLP parameters or CPT table randomly, while leaving the other variables untouched. For more details about the interventions, please refer to Appendix A.2. 5.2 SYNTHETIC DATASETS EXPERIMENTS We first evaluate the model’s performance on several randomlyinitialized SCMs with specific, representative graph structures. Since the number of possible DAGs grows super-exponentially with the number of variables, for M=4 up to 13 a selection of representative and edge-case DAGs are chosen. chainM and fullM (M=3-13) are the minimallyand maximally-connected M -variable DAGs, while treeM and jungleM are tree-like intermediate graphs. colliderM is the (M−1)→ 1 collider graph. The details of the setup is in Appendix A.6. Results. The model can recover most synthetic DAGs with high accuracy, as measured by Structural Hamming Distance (SHD) between learned and ground-truth DAGs. Table 1 shows our proposed method outperforming all other baseline methods, and learns all graphs perfectly for 3 to 13 variables (excepting full). For DAGs ranging from 3 to 8 variables, the AUROCs all eventually reach 1.0 (indicating perfect classification into edge/non-edge; Refer to Figure 4). For both large (M > 10) and dense DAGs (e.g. full13) the model begins encountering difficulties, as shown in Table 1 and Appendix §A.6.1. Small graphs (M < 10) are less sensitive than larger ones to our hyperparameters, notably the sparsity and acyclic regularization (§4.3.4) terms. In §A.5, we perform an analysis of these hyperparameters. 5.3 REAL-WORLD DATASETS: BNLEARN The Bayesian Network Repository is a collection of commonly-used causal Bayesian networks from the literature, suitable for Bayesian and causal learning benchmarks. We evaluate the proposed method on the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010), Asia (Lauritzen & Spiegelhalter, 1988) and Sachs (Sachs et al., 2005) datasets (M =5, 5, 8 and 11-variables respectively, maximum in-degree 3) in the BnLearn dataset repository. Results. As shown in Table 1, the proposed method perfectly recovers the DAG of Asia, while making a small number of errors (SHD=6) for Sachs (11-variables). It thus significantly outperforms all other baselines models. Figures 8 & 9 visualize what the model has learned at several stages of learning. Results for Cancer and Asia can be found in the appendices, Figure 17 and 18. 
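Looking back at the synthetic families of §5.2 and at how learned structures are scored, the sketch below builds ground-truth adjacency matrices for three of those families, thresholds a matrix of edge beliefs at σ(γ) = 0.5, and computes a simple Structural Hamming Distance. The adjacency convention (entry (i, j) = 1 iff X_j is a direct parent of X_i) matches the parametrization of γ; the SHD variant shown counts each wrong or missing directed edge once and is only an illustration of the metric, not necessarily the exact scoring behind the tables.

import numpy as np

def chain(M):                               # X0 -> X1 -> ... -> X_{M-1}
    A = np.zeros((M, M), dtype=int)
    for i in range(1, M):
        A[i, i - 1] = 1
    return A

def collider(M):                            # M-1 independent ancestors -> last node
    A = np.zeros((M, M), dtype=int)
    A[M - 1, :M - 1] = 1
    return A

def full(M):                                # every node is a parent of all later nodes
    return np.tril(np.ones((M, M), dtype=int), k=-1)

def shd(A_pred, A_true):
    """Directed Structural Hamming Distance: number of mismatched edge entries."""
    return int(np.abs(A_pred - A_true).sum())

# Example: learned edge beliefs sigma(gamma) for a 3-variable problem, thresholded at 0.5.
sigma_gamma = np.array([[0.0, 0.9, 0.1],
                        [0.1, 0.0, 0.2],
                        [0.8, 0.7, 0.0]])
A_pred = (sigma_gamma > 0.5).astype(int)
print(shd(A_pred, full(3)))                 # full3 has edges X0->X1, X0->X2, X1->X2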
5.4 COMPARISONS WITH OTHER METHODS
As shown in Table 1, we compared the proposed SDI method to ICP (Peters et al., 2016), non-linear ICP (Heinze-Deml et al., 2018b), and (Eaton & Murphy, 2007b; Zheng et al., 2018; Yu et al., 2019) on Asia (Lauritzen & Spiegelhalter, 1988), Sachs (Sachs et al., 2005) and representative synthetic graphs. Eaton & Murphy (2007b) handles uncertain interventions, and Peters et al. (2016) and Heinze-Deml et al. (2018b) handle unknown interventions. However, neither attempts to predict the intervention. As shown in Table 1, we significantly outperform ICP, non-linear ICP, and the methods in (Yu et al., 2019) and (Zheng et al., 2018). Furthermore, Eaton & Murphy (2007b) runs out of memory for graphs larger than M = 10 because modelling of uncertain interventions is done using "shadow" random variables (as suggested by the authors), and thus recovering the DAG internally requires solving a d = 2M-variable problem. Their method's extremely poor time- and space-scaling of O(d·2^d) makes it unusable beyond d > 20. For SDIs, we threshold our edge beliefs at σ(γ) = 0.5 to derive a graph, but the continued decrease of the cross-entropy loss (Figure 4) hints at SDI's convergence onto the correct causal model. Please refer to Appendix §A.8 for full details and results.
5.5 GENERALIZATION TO PREVIOUSLY UNSEEN INTERVENTIONS
It is often argued that machine learning approaches based purely on capturing joint distributions do not necessarily yield models that generalize to unseen experiments, since they do not explicitly model changes through interventions. By way of contrast, causal models use the concept of interventions to explicitly model changing environments and thus hold the promise of robustness under distributional shifts (Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2017). To test the robustness of causal modelling to previously unseen interventions (new values for an intervened variable), we evaluate a well-trained causal model against a variant, non-causal model trained with cij = 1, i ≠ j. An intervention is performed on the ground-truth SCM, fresh interventional data is drawn from it, and the models, with knowledge of the intervention target, are asked to predict the other variables given their parents. The average log-likelihoods of the data under both models are computed and contrasted. The intervention variable's contribution to the log-likelihood is masked. For all 3-variable graphs (chain3, fork3, collider3, confounder3), the causal model attributes higher log-likelihood to the intervention distribution's samples than the non-causal variant, thereby demonstrating causal models' superior generalization ability in transfer tasks. Table 2 collects these results.
5.6 VARIANT: PREDICTING INTERVENTIONS
In Phase 2 (§4.3.3), we use a simple heuristic to predict the intervention target variable. Experiments show that this heuristic functions well in practice, yielding correct predictions far more often than by chance alone (Table 3). Guessing the intervention variable randomly, or not guessing it at all, leads to a significant drop in the model performance, even for 3-variable graphs (Fig. 11 Left). Training SDI with intervention prediction closely tracks training with leaked knowledge of the ground-truth intervention on larger, 7-variable graphs (Fig. 11 Right).
5.7 VARIANT: PARTIAL GRAPH RECOVERY
Instead of learning causal structures de novo, we may have partial information about the ground-truth SCM and may only need to fill in missing information (§4.2).
An example is protein structure discovery in biology, where some causal relationships have been definitively established and others remain open hypotheses. This is an easier task compared to full graph recovery, since the model only has to search for missing edges.

Table 4: Partial Graph Recovery on Alarm (Beinlich et al., 1989) and Barley (Kristensen & Rasmussen, 2002). The model is asked to predict 50 edges in Barley and 40 edges in Alarm. The accuracy is measured in Structural Hamming Distance (SHD). SDI achieved over 90% accuracy on both graphs.

Graph                 Alarm   Barley
Number of variables   37      48
Total edges           46      84
Edges to recover      40      50
Recovered edges       37      45
Errors (in SHD)       3       5

We evaluate the proposed method on Barley (Kristensen & Rasmussen, 2002) (M = 48) and Alarm (Beinlich et al., 1989) (M = 37) from the BnLearn repository. The model is asked to predict 50 edges from Barley and 40 edges from Alarm. The model reached ≥ 90% accuracy on both datasets, as shown in Table 4.

5.8 ABLATION AND ANALYSIS
As shown in Figure 12, larger graphs (such as M > 6) and denser graphs (such as full8) are progressively more difficult to learn. For denser graphs, the learned models have higher sample complexity, higher variance and slightly worse results. Refer to Appendix §A.9 for complete results on all graphs.
Hyperparameters. Hyperparameters for all experiments were kept identical unless otherwise stated. We study the effect of the DAG and sparsity penalties in the following paragraph. For more details, please refer to Appendix §A.5.
Importance of regularization. Valid configurations C for a causal model are expected to be a) sparse and b) acyclic. To promote such solutions, we use DAG and sparsity regularization with tunable hyperparameters. We set the DAG penalty to 0.5 and the sparsity penalty to 0.1. We run ablation studies on different values of the regularizers and study their effect. We find that smaller graphs are less sensitive to different values of the regularizers than larger graphs. For details, refer to Appendix §A.12.
Importance of dropout. To train the functional parameters on the observational distribution, adjacency matrices must be sampled. In our experiments we "drop out" edges during functional parameter training of the conditional distributions of the SCM, keeping each edge with probability σ(γ) as in Phase 1. Please refer to Appendix §A.14 for a more detailed analysis.

6 CONCLUSION
In this work, we introduced an experimentally successful method (SDI) for causal structure discovery using continuous optimization, combining information from both observational and interventional data. We show in experiments that it can recover true causal structure, that it generalizes well to unseen interventions, that it compares very well against state-of-the-art causal discovery methods on real-world datasets, and that it scales even better on problems where only part of the graph is known.

Appendix
Table of Contents
A Annexes 13
A.1 Training Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
A.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
A.3 Experimental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A.4 Model setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A.5 Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A.6 Synthetic data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A.7 BnLearn data repository . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . 16 A.8 Comparisons to other methods . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.9 Sparsity of Ground-Truth Graph . . . . . . . . . . . . . . . . . . . . . . . . . . 17 A.10 Predicting interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.11 Sample complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 A.12 Effect of regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.13 Near-Optimum Performance of Gradient Estimator . . . . . . . . . . . . . . . . 20 A.14 Importance of dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 A ANNEXES A.1 TRAINING ALGORITHM Algorithm 1 shows the pseudocode of the method described in §4. Typical values for the loop trip counts are found in §A.11. A.2 PRELIMINARIES Interventions. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph intervention data is needed (Eberhardt et al., 2012). Several types of common interventions may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth causal model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions for several reasons: First, they include hard interventions as a limiting case and hence are more general. Second, in many real-world scenarios, it is more difficult to perform a hard intervention compared to a soft one. We also deal with a special case of uncertain interventions, where the variable selected for intervention is random and unknown. We call these unidentified or unknown interventions. Intervention setup. For our experiments, the groundtruth models of the synthetic datasets are modeled by neural networks as described in section A.6. Each neural network models the relationship of the causal parents and a variable. We perform our intervention by first randomly selecting which variable to intervene on, then soft-intervening on it. The selected variable is sampled from a uniform distribution. The soft intervention is a reinitialization of its neural network’s parameters. Causal sufficiency. The inability to distinguish which causal graph, within a Markov equivalence class, is the correct one in the purely-observational setting is called the identifiability problem. In our setting, all variables are observed (there are no latent confounders) and all interventions are random and independent. Hence, within our setting, if the interventions are known, then the true causal Algorithm 1 Training Algorithm 1: procedure TRAINING(SCM Ground-Truth Entailed Distribution D, with M nodes and N categories) 2: Let i an index from 0 to M − 1 3: for I iterations, or until convergence, do 4: if I % reinitialization_period == 0 then 5: D ← reinitialize(D) 6: for F functional parameter training steps do . Phase 1 7: X ∼ D 8: C ∼ Ber(σ(γ)) 9: L = − logP (X|C ; θ) 10: θt+1 ← Adam(θt,∇θL) 11: for Q interventions do . Phase 2 12: I_N← randint(0, M − 1) . Uniform selection of target 13: Dint :=D with intervention on node I_N . Apply intervention 14: if predicting intervention then . 
Phase 2 Prediction 15: Li ← 0 ∀i 16: for NP prediction steps do 17: X ∼ Dint 18: for CP configurations do 19: C ∼ Ber(σ(γ)) 20: Li ← Li − logPi(X|Ci; θslow) ∀i 21: I_N← argmax(Li) 22: gammagrads, logregrets = [], [] . Phase 2 Scoring 23: for NS scoring steps do 24: X ∼ Dint 25: gammagrad, logregret = 0, 0 26: for CS configurations do 27: C ∼ Ber(σ(γ)) 28: Li = − logPi(X|Ci; θslow) ∀i 29: gammagrad += σ(γ)− C . Collect σ(γ)− C for Equation 2 30: logregret += ∑ i6=I_N Li . Collect LC(k),i (X) for Equation 2 31: gammagrads.append(gammagrad) 32: logregrets.append(logregret) . Phase 3 33: gij = ∑ k(σ(γij)− c (k) ij )LC (k) ,i (X)∑ k LC (k) ,i (X) . Gradient Estimator, Equation 2 34: g ← g +∇γ (λsparse Lsparse(γ) + λDAG LDAG(γ)) . Regularizers 35: γt+1 ← Adam(γt, g) graph is always identifiable in principle (Eberhardt et al., 2012; Heinze-Deml et al., 2018a). We also consider here situations where a single variable is randomly selected and intervened upon with a soft or imprecise intervention, its identity is unknown and must be inferred. In this case, there is no theoretical guarantee that the causal graph is identifiable. However, there is existing work Peters et al. (2016) that handles this scenario and the proposed method is also proven to work empirically. Faithfulness. It is possible for causally-related variables to be probabilistically independent purely by happenstance, such as when causal effects along multiple paths cancel out. This is called unfaithfulness. We assume that faithfulness holds, since the γ gradient estimate is extracted from shifts in probability distributions. However, because of the “soft” nature of our interventions and their infinite variety, it would be exceedingly unlikely for cancellation-related unfaithfulness to persist throughout the causal-learning procedure. A.3 EXPERIMENTAL SETUP For all datasets, the weight parameters for the learned model is initialized randomly. In order to not bias the structural parameters, all σ(γ) are initialized to 0.5 in the beginning of training. Details of hyperparameters of the learner model are described in Section A.5. The experimental setup for the groundtruth model for the synthetic data can be found in Section A.6 and the details for the real world data are described in Section A.7. A.4 MODEL SETUP As discussed in section 4, we model the M variables in the graph using M independent MLPs, each possesses an input layer of M × N neurons (for M one-hot vectors of length N each), a single hidden layer chosen arbitrarily to have max(4M, 4N) neurons with a LeakyReLU activation of slope 0.1, and a linear output layer of N neurons representing the unnormalized log-probabilities of each category (a softmax then recovers the conditional probabilities from these logits). To force fi to rely exclusively on the direct ancestor set pa(i, C) under adjacency matrix C (See Eqn. 2), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element cij . The functional parameters of the MLP are the set θ = {W0ihjn,B0ih,W1inh,B1in}.An example of the multi-MLP architecture with M=3 categorical variables of N=2 categories is shown in Figure 3. A.5 HYPERPARAMETERS Learner model. All experiments on the synthetic graphs of size 3-8 use the same hyperparameters. Both the functional and structural parameters are optimized using the Adam optimizer Kingma & Ba (2014). We use a learning rate of 5e− 2 with alpha of 0.9 for the functional parameters, and we use a learning rate of 5e− 3 with alpha of 0.1 for the structural parameters. 
We perform 5 runs of each experiment with random seeds 1− 5 and error bars are plotted for various graphs from size 3 to 8 in Figure 4. We use a batch size of 256. The L1 norm regularizer is set to 0.1 and the DAG regularizer is set to 0.5 for all experiments. For each γ update step, we sample 25 structural configurations from the current γ. In all experiments, we use 100 batches from the interventional distribution to predict the intervened node. A.6 SYNTHETIC DATA Synthetic datasets. The synthetic datasets in the paper are modeled by neural networks. All neural networks are 2 layered feed forward neural networks (MLPs) with Leaky ReLU activations between layers. The parameters of the neural network are initialized orthogonally within the range of (−2.5, 2.5). This range was selected such that they output a non-trivial distribution. The biases are initialized uniformly between (−1.1, 1.1). SCM with n variables are modeled by n feedforward neural networks (MLPs) as described in §5.1. We assume an acyclic causal graph so that we may easily sample from them. Hence, given any pair of random variables A and B, either A −→ B, B −→ A or A and B are independent. The MLP representing the ground-truth SCM has its weights θ initialized use orthogonal initialization with gain 2.5 and the biases are initialized using a uniform initialization between−1.1 and 1.1, which was empirically found to yield "interesting" yet learnable random SCMs. We study a variety of SCMs with different ground-truth edge structures γ. Our selection of synthetic graphs explores various extremes in the space of DAGs, stress-testing SDI. The chain graphs are the sparsest connected graphs possible, and are relatively easy to learn. The bidiag graphs are extensions of chain where there are 2-hops as well as single hops between nodes, doubling the number of edges and creating a meshed chain of forks and colliders. The jungle graphs are binary-tree-like graphs, but with each node connected directly to its grandparent in the tree as well. Half the nodes in a jungle graph are leaves, and the out-degree is up to 6. The collider graphs deliberately collide independent M − 1 ancestors into the last node; They stress maximum in-degree. Lastly, the full graphs are the maximally dense DAGs. All nodes are direct parents of all nodes below them in the topological order. The maximum in- and out-degree are both M − 1. These graphs are depicted in Figure 6. A.6.1 SYNTHETIC DATA RESULTS The model can recover correctly all synthetic graphs with 10 variables or less, as shown in Figure 10 and Table 1. For graphs larger than 10 variables, the model found it more challenging to recover the denser graphs (e.g. fullM), as shown in Table 1. Plots of the training curves showing average cross entropy (CE) and Area-Under-Curve(AUC/AUCROC) for edge probabilities of the learned graph against the ground-truth graph for synthetic SCMs with 3-13 variables are available in Figure 10. A.7 BNLEARN DATA REPOSITORY The repo contains many datasets with various sizes and structures modeling different variables. We evaluate the proposed method on 3 of the datasets in the repo, namely the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010) and Asia (Lauritzen & Spiegelhalter, 1988) datasets. The ground-truth model structure for the Cancer (Korb & Nicholson, 2010) and Earthquake (Korb & Nicholson, 2010) datasets are shown in Figure 7. 
Note that even though the structure for the two datasets seems to be the same, the conditional probability tables (CPTs) for these datasets are very different and hence results in different structured causal models (SCMs) for each. A.8 COMPARISONS TO OTHER METHODS As described in section 5.4, we compare to 5 other methods. The full comparison between SDIs and other methods on various graphs can be found in Table 1. One of these methods, DAG-GNN Yu et al. (2019), outputs 3 graphs based on different criteria: best mean square error (MSE), best negative loglikelihood (NLL) and best evidence lower bound (ELBO). We report performance of all outputs of DAG-GNN Yu et al. (2019) in Table 6, and the best one is selected for Table 1. A.9 SPARSITY OF GROUND-TRUTH GRAPH We evaluated the performance of SDI on graphs of various size and sparsity to better understand the performance of the model. We evaluated the proposed model on 4 representative types of graphs in increasing order of density. They are the chain, jungle, bidiag and full graphs. As shown in the results in figure 12, for graphs of size 5 or smaller, there is almost no difference in the final results in terms of variance and sample complexity. However, as the graphs gets larger (than 6), the denser graphs (full graphs) gets progressively more difficult to learn compared to the sparser graphs (chain, jungle and bidiag). The models learned for denser graphs have higher complexity, higher variance and slightly worse results. A.10 PREDICTING INTERVENTIONS In Phase 2, we score graph configurations based on how well they fit the interventional data. We find that it is necessary to avoid disturbing the learned parameters of intervened variables, and to ignore its contribution to the total negative log-likelihood of the sample. Intuitively, this is because, having been intervened upon, that variable should be taken as a given. It should especially not be interpreted as a poorly-learned variable requiring a tuning of its functional parameters, because those functional parameters were not responsible for the value of that variable; The extrinsic intervention was. Since an intervened variable is likely to be unusually poorly predicted, we heuristically determine that the most poorly predicted variable is the intervention variable. We then zero out its contribution to the log-likelihood of the sample and block gradient into its functional parameters. Figure 11 illustrates the necessity of this process. When using the prediction heuristic, the training curve closely tracks training with ground-truth knowledge of the identity of the intervention. If no prediction is made, or a random prediction is made, training proceeds much more slowly, or fails entirely. A.11 SAMPLE COMPLEXITY Our method is heavily reliant on sampling of configurations and data in Phases 1 and 2. We present here the breakdown of the sample complexity. Let • I be the number of iterations of the method, (typical: 500-2000) • B the number of samples per batch, (typical: 256) • F the number of functional parameter training iterations in Phase 1, (typical: 10000) • Q the number of interventions performed in Phase 2, (typical: 100) • NP the number of data batches for prediction, (typical: 100) • CP the number of graph configurations drawn per prediction data batch, (typical: 10) • NS the number of data batches for scoring, (typical: 10) • CS the number of graph configurations drawn per scoring data batch. 
(typical: 20-30) Then the total number of interventions performed, and configurations and samples drawn, over an entire run are: Interventions = IQ = γ updates (3) Samples = I( F︸︷︷︸ Phase 1 +Q(NP +NS)︸ ︷︷ ︸ Phase 2 )B (4) Configurations = I( F︸︷︷︸ Phase 1 +Q(CPNP + CSNS)︸ ︷︷ ︸ Phase 2 ) (5) Because of the multiplicative effect of these factors, the number of data samples required can quickly spiral out of control. For typical values, as many as 500 × 10000 × 256 = 1.28e9 observational and 500 × 100 × (100 + 10) × 256 = 1.408e9 interventional samples are required. To alleviate this problem slightly, we limit the number of samples generated for each intervention; This limit is usually 500-2000. A.12 EFFECT OF REGULARIZATION Importance of sparsity regularizer. We use a L1 regularizer on the structure parameters γ to encourage a sparse representation of edges in the causal graph. In order to better understand the effect of the L1 regularizer, we conducted ablation studies on the L1 regularizer. It seems that the regularizer has an small effect on rate of converges and that the model converges faster with the regularizer, This is shown in Figure 13. However, this does not seem to affect the final value the model converges to, as is shown in Table 7. Importance of DAG regularizer. We use an acyclic regularizer to discourage length-2 cycles in the learned model. We found that for small models (≤ 5 variables), the acyclic regularizer helps with faster convergence, without improving significantly the final cross-entropy. This is illustrated for the 3-variable graphs in Figure 14. However, for graphs larger than 5 variables, the acyclic regularizer starts playing an important role in encouraging the model to learn the correct structure. This is shown in the ablation study in Table 7. A.13 NEAR-OPTIMUM PERFORMANCE OF GRADIENT ESTIMATOR The gradient estimator gij we use to minimize the empirical risk w.r.t. the structural parameters γ, defined in Eq. 2 is adapted from Bengio et al. (2019). We verify that the estimator samples the correct gradient by an experiment that tests convergence near the optimum. To do this, we pre-initialize the structural and functional parameters near the global minimum, and verify that γ converges. Specifically, the ground-truth functional parameters θ are copied and disturbed by a small Gaussian noise, while the ground-truth structural parameters γ are copied, but the confidences in an edge or non-edge are set to 88% and 12% rather than 100% and 0%. The experiment is then expected to quickly converge to the global minimum. As shown in Figure 16, the gradient estimator correctly enables Stochastic Gradient Descent towards the minimum, for the chain and jungle graphs of size 15, 20 and 25. The average cross-entropy rapidly approaches its floor of 0.01, a consequence of our clamping of all γij to the range ±5 (equivalently, clamping σ(γij) to the range [0.0067, 0.9933]). A.14 IMPORTANCE OF DROPOUT To train the functional parameters on an observational distribution, one would need sampling adjacency matrices. One may be tempted to make these “complete directed graph” (all-ones except for a zero diagonal), to give the MLP maximum freedom to learn any potential causal relations itself. We demonstrate that functional parameter training cannot be carried out this way, and that it is necessary to “drop out” each edge (with probability of the current γ value in our experiments) during pretraining of the conditional distributions of the SCM. 
We attempt to recover the previously-recoverable graphs chain3, fork3 and confounder3 without dropout, but fail to do so, as shown in Figure 15. Figure 17: Cross-entropy for edge probability between learned and ground-truth SCM for Cancer at varying temperatures. Figure 18: Cross-entropy for edge probability between learned and ground-truth SCM. Left: The Earthquake dataset with 6 variables. Right: The Asia dataset with 8 variables
1. What is the focus of the paper regarding structure learning from observational and interventional data? 2. What are the concerns regarding the method's ability to handle limited interventional datasets and samples? 3. How does the reviewer assess the effectiveness and efficiency of the proposed three-phase score-based iterative procedure? 4. Are there any questions about the output of the algorithm and its relationship to the Interventional Markov equivalence class? 5. What are the limitations of the method regarding its applicability to large graphs and its potential to return cyclic structures? 6. Do you have any suggestions for improving the method's performance and addressing its heuristic aspects? 7. How does the reviewer evaluate the novelty and significance of the proposed approach compared to prior works in the field?
Review
Review The authors propose a method for structure learning from observational and interventional data that uses a continuous optimization method. Data is discrete-valued, there are no hidden confounders, each intervention affects only one variable, but the location of it may be unknown. A three-phase score-based, iterative procedure is proposed. This work considers that in each interventional dataset, only one variable is intervened on. If we do not know about the target of the intervention, it seems reasonable that we also assume that we are not aware of the number of the targets. Unfortunately there are no results in the paper about what the output of the algorithm will actually be. Suppose we have only few interventional datasets (which is usually the case in reality). What can we say about the output of the algorithm? It is known that in this case, Interventional Markov equivalence class is the extent of identifiability [Hauser and Bulmann, 2012]. Can we hope that the algorithm returns an element from this class? In the Appendix, it is mentioned that the method typically requires 500-2000 iterations and 100 interventions per iteration. This means that around 10^5 interventions are needed. Also about 10^9 samples are needed. We note that in reality for example in medical data, we usually have access to very few interventional datasets each containing about 100 samples. It is not clear how the method performs on a graph with no prior structure knowledge with 30 vertices (which is a number that is usually not considered large in structure learning). Seems like this order is too large for the proposed method. The intervention prediction step in Phase 2 sounds very heuristic and is not clear under what conditions it will work. Also, it seems that it requires strong interventions. Regarding preventing the algorithm from returning cyclic structures, the authors state that suppression of more than length 2 cycles was not found to be worthwhile for the increased computational expense. This simply means that the algorithm may return cyclic structures which is contradictory to the original goal. There are other work on learning from interventions with unknown targets, for example: [Squires et al., Permutation-Based Causal Structure Learning with Unknown Intervention Targets], or [Huang et al., Causal Discovery from Heterogeneous/Nonstationary Data]. The definition of SCM given in the Introduction is only true for the case of causal sufficiency.
ICLR
Title A Fine-Grained Spectral Perspective on Neural Networks Abstract Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? How to set the range for learning rate tuning? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the Conjugate Kernel, CK, (also called the Neural Network-Gaussian Process Kernel), and the Neural Tangent Kernel, NTK. Roughly, the CK and the NTK tell us respectively “what a network looks like at initialization” and “what a network looks like during and after training.” Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra. 1 INTRODUCTION Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community. Recently, Valle-Pérez et al. (2018) observed that neural networks have a certain “simplicity bias” and proposed this as a solution to the generalization question. One of the ways with which they argued that this bias exists is the following experiment: they drew a large sample of boolean functions by randomly initializing neural networks and thresholding the output. They observed that there is a bias toward some "simple" functions which get sampled disproportionately more often. However, their experiments were only done for relu networks. Can one expect this “simplicity bias” to hold universally, for any architecture? A priori, this seems difficult, as the nonlinear nature seems to present an obstacle in reasoning about the distribution of random networks. However, this question turns out to be more easily treated if we allow the width to go to infinity. A long line of works starting with Neal (1995) and extended recently by Lee et al. (2018); Novak et al. (2018); Yang (2019) have shown that randomly initialized, infinite-width networks are distributed as Gaussian processes. These Gaussian processes also describe finite width random networks well (Valle-Pérez et al., 2018). We will refer to the corresponding kernels as the Conjugate Kernels (CK), following the terminology of Daniely et al. (2016). Given the CK K, the simplicity bias of a wide neural network can be read off quickly from the spectrum of K: If the largest eigenvalue of K accounts for most of trK, then a typical random network looks like a function from the top eigenspace of K. In this paper, we will use this spectral perspective to probe not only the simplicity bias, but more generally, questions regarding how hyperparameters affect the generalization of neural networks. Via the usual connection between Gaussian processes and linear models with features, the CK can be thought of as the kernel matrix associated to training only the last layer of a wide randomly initialized network. 
It is a remarkable recent advance (Jacot et al., 2018; Allen-Zhu et al., 2018a;c; Du et al., 2018) that, under a certain regime, a wide neural network of any depth evolves like a linear model even when training all parameters. The associated kernel is call the Neural Tangent Kernel, which is typically different from CK. While its theory was initially derived in the infinite width setting, Lee et al. (2019) confirmed with extensive experiment that this limit is predictive of finite width neural networks as well. Thus, just as the CK reveals information about what a network looks like at initialization, NTK reveals information about what a network looks like after training. As such, if we can understand how hyperparameters change the NTK, we can also hope to understand how they affect the performance of the corresponding finite-width network. Our Contributions In this paper, in addition to showing that the simplicity bias is not universal, we will attempt a first step at understanding the effects of the hyperparameters on generalization from a spectral perspective. At the foundation is a spectral theory of the CK and the NTK on the boolean cube. In Section 3, we show that these kernels, as integral operators on functions over the boolean cube, are diagonalized by the natural Fourier basis, echoing similar results for over the sphere (Smola et al., 2001). We also partially diagonalize the kernels over standard Gaussian, and show that, as expected, the kernels over the different distributions (boolean cube, sphere, standard Gaussian) behave very similarly in high dimensions. However, the spectrum is much easier to compute over the boolean cube: while the sphere and Gaussian eigenvalues would require integration against a kind of polynomials known as the Gegenbauer polynomials, the boolean ones only require calculating a linear combination of a small number of terms. For this reason, in the rest of the paper we focus on analyzing the eigenvalues over the boolean cube. Just as the usual Fourier basis over R has a notion of frequency that can be interpreted as a measure of complexity, so does the boolean Fourier basis (this is just the degree; see Section 3.1). While not perfect, we adopt this natural notion of complexity in this work; a “simple” function is then one that is well approximated by “low frequencies.” This spectral perspective immediately yields that the simplicity bias is not universal (Section 4). In particular, while it seems to hold more or less for relu networks, for sigmoidal networks, the simplicity bias can be made arbitrarily weak by changing the weight variance and the depth. In the extreme case, the random function obtained from sampling a deep erf network with large weights is distributed like a “white noise.” However, there is a very weak sense in which the simplicity bias does hold: the eigenvalues of more “complex” eigenspaces cannot be bigger than those of less “complex” eigenspaces (Thm 4.1). Next, we examine how hyperparameters affect the performance of neural networks through the lens of NTK and its spectrum. To do so, we first need to understand the simpler question of how a kernel affects the accuracy of the function learned by kernel regression. A coarse-grained theory, concerned with big-O asymptotics, exists from classical kernel literature (Yao et al., 2007; Raskutti et al., 2013; Wei et al.; Lin and Rosasco; Schölkopf and Smola, 2002). However, the fine-grained details, required for discerning the effect of hyperparameters, have been much less studied. 
We make a first attempt at a heuristic, fractional variance (i.e. what fraction of the trace of the kernel does an eigenspace contribute), for understanding how a minute change in kernel effects a change in performance. Intuitively, if an eigenspace has very large fractional variance, so that it accounts for most of the trace, then a ground truth function from this eigenspace should be very easy to learn. Using this heuristic, we make two predictions about neural networks, motivated by observations in the spectra of NTK and CK, and verify them with extensive experiments. • Deeper networks learn more complex features, but excess depth can be detrimental as well. Spectrally, depth can increase fractional variance of an eigenspace, but past an optimal depth, it will also decrease it. (Section 5) Thus, deeper is not always better. • Training all layers is better than training just the last layer when it comes to more complex features, but the opposite is true for simpler features. Spectrally, fractional variances of more “complex” eigenspaces for the NTK are larger than the correponding quantities of the CK. (Section 6) Finally, we use our spectral theory to predict the maximal nondiverging learning rate (“max learning rate”) of SGD (Section 7). In general, we will not only verify our theory with experiments on the theoretically interesting distributions, i.e. uniform measures over the boolean cube and the sphere, or the standard Gaussian, but also confirm these findings on real data like MNIST and CIFAR10 1. 1The code for computing the eigenvalues and for reproducing the plots of this paper is available at github. com/jxVmnLgedVwv6mNcGCBy/NNspectra, which will be open sourced upon publication. For space concerns, we review relevant literature along the flow of the main text, and relegate a more complete discussion of the related research landscape in Appendix A. 2 KERNELS ASSOCIATED TO NEURAL NETWORKS As mentioned in the introduction, we now know several kernels associated to infinite width, randomly initialized neural networks. The most prominent of these are the neural tangent kernel (NTK) (Jacot et al., 2018) and the conjugate kernel (CK) (Daniely et al., 2016), which is also called the NNGP kernel (Lee et al., 2018). We briefly review them below. First we introduce the following notation that we will repeatedly use. Definition 2.1. For φ : R→ R, write Vφ for the function that takes a PSD (positive semidefinite) kernel function to a PSD kernel of the same domain by the formula Vφ(K)(x, x ′) = E f∼N (0,K) φ(f(x))φ(f(x′)). Conjugate Kernel Neural networks are commonly thought of as learning a high-quality embedding of inputs to the latent space represented by the network’s last hidden layer, and then using its final linear layer to read out a classification given the embedding. The conjugate kernel is just the kernel associated to the embedding induced by a random initialization of the neural network. Consider an MLP with widths {nl}l, weight matrices {W l ∈ Rn l×nl−1}l, and biases {bl ∈ Rn l}l, l = 1, . . . , L. For simplicity of exposition, in this paper, we will only consider scalar output nL = 1. Suppose it is parametrized by the NTK parametrization, i.e. its computation is given recursively as h1(x) = σw√ n0 W 1x+ σbb 1 and hl(x) = σw√ nl−1 W lφ(hl−1(x)) + σbb l (MLP) with some hyperparameters σw, σb that are fixed throughout training2. At initialization time, suppose W lαβ , b l α ∼ N (0, 1) for each α ∈ [nl], β ∈ [nl−1]. 
It can be shown that, for each α ∈ [nl], hlα is a Gaussian process with zero mean and kernel function Σl in the limit as all hidden layers become infinitely wide (nl →∞, l = 1, . . . , L− 1), where Σl is defined inductively on l as Σ1(x, x′) def = σ2w(n 0)−1〈x, x′〉+ σ2b , Σl def = σ2wVφ(Σ l−1) + σ2b (CK) The kernel ΣL corresponding the the last layer L is the network’s conjugate kernel, and the associated Gaussian process limit is the reason for its alternative name Neural Network-Gaussian process kernel. In short, if we were to train a linear model with features given by the embedding x 7→ hL−1(x) when the network parameters are randomly sampled as above, then the CK is the kernel of this linear model. See Daniely et al. (2016); Lee et al. (2018) and Appendix F for more details. Neural Tangent Kernel On the other hand, the NTK corresponds to training the entire model instead of just the last layer. Intuitively, if we let θ be the entire set of parameters {W l}l ∪ {bl}l of Eq. (MLP), then for θ close to its initialized value θ0, we expect hL(x; θ)− hL(x; θ0) ≈ 〈∇θhL(x; θ0), θ − θ0〉 via a naive first-order Taylor expansion. In other words, hL(x; θ)− hL(x; θ0) behaves like a linear model with feature of x given by the gradient taken w.r.t. the initial network, ∇θhL(x; θ0), and the weights of this linear model are the deviation θ− θ0 of θ from its initial value. It turns out that, in the limit as all hidden layer widths tend to infinity, this intuition is correct (Jacot et al., 2018; Lee et al., 2018; Yang, 2019), and the following inductive formula computes the corresponding infinite-width kernel of this linear model: Θ1 def = Σ1, Θl(x, x′) def = Σl(x, x′) + σ2wΘ l−1(x, x′)Vφ′(Σ l−1)(x, x′). (NTK) Computing CK and NTK While in general, computing Vφ and Vφ′ requires evaluating a multivariate Gaussian expectation, in specific cases, such as when φ = relu or erf , there exists explicit, efficient formulas that only require pointwise evaluation of some simple functions (see Facts F.1 and F.2). This allows us to evaluate CK and NTK on a set X of inputs in only time O(|X |2L). 2SGD with learning rate α in this parametrization is roughly equivalent to SGD with learning rate α/width in the standard parametrization with Glorot initialization; see Lee et al. (2018) What Do the Spectra of CK and NTK Tell Us? In summary, the CK governs the distribution of a randomly initialized neural network and also the properties of training only the last layer of a network, while the NTK governs the dynamics of training (all parameters of) a neural network. A study of their spectra thus informs us of the “implicit prior” of a randomly initialized neural network as well as the “implicit bias” of GD in the context of training neural networks. In regards to the implicit prior at initialization, we know from Lee et al. (2018) that a randomly initialized network as in Eq. (MLP) is distributed as a Gaussian process N (0,K), where K is the corresponding CK, in the infinite-width limit. If we have the eigendecomposition K = ∑ i≥1 λiui ⊗ ui (1) with eigenvalues λi in decreasing order and corresponding eigenfunctions ui, then each sample from this GP can be obtained as ∑ i≥1 √ λiωiui, ωi ∼ N (0, 1). If, for example, λ1 ∑ i≥2 λi, then a typical sample function is just a very small perturbation of u1. We will see that for relu, this is indeed the case (Section 4), and this explains the “simplicity bias” in relu networks found by Valle-Pérez et al. (2018). 
Training the last layer of a randomly initialized network via full batch gradient descent for an infinite amount of time corresponds to Gaussian process inference with kernel K (Lee et al., 2018; 2019). A similar intuition holds for NTK: training all parameters of the network (Eq. (MLP)) for an infinite amount of time yields the mean prediction of the GPN (0,NTK) in expectation; see Lee et al. (2019) and Appendix F.4 for more discussion. Thus, the more the GP prior (governed by the CK or the NTK) is consistent with the ground truth function f∗, the more we expect the Gaussian process inference and GD training to generalize well. We can measure this consistency in the “alignment” between the eigenvalues λi and the squared coefficients a2i of f ∗’s expansion in the {ui}i basis. The former can be interpreted as the expected magnitude (squared) of the ui-component of a sample f ∼ N (0,K), and the latter can be interpreted as the actual magnitude squared of such component of f∗. In this paper, we will investigate an even cleaner setting where f∗ = ui is an eigenfunction. Thus we would hope to use a kernel whose ith eigenvalue λi is as large as possible. Neural Kernels From the forms of the equation Eqs. (CK) and (NTK) and the fact that Vφ(K)(x, x ′) only depends on K(x, x),K(x, x′), and K(x′, x′), we see that CK or NTK of MLPs takes the form K(x, y) = Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) (2) for some function Φ : R3 → R. We will refer to this kind of kernel as Neural Kernel in this paper. Kernels as Integral Operators We will consider input spaces of various forms X ⊆ Rd equipped with some probability measure. Then a kernel function K acts as an integral operator on functions f ∈ L2(X ) by Kf(x) = (Kf)(x) = E y∼X K(x, y)f(y). We will use the “juxtaposition syntax” Kf to denote this application of the integral operator. 3 Under certain assumptions, it then makes sense to speak of the eigenvalues and eigenfunctions of the integral operator K. While we will appeal to an intuitive understanding of eigenvalues and eigenfunctions in the main text below, we include a more formal discussion of Hilbert-Schmidt operators and their spectral theory in Appendix G for completeness. In the next section, we investigate the eigendecomposition of neural kernels as integral operators over different distributions. 3In cases when X is finite, K can be also thought of as a big matrix and f as a vector — but do not confuse Kf with their multiplication! If we use · to denote matrix multiplication, then the operator application Kf is the same as the matrix multiplication K ·D · f where D is the diagonal matrix encoding the probability values of each point in X . 3 THE SPECTRA OF NEURAL KERNELS 3.1 BOOLEAN CUBE We first consider a neural kernelK on the boolean cubeX = ddef= {±1}d, equipped with the uniform measure. In this case, since each x ∈ X has the same norm, K(x, y) = Φ ( 〈x,y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) effectively only depends on 〈x, y〉, so we will treat Φ as a single variate function in this section, Φ(c) = Φ(c, 1, 1). Brief review of basic Fourier analysis on the boolean cube d (O’Donnell (2014)). The space of real functions on d forms a 2d-dimensional space. Any such function has a unique expansion into a multilinear polynomial (polynomials whose monomials do not contain xpi , p ≥ 2, of any variable xi). For example, the majority function over 3 bits has the following unique multilinear expansion maj3 : 3 → 1, maj3(x1, x2, x3) = 1 2 (x1 + x2 + x3 − x1x2x3). 
In the language of Fourier analysis, the 2d multilinear monomial functions χS(x) def = xS def = ∏ i∈S xi, for each S ⊆ [d] (3) form a Fourier basis of the function space L2( d) = {f : d → R}, in the sense that their inner products satisfy E x∼ d χS(x)χT (x) = I(S = T ). Thus, any function f : d → R can be always written as f(x) = ∑ S⊆[d] f̂(S)χX(x) for a unique set of coefficients {f̂(S)}S⊆[d]. It turns out that K is always diagonalized by this Fourier basis {χS}S⊆[d]. Theorem 3.1. On the d-dimensional boolean cube d, for every S ⊆ [d], χS is an eigenfunction of K with eigenvalue µ|S| def = E x∈ d xSK(x,1) = E x∈ d xSΦ (∑ i xi/d ) , (4) where 1 = (1, . . . , 1) ∈ d. This definition of µ|S| does not depend on the choice S, only on the cardinality of S. These are all of the eigenfunctions of K by dimensionality considerations.4 Define T∆ to be the shift operator on functions over [−1, 1] that sends Φ(·) to Φ(· −∆). Then we can re-express the eigenvalue as follows. Lemma 3.2. With µk as in Thm 3.1, µk = 2 −d(I − T∆)k(I + T∆)d−kΦ(1) (5) = 2−d d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) (6) where Cd−k,kr def = ∑ j=0 (−1)r+j ( d− k j )( k r − j ) . (7) Eq. (5) will be important for computational purposes, and we will come back to discuss this more in Section 3.5. It also turns out µk affords a pretty expression via the Fourier series coefficients of Φ. As this is not essential to the main text, we relegate its exposition to Appendix H.1. 4Readers familiar with boolean Fourier analysis may be reminded of the noise operator Tρ, ρ ≤ 1 (O’Donnell, 2014, Defn 2.46). In the language of this work, Tρ is a neural kernel with eigenvalues µk = ρk. 3.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will treat Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics, and the eigenvalues are given by the coefficients of Φ against a system of orthogonal polynomials called Gegenbuaer polynomials. We relegate a complete review of this topic to Appendix H.2. 3.3 ISOTROPIC GAUSSIAN Now let’s consider X = Rd equipped with standard isotropic Gaussian N (0, I), so that K behaves like Kf(x) = E y∼N (0,I) K(x, y)f(y) = E y∼N (0,I) Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) f(y) for any f ∈ L2(N (0, I)). In contrast to the previous two sections, K will essentially depend on the effect of the norms ‖x‖ and ‖y‖ on Φ. Nevertheless, because an isotropic Gaussian vector can be obtained by sampling its direction uniformly from the sphere and its magnitude from a chi distribution, K can still be partially diagonalized into a sum of products between spherical harmonics and kernels on R equipped with a chi distribution (Thm H.14). In certain cases, we can obtain complete eigendecompositions, for example when Φ is positive homogeneous. See Appendix H.3 for more details. 3.4 KERNEL IS SAME OVER BOOLEAN CUBE, SPHERE, OR GAUSSIAN WHEN d 1 The reason we have curtailed a detailed discussion of neural kernels on the sphere and on the standard Gaussian is because, in high dimension, the kernel behaves the same under these distributions as under uniform distribution over the boolean cube. Indeed, by intuition along the lines of the central limit theorem, we expect that uniform distribution over a high dimension boolean cube should approximate high dimensional standard Gaussian. 
Similarly, by concentration of measure, most of the mass of a Gaussian is concentrated around a thin shell of radius √ d. Thus, morally, we expect the same kernel function K induces approximately the same integral operator on these three distributions in high dimension, and as such, their eigenvalues should also approximately coincide. We verify empirically and theoretically this is indeed the case in Appendix H.4. 3.5 COMPUTING THE EIGENVALUES As the eigenvalues of K over the different distributions are very close, we will focus in the rest of this paper on eigenvalues over the boolean cube. This has the additional benefit of being much easier to compute. Each eigenvalue over the sphere and the standard Gaussian requires an integration of Φ against a Gegenbauer polynomial. In high dimension d, these Gegenbauer polynomials varies wildly in a sinusoidal fashion, and blows up toward the boundary (see Fig. 15 in the Appendix). As such, it is difficult to obtain a numerically stable estimate of this integral in an efficient manner when d is large. In contrast, we have multiple ways of computing boolean cube eigenvalues, via Eqs. (5) and (6). In either case, we just take some linear combination of the values of Φ at a grid of points on [−1, 1], spaced apart by ∆ = 2/d. While the coefficients Cd−k,kr (defined in Eq. (7)) are relatively efficient to compute, the change in the sign of Cd−k,kr makes this procedure numerically unstable for large d. Instead, we use Eq. (5) to isolate the alternating part to evaluate in a numerically stable way: Since µk = ( I+T∆ 2 )d−k ( I−T∆ 2 )k Φ(1), we can evaluate Φ̃ def= ( I−T∆ 2 )k Φ via k finite differences, and then compute ( I + T∆ 2 )d−k Φ̃(1) = 1 2d−k d−k∑ r=0 ( d− k r ) Φ̃(1− r∆). (8) When Φ arises from the CK or the NTK of an MLP, all derivatives of Φ at 0 are nonnegative (Thm I.3). Thus intuitively, the finite difference Φ̃ should be also all nonnegative, and this sum can be evaluated without worry about floating point errors from cancellation of large terms. A slightly more clever way to improve the numerical stability when 2k ≤ d is to note that (I + T∆)d−k (I − T∆)k Φ(1) = (I + T∆)d−2k ( I − T 2∆ )k Φ(1) = (I + T∆)d−2k (I − T2∆)k Φ(1). So an improved algorithm is to first compute the kth finite difference (I − T2∆)k with the larger step size 2∆, then compute the sum (I + T∆)d−2k as in Eq. (8). 4 CLARIFYING THE “SIMPLICITY BIAS” OF RANDOM NEURAL NETWORKS As mentioned in the introduction, Valle-Pérez et al. (2018) claims that neural networks are biased toward simple functions. We show that this phenomenon depends crucially on the nonlinearity, the sampling variances, and the depth of the network. In Fig. 1(a), we have repeated their experiment for 104 random functions obtained by sampling relu neural networks with 2 hidden layers, 40 neurons each, following Valle-Pérez et al. (2018)’s architectural choices5. We also do the same for erf networks of the same depth and width, varying as well the sampling variances of the weights and biases, as shown in the legend. As discussed in Valle-Pérez et al. (2018), for relu, there is indeed this bias, where a single function gets sampled more than 10% of the time. However, for erf, as we increase σ2w, we see this bias disappear, and every function in the sample gets sampled only once. This phenomenon can be explained by looking at the eigendecomposition of the CK, which is the Gaussian process kernel of the distribution of the random networks as their hidden widths tend to infinity. In Fig. 
1(b), we plot the normalized eigenvalues {µk/ ∑7 i=0 ( 7 i ) µi}7k=0 for the CKs corresponding to the networks sampled in Fig. 1(a). Immediately, we see that for relu and σ2w = σ 2 b = 2, the degree 0 eigenspace, corresponding to constant functions, accounts for more than 80% of the variance. This means that a typical infinite-width relu network of 2 layers is expected to be almost constant, and this should be even more true after we threshold the network to be a boolean function. On the other hand, for erf and σb = 0, the even degree µks all vanish, and most of the variance comes from degree 1 components (i.e. linear functions). This concentration in degree 1 also lessens as σ2w increases. But because this variance is spread across a dimension 7 eigenspace, we don’t see duplicate function samples nearly as much as in the relu case. As σw increases, we also see the eigenvalues become more equally distributed, which corresponds to the flattening of 5Valle-Pérez et al. (2018) actually performed their experiments over the {0, 1}7 cube, not the {±1}7 cube we are using here. This does not affect our conclusion. See Appendix J for more discussion the probability-vs-rank curve in Fig. 1(a). Finally, we observe that a 32-layer erf network with σ2w = 4 has all its nonzero eigenvalues (associated to odd degrees) all equal (see points marked by ∗ in Fig. 1(b)). This means that its distribution is a "white noise" on the space of odd functions, and the distribution of boolean functions obtained by thresholding the Gaussian process samples is the uniform distribution on odd functions. This is the complete lack of simplicity bias modulo the oddness constraint. However, from the spectral perspective, there is a weak sense in which a simplicity bias holds for all neural network-induced CKs and NTKs. Theorem 4.1 (Weak Spectral Simplicity Bias). Let K be the CK or NTK of an MLP on a boolean cube d. Then the eigenvalues µk, k = 0, . . . , d, satisfy µ0 ≥ µ2 ≥ · · · ≥ µ2k ≥ · · · , µ1 ≥ µ3 ≥ · · · ≥ µ2k+1 ≥ · · · . (9) Even though it’s not true that the fraction of variance contributed by the degree k eigenspace is decreasing with k, the eigenvalue themselves will be in a nonincreasing pattern across even and odd degrees. In fact, if we fix k and let d→∞, then we can show that (Thm I.6) µk = Θ(d −k). Of course, as we have seen, this is a very weak sense of simplicity bias, as it doesn’t prevent “white noise” behavior as in the case of erf CK with large σ2w and large depth. 5 DEEPER NETWORKS LEARN MORE COMPLEX FEATURES In the rest of this work, we compute the eigenvalues µk over the 128-dimensional boolean cube ( d, with d = 128) for a large number of different hyperparameters, and analyze how the latter affect the former. We vary the degree k ∈ [0, 8], the nonlinearity between relu and erf, the depth (number of hidden layers) from 1 to 128, and σ2b ∈ [0, 4]. We fix σ2w = 2 for relu kernels, but additionally vary σ2w ∈ [1, 5] for erf kernels. Comprehensive contour plots of how these hyperparameters affect the kernels are included in Appendix D, but in the main text we summarize several trends we see. We will primarily measure the change in the spectrum by the degree k fractional variance, which is just degree k fractional variance def= ( d k ) µk∑d i=0 ( d i ) µi . This terminology comes from the fact that, if we were to sample a function f from a Gaussian process with kernel K, then we expect that r% of the total variance of f comes from degree k components of f , where r% is the degree k fractional variance. 
If we were to try to learn a homogeneous degree-k polynomial using a kernel K, intuitively we should try to choose K such that its µk is maximized, relative to other eigenvalues. Fig. 3(a) shows that this is indeed the case even with neural networks: over a large number of different hyperparameter settings, degree k fractional variance is inversely related to the validation loss incurred when learning a degree k polynomial. However, this plot also shows that there does not seem like a precise, clean relationship between fractional variance and validation loss. Obtaining a better measure for predicting generalization is left for future work. Before we continue, we remark that the fractional variance of a fixed degree k converges to a fixed value as the input dimension d→∞: Theorem 5.1 (Asymptotic Fractional Variance). Let K be the CK or NTK of an MLP on a boolean cube d. ThenK can be expressed asK(x, y) = Φ(〈x, y〉/d) for some analytic function Φ : R→ R. If we fix k and let the input dimension d→∞, then the fractional variance of degree k converges to (k!)−1Φ(k)(0)/Φ(1) = (k!)−1Φ(k)(0)∑ j≥0(j!) −1Φ(j)(0) where Φ(k) denotes the kth derivative of Φ. For the fractional variances we compute in this paper, their values at d = 128 are already very close to their d→∞ limit, so we focus on the d = 128 case experimentally. If K were to be the CK or NTK of a relu or erf MLP, then we find that for higher k, the depth of the network helps increase the degree k fractional variance. In Fig. 2(a) and (b), we plot, for each degree k, the depth that (with some combination of other hyperparameters like σ2b ) achieves this maximum, for respectively relu and erf kernels. Clearly, the maximizing depths are increasing with k for relu, and also for erf when considering either odd k or even k only. The slightly differing behavior between even and odd k is expected, as seen in the form of Thm 4.1. Note the different scales of y-axes for relu and erf — the depth effect is much stronger for erf than relu. For relu NTK and CK, σ2b = 0 maximizes fractional variance in general, and the same holds for erf NTK and CK in the odd degrees (see Appendix D). In Fig. 2(c) and Fig. 2(d) we give a more fine-grained look at the σ2b = 0 slice, via heatmaps of fractional variance against degree and depth. Brighter color indicates higher variance, and we see the optimal depth for each degree k clearly increases with k for relu NTK, and likewise for odd degrees of erf NTK. However, note that as k increases, the difference between the maximal fractional variance and those slightly suboptimal becomes smaller and smaller, reflected by suppressed range of color moving to the right. The heatmaps for relu and erf CKs look similar and are omitted. We verify this increase of optimal depth with degree in Fig. 3(b). There we have trained relu networks of varying depth against a ground truth multilinear polynomial of varying degree. We see clearly that the optimal depth is increasing with degree. We also verify this phenomenon when the input distribution changes to the standard Gaussian or the uniform distribution over the sphere √ dSd−1; see Fig. 6. Note that implicit in our results here is a highly nontrivial observation: Past some point (the optimal depth), high depth can be detrimental to the performance of the network, beyond just the difficulty to train, and this detriment can already be seen in the corresponding NTK or CK. In particular, it’s not true that the optimal depth is infinite. 
We confirm the existence of such an optimal depth even in real distributions like MNIST and CIFAR10; see Fig. 7. This adds significant nuance to the folk wisdom that “depth increases expressivity and allows neural networks to learn more complex features.” 6 NTK FAVORS MORE COMPLEX FEATURES THAN CK We generally find the degree k fractional variance of NTK to be higher than that of CK when k is large, and vice versa when k is small, as shown in Fig. 4. This means that, if we train only the last layer of a neural network (i.e. CK dynamics), we intuitively should expect to learn simpler features better, while, if we train all parameters of the network (i.e. NTK dynamics), we should expect to learn more complex features better. Similarly, if we were to sample a function from a Gaussian process with the CK as kernel (recall this is just the distribution of randomly initialized infinite width MLPs (Lee et al., 2018)), this function is more likely to be accurately approximated by low degree polynomials than the same with the NTK. We verify this intuition by training a large number of neural networks against ground truth functions of various homogeneous polynomials of different degrees, and show a scatterplot of how training the last layer only measures against training all layers (Fig. 3(c)). This phenomenon remains true over the standard Gaussian or the uniform distribution on the sphere (Fig. 8). Consistent with our theory, the only place training the last layer works meaningfully better than training all layers is when the ground truth is a constant function. However, we reiterate that fractional variance is an imperfect indicator of performance. Even though for erf neural networks and k ≥ 1, degree k fractional variance of NTK is not always greater than that of the CK, we do not see any instance where training the last layer of an erf network is better than training all layers. We leave an investigation of this discrepancy to future work. 7 PREDICTING THE MAXIMUM LEARNING RATE In any setup that tries to push deep learning benchmarks, learning rate tuning is a painful but indispensable part. In this section, we show that our spectral theory can accurately predict the maximal nondiverging learning rate over real datasets as well as toy input distributions, which would help set the correct upper limit for a learning rate search. By Jacot et al. (2018), in the limit of large width and infinite data, the function g : X → R represented by our neural network evolves like gt+1 = gt − 2αK(gt − g∗), t = 0, 1, 2, . . . , (10) when trained under full batch GD (with the entire population) with L2 loss L(f, g) = Ex∼X (f(x)− g(x))2, ground truth g∗, and learning rate α, starting from randomly initialization. If we train only the last layer, then K is the CK; if we train all layers, then K is the NTK. Given an eigendecomposition of K as in Eq. (1), if g0 − g∗ = ∑ i aiui is the decomposition of g 0 in the eigenbasis {ui}i, then one can easily deduce that gt − g∗ = ∑ i ai(1− 2αλi)tui. Consequently, we must have α < (maxi λi) −1 in order for Eq. (10) to converge 6 When the input distribution is the uniform distribution over d, the maximum learning rate is max(µ0, µ1) by Thm 4.1. By Thm 5.1, as long as the Φ function corresonding to K has Φ(0) 6= 0, when d is large, we expect µ0 ≈ Φ(0) but µ1 ∼ d−1Φ′(0) µ0. Therefore, we should predict 1Φ(0) for the maximal learning rate when training on the boolean cube. However, as Fig. 
5 shows, this prediction is accurate not only for the boolean cube, but also over the sphere, the standard Gaussian, and even MNIST and CIFAR10! 8 CONCLUSION In this work, we have taken a first step at studying how hyperparameters change the initial distribution and the generalization properties of neural networks through the lens of neural kernels and their spectra. We obtained interesting insights by computing kernel eigenvalues over the boolean cube and relating them to generalization through the fractional variance heuristic. While it inspired valid predictions that are backed up by experiments, fractional variance is clearly just a rough indicator. We hope future work can refine on this idea to produce a much more precise prediction of test loss. Nevertheless, we believe the spectral perspective is the right line of research that will not only shed light on mysteries in deep learning but also inform design choices in practice. A RELATED WORKS The Gaussian process behavior of neural networks was found by Neal (1995) for shallow networks and then extended over the years to different settings and architectures (Williams, 1997; Le Roux and Bengio, 2007; Hazan and Jaakkola, 2015; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018). This connection was exploited implicitly or explicitly to build new models (Cho and Saul, 2009; Lawrence and Moore, 2007; Damianou and Lawrence, 2013; Wilson et al., 2016a;b; Bradshaw et al., 2017; van der Wilk et al., 2017; Kumar et al., 2018; Blomqvist et al., 2018; Borovykh, 2018; Garriga-Alonso et al., 2018; Novak et al., 2018; Lee et al., 2018). The Neural Tangent Kernel is a much more recent discovery by Jacot et al. (2018) and later Allen-Zhu et al. (2018a;c;b); Du et al. (2018); Arora et al. (2019b); Zou et al. (2018) came upon the same reasoning independently. Like CK, NTK has also been applied toward building new models or algorithms (Arora et al., 2019a; Achiam et al., 2019). Closely related to the discussion of CK and NTK is the signal propagation literature, which tries to understand how to prevent pathological behaviors in randomly initialized neural networks when they are deep (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017; 2018; Hanin, 2018; Hanin and Rolnick, 2018; Chen et al., 2018; Yang et al., 2019; Pennington et al., 2017a; Hayou et al., 2018; Philipp and Carbonell, 2018). This line of work can trace its original at least to the advent of the Glorot and He initialization schemes for deep networks (Glorot and Bengio, 2010; He et al., 2015). The investigation of forward signal propagation, or how random neural networks change with depth, corresponds to studying the infinite-depth limit of CK, and the investigation of backward signal propagation, or how gradients of random networks change with depth, corresponds to studying the infinite-depth limit of NTK. Some of the quite remarkable results from this literature includes how to train a 10,000 layer CNN (Xiao et al., 2017) and that, counterintuitively, batch normalization causes gradient explosion (Yang et al., 2019). This signal propagation perspective can be refined via random matrix theory (Pennington et al., 2017a; 2018). In these works, free probability is leveraged to compute the singular value distribution of the input-output map given by the random neural network, as the input dimension and width tend to infinity together. 
Other works also investigate various questions of neural network training and generalization from the random matrix perspective (Pennington and Worah, 2017; Pennington and Bahri, 2017; Pennington and Worah, 2018). Yang (2019) presents a common framework, known as Tensor Programs, unifying the GP, NTK, signal propagation, and random matrix perspectives, as well as extending them to new scenarios, like recurrent neural networks. It proves the existence of and allows the computation of a large number of infinite-width limits (including ones relevant to the above perspectives) by expressing the quantity of interest as the output of a computation graph and then manipulating the graph mechanically. Several other works also adopt a spectral perspective on neural networks (Candès, 1999; Sonoda and Murata, 2017; Eldan and Shamir, 2016; Barron, 1993; Xu et al., 2018; Zhang et al., 2019; Xu et al., 2019; Xu, 2018); here we highlight a few most relevant to us. Rahaman et al. (2018) studies the real Fourier frequencies of relu networks and perform experiments on real data as well as synthetic ones. They convincingly show that relu networks learn low frequencies components first. They also investigate the subtleties when the data manifold is low dimensional and embedded in various ways in the input space. In contrast, our work focuses on the spectra of the CK and NTK (which indirectly informs the Fourier frequencies of a typical network). Nevertheless, our results are complementary to theirs, as they readily explain the low frequency bias in relu that they found. Karakida et al. (2018) studies the spectrum of the Fisher information matrix, which share the nonzero eigenvalues with the NTK. They compute the mean, variance, and maximum of the eigenvalues Fisher eigenvalues (taking the width to infinity first, and then considering finite amount of data sampled iid from a Gaussian). In comparison, our spectral results yield all eigenvalues of the NTK (and thus also all nonzero eigenvalues of the Fisher) as well as eigenfunctions. Finally, we note that several recent works (Xie et al., 2016; Bietti and Mairal, 2019; Basri et al., 2019; Ghorbani et al., 2019) studied one-hidden layer neural networks over the sphere, building on Smola et al. (2001)’s observation that spherical harmonics diagonalize dot product kernels, with the latter two concurrent to us. This is in contrast to the focus on boolean cube here, which allows us to study the fine-grained effect of hyperparameters on the spectra, leading to a variety of insights into neural networks’ generalization properties. B UNIVERSALITY OF OUR BOOLEAN CUBE OBSERVATIONS IN OTHER INPUT DISTRIBUTIONS Using the spectral theory we developed in this paper, we made three observations, that can be roughly summarized as follows: 1) the simplicity bias noted by Valle-Pérez et al. (2018) is not universal; 2) for each function of fixed “complexity” there is an optimal depth such that networks shallower or deeper will not learn it as well; 3) training last layer only is better than training all layers when learning “simpler” features, and the opposite is true for learning “complex” features. In this section, we discuss the applicability of these observations to distributions that are not uniform over the boolean cube: in particular, the uniform distribution over the sphere √ dSd−1, the standard Gaussian N (0, Id), as well as realistic data distributions such as MNIST and CIFAR10. Simplicity bias The simplicity bias noted by Valle-Pérez et al. (2018), in particular Fig. 
1, depends on the finiteness of the boolean cube as a domain, so we cannot effectively test this on the distributions above, which all have uncountable support. Optimal depth With regard to the second observation, we can test whether an optimal depth exists for learning functions over the distributions above. Since polynomial degrees remain the natural indicator of complexity for the sphere and the Gaussian (see Appendices H.2 and H.3 for the relevant spectral theory), we replicated the experiment in Fig. 3(b) for these distributions, using the same ground truth functions of polynomials of different degrees. The results are shown in Fig. 6. We see the same phenomenon as in the boolean cube case, with an optimal depth for each degree, and with the optimal depth increasing with degree. For MNIST and CIFAR10, the notion of “feature complexity” is less clear, so we will not test the hypothesis that “optimal depth increases with degree” for these distributions but only test for the existence of the optimal depth for the ground truth marked by the labels of the datasets. We do so by training a large number of MLPs of varying depth on these datasets until convergence, and plot the results in Fig. 7. This figure clearly shows that such an optimal depth exists, such that shallower or deeper networks do monotonically worse as the depth diverge away from this optimal depth. Again, the existence of the optimal depth is not obvious at all, as conventional deep learning wisdom would have one believe that adding depth should always help. Training last layer only vs training all layers Finally, we repeat the experiment in Fig. 3(c) for the sphere and the standard Gaussian, with polynomials of different degrees as ground truth functions. The results are shown in Fig. 8. We see the same phenomenon as in the boolean cube case: for degree 0 polynomials, training last layer works better in general, but for higher degree polynomials, training all layers fares better. Note that, unlike the sphere and the Gaussian, whose spectral theory tells us that (harmonic) polynomial degree is a natural notion of complexity, for MNIST and CIFAR10 we have much less clear idea of what a “complex” or a “simple” feature is. Therefore, we did not attempt a similar experiment on these datasets. C THEORETICAL VS EMPIRICAL MAX LEARNING RATES UNDER DIFFERENT PREPROCESSING FOR MNIST AND CIFAR10 In the main text Fig. 5, on the MNIST and CIFAR10 datasets, we preprocessed the data by centering and normalizing to the sphere (see Appendix E.2 for a precise description). With this preprocessing, our theory accurately predicts the max learning rate in practice. In general, if we go by another preprocessing, such as PCA or ZCA, or no preprocessing, our theoretical max learning rate 1/Φ(0) is less accurate but still correlated in general. The only exception seems to be relu networks on PCA- or ZCA- preprocessed CIFAR10. See Fig. 9. Theoretical vs empirical max learning rate under different preprocessing D VISUALIZING THE SPECTRAL EFFECTS OF σ2w, σ 2 b , AND DEPTH While in the main text, we summarized several trends of interest kn several plots, they do not give the entire picture of how eigenvalues and fractional variances vary with σ2w, σ 2 b , and depth. Here we try to present this relationship more completely in a series of contour plots. Fig. 10 shows how varying depth and σ2b changes the fractional variances of each degree, for relu CK and NTK. 
We are fixing σ2w = 2 in the CK plots, as the fractional variances only depend on the ratio σ 2 b/σ 2 w; even though this is not true for relu NTK, we fix σ2w = 2 as well for consistency. For erf, however, the fractional variance will crucially depend on both σ2w and σ 2 b , so we present 3D contour plots of how σ 2 w, σ 2 b , and depth changes fractional variance in Fig. 13. Complementarily, we also show in Figs. 11 and 12 a few slices of these 3D contour plots for different fixed values of σ2b , for erf CK and NTK. E EXPERIMENTAL DETAILS E.1 FIG. 3 Fig. 3(a), (b) and (c) differ in the set of hyperparameters they involve (to be specified below), but in all of them, we train relu networks against a randomly generated ground truth multilinear polynomial, with input space 128 and L2 loss L(f) = Ex∈ d(f(x)− f∗(x))2. Training We perform SGD with batch size 1000. In each iteration, we freshly sample a new batch, and we train for a total of 100,000 iterations, so the network potentially sees 108 different examples. At every 1000 iterations, we validate the current network on a freshly drawn batch of 10,000 examples. We thus record a total of 100 validation losses, and we take the lowest to be the “best validation loss.” Generating the Ground Truth Function The ground truth function f∗(x) is generated by first sampling 10 monomials m1, . . . ,m10 of degree k, then randomly sampling 10 coefficients a1, . . . , a10 for them. The final function is obtained by normalizing {ai} such that the sum of their squares is 1: f∗(x) def = 10∑ i=1 aimi/ 10∑ j=1 a2j . (11) Hyperparameters for Fig. 3(a) • The learning rate is half the theoretical maximum learning rate7 12 max(µ0, µ1) −1 • Ground truth degree k ∈ {0, 1, 2, 3} • Depth ∈ {0, . . . , 10} • activation = relu • σ2w = 2 • σ2b = 0 • width = 1000 • 10 random seeds per hyperparameter combination • training last layer (marked “ck”), or all layers (marked “ntk”). In the latter case, we use the NTK parametrization of the MLP (Eq. (MLP)). Hyperparameters for Fig. 3(b) • The learning rate is half the theoretical maximum learning rate 12 max(µ0, µ1) −1 • Ground truth degree k ∈ {0, 1, 2, 3} • Depth ∈ {0, . . . , 10} • activation = relu • σ2w = 2 • σ2b = 0 • width = 1000 • 100 random seeds per hyperparameter combination • training last layer weight and bias only 7Note that, because the L2 loss here is L(f) = Ex∈ d(f(x) − f∗(x))2, the maximum learning rate is λ−1max = max(µ0, µ1) −1 (see Thm 4.1). If we instead adopt the convention L(f) = Ex∈ d 12 (f(x)− f ∗(x))2, then the maximum learning rate would be 2λ−1max = 2max(µ0, µ1)−1 Algorithm 1 Binary Search for Empirical Max Learning Rate upper ← 16× theoretical max lr lower ← 0 tol← 0.01× theoretical max lr while |upper − lower| > tol do α← (upper + lower)/2 Run SGD with learning rate α for 1000 iterations if loss diverges then upper ← α else lower ← α end if end while Output: upper Hyperparameters for Fig. 3(c) • The learning rate ∈ {0.05, 0.1, 0.5} • Ground truth degree k ∈ {0, 1, . . . , 6} • Depth ∈ {1, . . . , 5} • activation ∈ {relu, erf} • σ2w = 2 for relu, but σ2w ∈ {1, 2, . . . , 5} for erf • σ2b ∈ {0, 1, . . . , 4} • width = 1000 • 1 random seed per hyperparameter combination • Training all layers, using the NTK parametrization of the MLP (Eq. (MLP)) E.2 MAX LEARNING RATE EXPERIMENTS Here we describe the experimental details for the experiments underlying Figs. 5 and 9. Theoretical max learning rate For a fixed setup, we compute Φ according to Eq. (CK) (if only last layer is trained) or Eq. 
(NTK) (if all layers are trained). For ground truth problems where the output is n-dimensional, the theoretical max learning rate is nΦ(0)−1; in particular, the max learning rates for MNIST and CIFAR10 are 10 times those for boolean cube, sphere, and Gaussian. This is because the kernel for an multi-output problem effectively becomes 1 n K⊕n = 1 n K 0 0 0 . . . 0 0 0 K where the 1n factor is due to the 1 n factor in the scaled square loss L(f, f ∗) = Ex∼X 1n ∑n i=1(f(x)i− f∗(x)i) 2. The top eigenvalue for 1nK ⊕n is just 1n times the top eigenvalue for K. Empirical max learning rate For a fixed setup, we perform binary search for the empirical max learning rate as in Algorithm 1. Preprocessing In Fig. 5, for MNIST and CIFAR10, we center and project each image onto the sphere √ dSd−1, where d = 28× 28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. More precisely, we compute the average image x̄ over the entire dataset, and we preprocess each image x as √ d x−x̄‖x−x̄‖ . In Fig. 9, there are three different preprocessing schemes. For “no preprocessing,” we load the MNIST and CIFAR10 data as is. In “PCA128,” we take the top 128 eigencomponents of the data, so that the data has only 128 dimensions. In “ZCA128,” we take the top 128 eigencomponents but rotate it back to the original space, so that the data still has dimension d, where d = 28×28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. Hyperparameters • Target function: For boolean cube, sphere, and standard Gaussian, we randomly sample a degree 1 polynomial as in Eq. (11). For MNIST and CIFAR10, we just use the label in the dataset, encoded as a one-hot vector for square-loss regression. • Depth ∈ {1, 2, 4, 8, 16} • activation ∈ {relu, erf} • σ2w = 2 for relu, but σ2w ∈ {1, 2, . . . , 5} for erf • σ2b ∈ {1, . . . , 4} • width = 1000 • 1 random seed per hyperparameter combination • Training last layer (CK) or all layers (NTK). In the latter case, we use the NTK parametriza- tion of the MLP (Eq. (MLP)). F REVIEW OF THE THEORY OF NEURAL TANGENT KERNELS F.1 CONVERGENCE OF INFINITE-WIDTH KERNELS AT INITIALIZATION Conjugate Kernel Via a central-limit-like intuition, each unit hl(x)α of Eq. (MLP) should behave like a Gaussian as width nl−1 →∞, as it is a sum of a large number of roughly independent random variables (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017). The devil, of course, is in what “roughly independent” means and how to apply the central limit theorem (CLT) to this setting. It can be done, however, (Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018), and in the most general case, using a “Gaussian conditioning” technique, this result can be rigorously generalized to almost any architecture Yang (2019). In any case, the consequence is that, for any finite set S ⊆ X , {hlα(x)}x∈S converges in distribution to N (0,Σl(S, S)), as min{n1, . . . , nl−1} → ∞, where Σl is the CK as given in Eq. (CK). Neural Tangent Kernel By a slightly more involved version of the “Gaussian conditioning” technique, Yang (2019) also showed that, for any x, y ∈ X , 〈∇θhL(x),∇θhL(y)〉 converges almost surely to ΘL(x, y) as the widths tend to infinity, where Θl is the NTK as given in Eq. (NTK). F.2 FAST EVALUATIONS OF CK AND NTK For certain φ like relu or erf, Vφ and V′φ can be evaluated very quickly, so that both the CK and NTK can be computed in O(|X |2L) time, where X is the set of points we want to compute the kernel function over, and L is the number of layers. Fact F.1 (Cho and Saul (2009)). 
For any kernel K Vrelu(K)(x, x ′) = 1 2π ( √ 1− c2 + (π − arccos c)c) √ K(x, x)K(x′, x′) V′relu(K)(x, x ′) = 1 2π (π − arccos c) where c = K(x, x′)/ √ K(x, x)K(x′, x′). Fact F.2 (Neal (1995)). For any kernel K, Verf(K)(x, x ′) = 2 π arcsin K(x, x′)√ (K(x, x) + 0.5)(K(x′, x′) + 0.5) V′erf(K)(x, x ′) = 4 π √ (1 + 2K(x, x))(1 + 2K(x′, x′))− 4K(x, x′)2 . Fact F.3. Let φ(x) = exp(x/σ) for some σ > 0. For any kernel K, Vφ(K)(x, x ′) = exp ( K(x, x) + 2K(x, x′) +K(x′, x′) 2σ2 ) . F.3 LINEAR EVOLUTION OF NEURAL NETWORK UNDER GD Remarkably, the NTK governs the evolution of the neural network function under gradient descent in the infinite-width limit. First, let’s consider how the parameters θ and the neural network function f evolve under continuous time gradient flow. Suppose f is only defined on a finite input space X = {x1, . . . , xk}. We will visualize f(X ) = f(x1) ... f(xk) , ∇fL = ∂L ∂f(x1) ... ∂L ∂f(xk) , θ = θ1 ... θn , ∇θf = ∂f(x1) ∂θ1 · · · ∂f(x k) ∂θ1 ... . . . ... ∂f(x1) ∂θn · · · ∂f(x k) ∂θn (best viewed in color). Then under continuous time gradient descent with learning rate η, ∂t θt = −η∇θL(ft) = −η ∇θft · ∇fL(ft) , ∂t ft = ∇θft > · ∂t θt = −η ∇θft > · ∇θft · ∇fL(ft) = −η Θt · ∇fL(ft) (12) where Θt = ∇θf>t · ∇θft ∈ Rk×k is of course the (finite width) NTK. These equations can be visualized as ∂t = −η · , ∂t = · ∂t = −η · · = −η · Thus f undergoes kernel gradient descent with (functional) loss L(f) and kernel Θt. This kernel Θt of course changes as f evolves, but remarkably, it in fact stays constant for f being an infinitely wide MLP (Jacot et al., 2018): ∂tft = −ηΘ · ∇fL(ft), (Training All Layers) where Θ is the infinite-width NTK corresponding to f . A similar equation holds for the CK Σ if we train only the last layer, ∂tft = −ηΣ · ∇fL(ft). (Training Last Layer) If L is the square loss against a ground truth function f∗, then ∇fL(ft) = 12k∇f‖ft − f ∗‖2 = 1 k (ft−f ∗), and the equations above become linear differential equations. However, typically we only have a training set X train ⊆ X of size far less than |X |. In this case, the loss function is effectively L(f) = 1 2|X train| ∑ x∈X train (f(x)− f∗(x))2, with functional gradient ∇fL(f) = 1 |X train| Dtrain · (f − f∗), where Dtrain is a diagonal matrix of size k × k whose diagonal is 1 on x ∈ X train and 0 else. Then our function still evolves linearly ∂tft = −η(K ·Dtrain) · (ft − f∗) (13) where K is the CK or the NTK depending on which parameters are trained. F.4 RELATIONSHIP TO GAUSSIAN PROCESS INFERENCE. Recall that the initial f0 in Eq. (13) is distributed as a Gaussian process N (0,Σ) in the infinite width limit. As Eq. (13) is a linear differential equation, the distribution of ft will remain a Gaussian process for all t, whether K is CK or NTK. Under suitable conditions, it can be shown that (Lee et al., 2019), in the limit as t→∞, if we train only the last layer, then the resulting function f∞ is distributed as a Gaussian process with mean f̄∞ given by f̄∞(x) = Σ(x,X train)Σ(X train,X train)−1f∗(X train) and kernel Var f∞ given by Var f∞(x, x ′) = Σ(x, x′)− Σ(x,X train)Σ(X train,X train)−1Σ(X train, x′). These formulas precisely described the posterior distribution of f given prior N (0,Σ) and data {(x, f∗(x))}x∈X train . If we train all layers, then similarly as t→∞, the function f∞ is distributed as a Gaussian process with mean f̄∞ given by (Lee et al., 2019) f̄∞(x) = Θ(x,X train)Θ(X train,X train)−1f∗(X train). 
This is, again, the mean of the Gaussian process posterior given prior N (0,Θ) and the training data {(x, f∗(x))}x∈X train . However, the kernel of f∞ is no longer the kernel of this posterior, but rather is an expression involving both the NTK Θ and the CK Σ; see Lee et al. (2019). In any case, we can make the following informal statement in the limit of large width Training the last layer (resp. all layers) of an MLP infinitely long, in expectation, yields the mean prediction of the GP inference given prior N (0,Σ) (resp. N (0,Θ)). G A BRIEF REVIEW OF HILBERT-SCHMIDT OPERATORS AND THEIR SPECTRAL THEORY In this section, we briefly review the theory of Hilbert-Schmidt kernels, and more importantly, to properly define the notion of eigenvalues and eigenfunctions. A function K : X 2 → R is called a Hilbert-Schmidt operator if K ∈ L2(X × X ), i.e. ‖K‖2HS def = E x,y∼X K(x, y)2 <∞. ‖K‖2HS is known as the Hilbert-Schmidt norm of K. K is called symmetric if K(x, y) = K(y, x) and positive definite (resp. semidefinite) if E x,y∼X f(x)K(x, y)f(y) > 0 (resp. ≥ 0) for all f ∈ L2(X ) not a.e. zero. A spectral theorem (Mercer’s theorem) holds for Hilbert-Schmidt operators. Fact G.1. If K is a symmetric positive semidefinite Hilbert-Schmidt kernel, then there is a sequence of scalars λi ≥ 0 (eigenvalues) and functions fi ∈ L2(X ) (eigenfunctions), for i ∈ N, such that ∀i, j, 〈fi, fj〉 = I(i = j), and K(x, y) = ∑ i∈N λifi(x)fi(y) where the convergence is in L2(X × X ) norm. This theorem allows us to speak of the eigenfunctions and eigenvalues, which are important for training and generalization considerations when K is a kernel used in machine learning, as discussed in the main text. A sufficient condition for K to be a Hilbert-Schmidt kernel in our case (concerning only probability measure on X ) is just that K is bounded. All Ks in this paper satisfy this property. H EIGENDECOMPOSITION OF NEURAL KERNEL ON DIFFERENT DOMAINS H.1 BOOLEAN CUBE From the Fourier Series Perspective. We continue from the discussion of the boolean cube in the main text. Recall that T∆ is the shift operator on functions that sends Φ(·) to Φ(· −∆). Notice that, if we let Φ(t) = eκt for some κ ∈ C, then T∆Φ(s) = e−κ∆ · eκt. Thus Φ is an “eigenfunction” of the operator T∆ with eigenvalue e−κ∆. In particular, this implies that Proposition H.1. Suppose Φ(t) = et/σ 2 , as in the case whenK is the CK or NTK of a 1-layer neural network with nonlinearity exp(·/σ), up to multiplicative constant (Fact F.3). Then the eigenvalue µk over the boolean cube d equals µk = 2 −d(1− exp(−∆/σ2))k(1 + exp(−∆/σ2))d−k · exp(1/σ2) where ∆ = 2/d. It would be nice if we can express any Φ as a linear combination of exponentials, so that Eq. (5) simplifies in the fashion of Prop H.1 — this is precisely the idea of Fourier series. We will use the theory of Fourier analysis on the circle, and for this we need to discuss periodic functions. Let Φ̃ : [−2, 2]→ R be defined as Φ̃(x) = Φ(x) if x ∈ [−1, 1] Φ(2− x) if x ∈ [1, 2] Φ(−2− x) if x ∈ [−2,−1]. See Fig. 14 for an example illustration. Note that if Φ is continuous on [−1, 1], then Φ̃ is continuous as a periodic function on [−2, 2]. The Fourier basis on functions over [−2, 2] is the collection {t 7→ e 12πist}s∈Z. Under generic conditions (for example if Ψ ∈ L2[−2, 2]), a function Ψ has an associated Fourier series ∑ s∈Z Ψ̂(s)e 1 2πist. We briefly review basic facts of Fourier analysis on the circle. Recall the following notion of functions of bounded variation. Definition H.2. 
A function f : [a, b]→ R is said to have bounded variation if sup P nP−1∑ i=0 |f(xi+1)− f(xi)| <∞, where the supremum is taken over all partitions P of the interval [a, b], P = {x0, . . . , xnP }, x0 ≤ x1 ≤ · · · ≤ xnP . Intuitively, a function of bounded variation has a graph (in [a, b]× R) of finite length. Fact H.3 (Katznelson (2004)). A bounded variation function f : [−2, 2]→ R that is periodic (i.e. f(−2) = f(2)) has a pointwise-convergent Fourier series: lim T→∞ ∑ s∈[−T,T ] Ψ̂(s)e 1 2πist → Ψ(t), ∀t ∈ [−2, 2]. From this fact easily follows the following lemma. Lemma H.4. Suppose Φ is continuous and has bounded variation on [−1, 1]. Then Φ̃ is also continuous and has bounded variation, and its Fourier Series (on [−2, 2]) converges pointwise to Φ̃. Proof. Φ̃ is obviously continuous and has bounded variation as well, and from Fact H.3, we know a periodic continuous function with bounded variation has a pointwise-convergent Fourier Series. Certainly, T∆ sends continuous bounded variation functions to continuous bounded variation functions. Because T∆e 1 2πist = e− 1 2πis∆e 1 2πist, T∆ ∑ s∈Z Ψ̂(s)e 1 2πist = ∑ s∈Z Ψ̂(s)e− 1 2πis∆e 1 2πist whenever both sides are well defined. If Ψ is continuous and has bounded variation then T∆Ψ is also continuous and has bounded variation, and thus its Fourier series, the RHS above, converges pointwise to T∆Ψ. Now, observe (I − T∆)k(I + T∆)d−kΦ̃(x) = d∑ r=0 Cd−k,kr Φ̃ (x− r∆) (I − T∆)k(I + T∆)d−kΦ̃(1) = d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) = µk Expressing the LHS in Fourier basis, we obtain Theorem H.5. µk = ∑ s∈Z is(1− e− 12πis∆)k(1 + e− 12πis∆)d−k ˆ̃Φ(s) where ˆ̃Φ(s) = 1 4 ∫ 2 −2 Φ̃(t)e− 1 2πist dt = 1 4 ∫ 1 −1 Φ(t)(e− 1 2πist + (−1)se 12πist) dt = { 1 2 ∫ 1 −1 Φ(t) cos( 1 2πst) dt if s is even − i2 ∫ 1 −1 Φ(t) sin( 1 2πst) dt if s is odd denote the Fourier coefficients of Φ̃ on [−2, 2]. (Here i is the imaginary unit here, not an index). Recovering the values of Φ given the eigenvalues µ0, . . . , µd. Conversely, given eigenvalues µ0, . . . , µd corresponding to each monomial degree, we can recover the entries of the matrix K. Theorem H.6. For any x, y ∈ d with Hamming distance r, K(x, y) = Φ (( d 2 − r ) ∆ ) = d∑ k=0 Cd−r,rk µk, where Cd−r,rk = ∑ j=0(−1)k+j ( d−r j )( r k−j ) as in Eq. (7). Proof. Recall that for any S ⊆ [d], χS(x) = xS is the Fourier basis corresponding to S (see Eq. (3)). Then by converting from the Fourier basis to the regular basis, we get Φ (( d 2 − r ) ∆ ) = K(x, y) for any x, y ∈ d with Hamming distance r = d∑ k=0 µk ∑ |S|=k χS(x)χS(y). If x and y differ on a set T ⊆ [d], then we can simplify the inner sum Φ (( d 2 − r ) ∆ ) = d∑ k=0 µk ∑ |S|=k (−1)|S∩T | = d∑ k=0 µkC d−r,r k . Remark H.7. If we let T be the operator that sends µ• 7→ µ•+1, then we have the following operator expression Φ (( d 2 − r ) ∆ ) = [(1 + T )d−r(1− T )rµ]0 Remark H.8. The above shows that the matrix C = {Cd−r,rk }dk,r=0 satisfies C2 = 2dI. H.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will consider Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics. We review these results briefly below, as we will build on them to deduce spectral information of K on isotropic Gaussian distributions. Review: spherical harmonics and Gegenbauer polynomials. 
Spherical harmonics are L2 functions on Sd−1 that are eigenfunctions of the Laplace-Beltrami operator ∆Sd−1 of Sd−1. They can be described as the restriction of certain homogeneous polynomials in Rd to Sd−1. Denote byHd−1,(l) the space of spherical harmonics of degree l on sphere Sd−1. Then we have the orthogonal decomposition L2(Sd−1) ∼= ⊕∞ l=0Hd−1,(l). It is a standard fact that dimHd−1,(l) = ( d−1+l d−1 ) − ( d−3+l d−1 ) . There is a special class of spherical harmonics called zonal harmonics that can be represented as x 7→ p(〈x, y〉) for specific polynomials p : R→ R, and that possess a special reproducing property which we will describe shortly. Intuitively, the value of any zonal harmonics only depends on the “height” of x along some fixed axis y, so a typical zonal harmonics looks like Fig. 16. The polynomials pmust be one of the Gegenbauer polynomials. Gegenbauer polynomials {C(α)l (t)}∞l=0 are orthogonal polynomials with respect to the measure (1− t2)α− 12 on [−1, 1] (see Fig. 15 for examples), and here we adopt the convention that∫ 1 −1 C(α)n (t)C (α) l (t)(1− t 2)α− 1 2 dt = π21−2αΓ(n+ 2α) n!(n+ α)[Γ(α)]2 I(n = l). (14) Then for each (oriented) axis y ∈ Sd−1 and degree l, there is a unique zonal harmonic Zd−1,(l)y ∈ Hd−1,(l), Zd−1,(l)y (x) def = c−1d,lC ( d−22 ) l (〈x, y〉) for any x, y ∈ Sd−1, where cd,l = d−2d+2l−2 . Very importantly, they satisfy the following Fact H.9 (Reproducing property (Suetin)). For any f ∈ Hd−1,(m), E z∼Sd−1 Zd−1,(l)y (z)f(z) = f(y)I(l = m) E z∼Sd−1 Zd−1,(l)y (z)Z d−1,(m) x (z) = Z d−1,(l) y (x)I(l = m) = c −1 d,lC ( d−22 ) l (〈x, y〉)I(l = m) We also record a useful fact about Gegenbauer polynomials. Fact H.10 (Suetin). C (α) l (±1) = (±1) l ( l + 2α− 1 l ) By a result of Schoenberg (1942), we have the following eigendecomposition of K on the sphere. Theorem H.11 (Schoenberg). Suppose Φ : [−1, 1]→ R is in L2((1− t2) d−12 −1), so that it has the Gegenbauer expansion Φ(t) a.e. = ∞∑ l=0 alc −1 d,lC ( d−22 ) l (t). Then K has eigenspaces Hd−1,(l)√ d def = {f(x/ √ d) : f ∈ Hd−1,(l)} with corresponding eigenval- ues al.
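For completeness, here is a minimal quadrature sketch (our own helper, not from the paper's released code) of how the coefficients a_l of Thm H.11 can be computed for a given Φ, using the normalization in Eq. (14); as noted in Section 3.5, this route is only numerically practical for moderate d, and we assume d ≥ 3:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gammaln

def sphere_eigenvalue(Phi, d, l):
    """Coefficient a_l in the Gegenbauer expansion of Phi (Thm H.11), i.e. the
    eigenvalue of K(x, y) = Phi(<x, y>/d) on degree-l spherical harmonics over
    the radius-sqrt(d) sphere. Here alpha = (d-2)/2, and the normalization
    constant comes from Eq. (14)."""
    alpha = (d - 2) / 2.0
    c_dl = (d - 2) / (d + 2 * l - 2)
    # log of the Gegenbauer normalization constant in Eq. (14)
    logN = (np.log(np.pi) + (1 - 2 * alpha) * np.log(2.0) + gammaln(l + 2 * alpha)
            - gammaln(l + 1) - np.log(l + alpha) - 2 * gammaln(alpha))
    integrand = lambda t: Phi(t) * eval_gegenbauer(l, alpha, t) * (1 - t * t) ** (alpha - 0.5)
    integral, _ = quad(integrand, -1.0, 1.0)
    return c_dl * integral * np.exp(-logN)
```

As a sanity check, Φ(t) = t should return a_1 ≈ 1/d, the eigenvalue of the linear kernel ⟨x, y⟩/d on degree-1 spherical harmonics.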
1. What are the main contributions and findings of the paper regarding the conjugate kernel and the neural tangent kernel?
2. How do the authors develop the spectral theory of the CK and NTK on the boolean cube, the sphere, and Gaussian distributions?
3. What are some interesting empirical observations that the authors clarify using the tools they develop?
4. How does the paper contribute to understanding generalization in deep neural networks?
5. What are the strengths and weaknesses of the paper's theoretical analysis and results?
Review
Review
Updates: Thanks for the updates. I find the new theoretical results interesting and potentially useful: they show that, in the large $d$ setting, the spectra of the CKs/NTKs over the boolean cube, the sphere, and the isotropic Gaussian are close to each other in some sense. Thus, I raise my score to weak accept but lower my confidence level, since I am not that familiar with the boolean cube literature.
------------------------------------------------------
The study of extremely over-parameterized networks (i.e. infinite-width networks) has become one of the most active research directions in theoretical deep learning. The key objects in understanding such networks are the conjugate kernel [1, 2] (CK, as defined in the paper) and the Neural Tangent Kernel [3] (NTK). The CK characterizes what the network looks like at initialization (with a connection to Gaussian processes as well), and the NTK is very useful for characterizing the gradient descent training dynamics of large-width networks in the kernel regime. Understanding properties of such kernels, in particular their spectral distributions and eigenspaces, could be an important step towards a finer-grained understanding of generalization in neural networks.
The main contribution of this paper is the development of the spectral theory of the CK and NTK on the boolean cube (with similar or weaker results for the uniform distribution on the sphere and the Gaussian distribution in R^n). More precisely, the authors show that, over the boolean cube, the CK/NTK can be diagonalized in the Fourier basis and the eigenvalues depend only on the frequency (i.e. the degree of the monomials); Thm 3.1. The authors also develop computational tools for computing the spectra; Lemma 3.2.
Using the tools developed in this paper, the authors are able to clarify some of the interesting observations found by other researchers. Most notably, the authors show that the observation in [4] that 'neural networks are biased towards simple functions' is NOT universal. Whether this statement is correct or not depends heavily on the choice of activation function (e.g. relu vs. erf) and hyperparameters (e.g. weight variance, depth). There are also some other interesting empirical findings: the optimal depth of a neural network depends on the complexity (i.e. degree, in the boolean cube setting) of the function to learn; the CK (i.e. training only the last layer) tends to be more useful for learning less complex functions; etc.
Overall, this is a nice paper. I am leaning toward a weak accept.
[1] Amit Daniely, Roy Frostig, and Yoram Singer. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. arXiv:1602.05897 [cs, stat], February 2016.
[2] Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep Neural Networks as Gaussian Processes. In International Conference on Learning Representations, 2018.
[3] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. arXiv:1806.07572 [cs, math, stat], June 2018.
[4] Guillermo Valle-Pérez, Chico Q. Camargo, and Ard A. Louis. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv:1805.08522 [cs, stat], May 2018.
ICLR
Title A Fine-Grained Spectral Perspective on Neural Networks Abstract Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? How to set the range for learning rate tuning? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the Conjugate Kernel, CK, (also called the Neural Network-Gaussian Process Kernel), and the Neural Tangent Kernel, NTK. Roughly, the CK and the NTK tell us respectively “what a network looks like at initialization” and “what a network looks like during and after training.” Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra. 1 INTRODUCTION Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community. Recently, Valle-Pérez et al. (2018) observed that neural networks have a certain “simplicity bias” and proposed this as a solution to the generalization question. One of the ways with which they argued that this bias exists is the following experiment: they drew a large sample of boolean functions by randomly initializing neural networks and thresholding the output. They observed that there is a bias toward some "simple" functions which get sampled disproportionately more often. However, their experiments were only done for relu networks. Can one expect this “simplicity bias” to hold universally, for any architecture? A priori, this seems difficult, as the nonlinear nature seems to present an obstacle in reasoning about the distribution of random networks. However, this question turns out to be more easily treated if we allow the width to go to infinity. A long line of works starting with Neal (1995) and extended recently by Lee et al. (2018); Novak et al. (2018); Yang (2019) have shown that randomly initialized, infinite-width networks are distributed as Gaussian processes. These Gaussian processes also describe finite width random networks well (Valle-Pérez et al., 2018). We will refer to the corresponding kernels as the Conjugate Kernels (CK), following the terminology of Daniely et al. (2016). Given the CK K, the simplicity bias of a wide neural network can be read off quickly from the spectrum of K: If the largest eigenvalue of K accounts for most of trK, then a typical random network looks like a function from the top eigenspace of K. In this paper, we will use this spectral perspective to probe not only the simplicity bias, but more generally, questions regarding how hyperparameters affect the generalization of neural networks. Via the usual connection between Gaussian processes and linear models with features, the CK can be thought of as the kernel matrix associated to training only the last layer of a wide randomly initialized network. 
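To make the function-sampling experiment described above concrete, here is a minimal sketch of drawing boolean functions by thresholding randomly initialized relu networks and counting how often each function appears. The width-40, two-hidden-layer, d = 7 relu setup follows the description above, while the remaining conventions (no biases, sign thresholding, 10,000 samples) are our own assumptions rather than the exact protocol of Valle-Pérez et al. (2018):

```python
import numpy as np
from collections import Counter
from itertools import product

def sample_boolean_function(d=7, width=40, depth=2, rng=None):
    """Threshold a randomly initialized relu MLP on {-1,1}^d to get a boolean
    function, represented as the tuple of its 2^d output signs."""
    rng = rng or np.random.default_rng()
    X = np.array(list(product([-1.0, 1.0], repeat=d)))       # all 2^d inputs
    h, fan_in = X, d
    for _ in range(depth):                                    # hidden layers
        W = rng.standard_normal((fan_in, width))
        h = np.maximum(np.sqrt(2.0 / fan_in) * (h @ W), 0.0)  # sigma_w^2 = 2, no bias
        fan_in = width
    out = np.sqrt(2.0 / width) * (h @ rng.standard_normal(width))
    return tuple(np.sign(out + (out == 0.0)).astype(int))     # break ties toward +1

counts = Counter(sample_boolean_function() for _ in range(10_000))
print(counts.most_common(5))   # a heavy head here reflects the "simplicity bias"
```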
It is a remarkable recent advance (Jacot et al., 2018; Allen-Zhu et al., 2018a;c; Du et al., 2018) that, under a certain regime, a wide neural network of any depth evolves like a linear model even when training all parameters. The associated kernel is call the Neural Tangent Kernel, which is typically different from CK. While its theory was initially derived in the infinite width setting, Lee et al. (2019) confirmed with extensive experiment that this limit is predictive of finite width neural networks as well. Thus, just as the CK reveals information about what a network looks like at initialization, NTK reveals information about what a network looks like after training. As such, if we can understand how hyperparameters change the NTK, we can also hope to understand how they affect the performance of the corresponding finite-width network. Our Contributions In this paper, in addition to showing that the simplicity bias is not universal, we will attempt a first step at understanding the effects of the hyperparameters on generalization from a spectral perspective. At the foundation is a spectral theory of the CK and the NTK on the boolean cube. In Section 3, we show that these kernels, as integral operators on functions over the boolean cube, are diagonalized by the natural Fourier basis, echoing similar results for over the sphere (Smola et al., 2001). We also partially diagonalize the kernels over standard Gaussian, and show that, as expected, the kernels over the different distributions (boolean cube, sphere, standard Gaussian) behave very similarly in high dimensions. However, the spectrum is much easier to compute over the boolean cube: while the sphere and Gaussian eigenvalues would require integration against a kind of polynomials known as the Gegenbauer polynomials, the boolean ones only require calculating a linear combination of a small number of terms. For this reason, in the rest of the paper we focus on analyzing the eigenvalues over the boolean cube. Just as the usual Fourier basis over R has a notion of frequency that can be interpreted as a measure of complexity, so does the boolean Fourier basis (this is just the degree; see Section 3.1). While not perfect, we adopt this natural notion of complexity in this work; a “simple” function is then one that is well approximated by “low frequencies.” This spectral perspective immediately yields that the simplicity bias is not universal (Section 4). In particular, while it seems to hold more or less for relu networks, for sigmoidal networks, the simplicity bias can be made arbitrarily weak by changing the weight variance and the depth. In the extreme case, the random function obtained from sampling a deep erf network with large weights is distributed like a “white noise.” However, there is a very weak sense in which the simplicity bias does hold: the eigenvalues of more “complex” eigenspaces cannot be bigger than those of less “complex” eigenspaces (Thm 4.1). Next, we examine how hyperparameters affect the performance of neural networks through the lens of NTK and its spectrum. To do so, we first need to understand the simpler question of how a kernel affects the accuracy of the function learned by kernel regression. A coarse-grained theory, concerned with big-O asymptotics, exists from classical kernel literature (Yao et al., 2007; Raskutti et al., 2013; Wei et al.; Lin and Rosasco; Schölkopf and Smola, 2002). However, the fine-grained details, required for discerning the effect of hyperparameters, have been much less studied. 
We make a first attempt at a heuristic, fractional variance (i.e. what fraction of the trace of the kernel does an eigenspace contribute), for understanding how a minute change in kernel effects a change in performance. Intuitively, if an eigenspace has very large fractional variance, so that it accounts for most of the trace, then a ground truth function from this eigenspace should be very easy to learn. Using this heuristic, we make two predictions about neural networks, motivated by observations in the spectra of NTK and CK, and verify them with extensive experiments. • Deeper networks learn more complex features, but excess depth can be detrimental as well. Spectrally, depth can increase fractional variance of an eigenspace, but past an optimal depth, it will also decrease it. (Section 5) Thus, deeper is not always better. • Training all layers is better than training just the last layer when it comes to more complex features, but the opposite is true for simpler features. Spectrally, fractional variances of more “complex” eigenspaces for the NTK are larger than the correponding quantities of the CK. (Section 6) Finally, we use our spectral theory to predict the maximal nondiverging learning rate (“max learning rate”) of SGD (Section 7). In general, we will not only verify our theory with experiments on the theoretically interesting distributions, i.e. uniform measures over the boolean cube and the sphere, or the standard Gaussian, but also confirm these findings on real data like MNIST and CIFAR10 1. 1The code for computing the eigenvalues and for reproducing the plots of this paper is available at github. com/jxVmnLgedVwv6mNcGCBy/NNspectra, which will be open sourced upon publication. For space concerns, we review relevant literature along the flow of the main text, and relegate a more complete discussion of the related research landscape in Appendix A. 2 KERNELS ASSOCIATED TO NEURAL NETWORKS As mentioned in the introduction, we now know several kernels associated to infinite width, randomly initialized neural networks. The most prominent of these are the neural tangent kernel (NTK) (Jacot et al., 2018) and the conjugate kernel (CK) (Daniely et al., 2016), which is also called the NNGP kernel (Lee et al., 2018). We briefly review them below. First we introduce the following notation that we will repeatedly use. Definition 2.1. For φ : R→ R, write Vφ for the function that takes a PSD (positive semidefinite) kernel function to a PSD kernel of the same domain by the formula Vφ(K)(x, x ′) = E f∼N (0,K) φ(f(x))φ(f(x′)). Conjugate Kernel Neural networks are commonly thought of as learning a high-quality embedding of inputs to the latent space represented by the network’s last hidden layer, and then using its final linear layer to read out a classification given the embedding. The conjugate kernel is just the kernel associated to the embedding induced by a random initialization of the neural network. Consider an MLP with widths {nl}l, weight matrices {W l ∈ Rn l×nl−1}l, and biases {bl ∈ Rn l}l, l = 1, . . . , L. For simplicity of exposition, in this paper, we will only consider scalar output nL = 1. Suppose it is parametrized by the NTK parametrization, i.e. its computation is given recursively as h1(x) = σw√ n0 W 1x+ σbb 1 and hl(x) = σw√ nl−1 W lφ(hl−1(x)) + σbb l (MLP) with some hyperparameters σw, σb that are fixed throughout training2. At initialization time, suppose W lαβ , b l α ∼ N (0, 1) for each α ∈ [nl], β ∈ [nl−1]. 
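For concreteness, a minimal numpy sketch of Eq. (MLP) under this initialization; relu is used as the example φ, and the helper names and the convention that the last entry of `widths` is the output dimension are our own:

```python
import numpy as np

def init_mlp(widths, seed=0):
    """Sample W^l_{ab}, b^l_a ~ N(0, 1) iid, as described above.
    widths = [n0, n1, ..., nL] (nL = output dimension)."""
    rng = np.random.default_rng(seed)
    Ws = [rng.standard_normal((widths[l], widths[l - 1])) for l in range(1, len(widths))]
    bs = [rng.standard_normal(widths[l]) for l in range(1, len(widths))]
    return Ws, bs

def mlp_forward(x, Ws, bs, phi=lambda h: np.maximum(h, 0.0),
                sigma_w=np.sqrt(2.0), sigma_b=0.0):
    """NTK-parametrized forward pass of Eq. (MLP):
       h^1 = (sigma_w / sqrt(n0)) W^1 x + sigma_b b^1,
       h^l = (sigma_w / sqrt(n_{l-1})) W^l phi(h^{l-1}) + sigma_b b^l."""
    h = (sigma_w / np.sqrt(len(x))) * (Ws[0] @ x) + sigma_b * bs[0]
    for W, b in zip(Ws[1:], bs[1:]):
        h = (sigma_w / np.sqrt(len(h))) * (W @ phi(h)) + sigma_b * b
    return h

# Example: a scalar-output relu MLP with two hidden layers of width 1000 on R^128.
Ws, bs = init_mlp([128, 1000, 1000, 1])
print(mlp_forward(np.random.default_rng(1).standard_normal(128), Ws, bs))
```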
It can be shown that, for each α ∈ [nl], hlα is a Gaussian process with zero mean and kernel function Σl in the limit as all hidden layers become infinitely wide (nl →∞, l = 1, . . . , L− 1), where Σl is defined inductively on l as Σ1(x, x′) def = σ2w(n 0)−1〈x, x′〉+ σ2b , Σl def = σ2wVφ(Σ l−1) + σ2b (CK) The kernel ΣL corresponding the the last layer L is the network’s conjugate kernel, and the associated Gaussian process limit is the reason for its alternative name Neural Network-Gaussian process kernel. In short, if we were to train a linear model with features given by the embedding x 7→ hL−1(x) when the network parameters are randomly sampled as above, then the CK is the kernel of this linear model. See Daniely et al. (2016); Lee et al. (2018) and Appendix F for more details. Neural Tangent Kernel On the other hand, the NTK corresponds to training the entire model instead of just the last layer. Intuitively, if we let θ be the entire set of parameters {W l}l ∪ {bl}l of Eq. (MLP), then for θ close to its initialized value θ0, we expect hL(x; θ)− hL(x; θ0) ≈ 〈∇θhL(x; θ0), θ − θ0〉 via a naive first-order Taylor expansion. In other words, hL(x; θ)− hL(x; θ0) behaves like a linear model with feature of x given by the gradient taken w.r.t. the initial network, ∇θhL(x; θ0), and the weights of this linear model are the deviation θ− θ0 of θ from its initial value. It turns out that, in the limit as all hidden layer widths tend to infinity, this intuition is correct (Jacot et al., 2018; Lee et al., 2018; Yang, 2019), and the following inductive formula computes the corresponding infinite-width kernel of this linear model: Θ1 def = Σ1, Θl(x, x′) def = Σl(x, x′) + σ2wΘ l−1(x, x′)Vφ′(Σ l−1)(x, x′). (NTK) Computing CK and NTK While in general, computing Vφ and Vφ′ requires evaluating a multivariate Gaussian expectation, in specific cases, such as when φ = relu or erf , there exists explicit, efficient formulas that only require pointwise evaluation of some simple functions (see Facts F.1 and F.2). This allows us to evaluate CK and NTK on a set X of inputs in only time O(|X |2L). 2SGD with learning rate α in this parametrization is roughly equivalent to SGD with learning rate α/width in the standard parametrization with Glorot initialization; see Lee et al. (2018) What Do the Spectra of CK and NTK Tell Us? In summary, the CK governs the distribution of a randomly initialized neural network and also the properties of training only the last layer of a network, while the NTK governs the dynamics of training (all parameters of) a neural network. A study of their spectra thus informs us of the “implicit prior” of a randomly initialized neural network as well as the “implicit bias” of GD in the context of training neural networks. In regards to the implicit prior at initialization, we know from Lee et al. (2018) that a randomly initialized network as in Eq. (MLP) is distributed as a Gaussian process N (0,K), where K is the corresponding CK, in the infinite-width limit. If we have the eigendecomposition K = ∑ i≥1 λiui ⊗ ui (1) with eigenvalues λi in decreasing order and corresponding eigenfunctions ui, then each sample from this GP can be obtained as ∑ i≥1 √ λiωiui, ωi ∼ N (0, 1). If, for example, λ1 ∑ i≥2 λi, then a typical sample function is just a very small perturbation of u1. We will see that for relu, this is indeed the case (Section 4), and this explains the “simplicity bias” in relu networks found by Valle-Pérez et al. (2018). 
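As an illustration, here is a minimal numpy sketch (ours) of the recursions Eqs. (CK) and (NTK) for relu, using the closed forms of Fact F.1; we adopt the convention that L counts the affine layers, so the returned Gram matrices are Σ^L and Θ^L on the rows of X:

```python
import numpy as np

def relu_ck_ntk(X, L, sw2=2.0, sb2=0.0):
    """Gram matrices of the CK (Sigma^L) and NTK (Theta^L) of a relu MLP
    on the rows of X (shape (n, d)), via Eqs. (CK), (NTK) and Fact F.1."""
    d = X.shape[1]
    Sigma = sw2 * (X @ X.T) / d + sb2             # Sigma^1
    Theta = Sigma.copy()                          # Theta^1 = Sigma^1
    for _ in range(L - 1):                        # layers 2, ..., L
        diag = np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
        c = np.clip(Sigma / diag, -1.0, 1.0)
        Vphi = diag * (np.sqrt(1.0 - c ** 2) + (np.pi - np.arccos(c)) * c) / (2.0 * np.pi)
        Vdphi = (np.pi - np.arccos(c)) / (2.0 * np.pi)
        Sigma_new = sw2 * Vphi + sb2              # Sigma^l = sw2 V_phi(Sigma^{l-1}) + sb2
        Theta = Sigma_new + sw2 * Theta * Vdphi   # Theta^l as in Eq. (NTK)
        Sigma = Sigma_new
    return Sigma, Theta

# Example: kernels of a 4-layer relu MLP on 5 inputs projected to the sqrt(d) sphere.
X = np.random.default_rng(0).standard_normal((5, 128))
X *= np.sqrt(128) / np.linalg.norm(X, axis=1, keepdims=True)
Sigma, Theta = relu_ck_ntk(X, L=4)
```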
Training the last layer of a randomly initialized network via full batch gradient descent for an infinite amount of time corresponds to Gaussian process inference with kernel K (Lee et al., 2018; 2019). A similar intuition holds for NTK: training all parameters of the network (Eq. (MLP)) for an infinite amount of time yields the mean prediction of the GPN (0,NTK) in expectation; see Lee et al. (2019) and Appendix F.4 for more discussion. Thus, the more the GP prior (governed by the CK or the NTK) is consistent with the ground truth function f∗, the more we expect the Gaussian process inference and GD training to generalize well. We can measure this consistency in the “alignment” between the eigenvalues λi and the squared coefficients a2i of f ∗’s expansion in the {ui}i basis. The former can be interpreted as the expected magnitude (squared) of the ui-component of a sample f ∼ N (0,K), and the latter can be interpreted as the actual magnitude squared of such component of f∗. In this paper, we will investigate an even cleaner setting where f∗ = ui is an eigenfunction. Thus we would hope to use a kernel whose ith eigenvalue λi is as large as possible. Neural Kernels From the forms of the equation Eqs. (CK) and (NTK) and the fact that Vφ(K)(x, x ′) only depends on K(x, x),K(x, x′), and K(x′, x′), we see that CK or NTK of MLPs takes the form K(x, y) = Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) (2) for some function Φ : R3 → R. We will refer to this kind of kernel as Neural Kernel in this paper. Kernels as Integral Operators We will consider input spaces of various forms X ⊆ Rd equipped with some probability measure. Then a kernel function K acts as an integral operator on functions f ∈ L2(X ) by Kf(x) = (Kf)(x) = E y∼X K(x, y)f(y). We will use the “juxtaposition syntax” Kf to denote this application of the integral operator. 3 Under certain assumptions, it then makes sense to speak of the eigenvalues and eigenfunctions of the integral operator K. While we will appeal to an intuitive understanding of eigenvalues and eigenfunctions in the main text below, we include a more formal discussion of Hilbert-Schmidt operators and their spectral theory in Appendix G for completeness. In the next section, we investigate the eigendecomposition of neural kernels as integral operators over different distributions. 3In cases when X is finite, K can be also thought of as a big matrix and f as a vector — but do not confuse Kf with their multiplication! If we use · to denote matrix multiplication, then the operator application Kf is the same as the matrix multiplication K ·D · f where D is the diagonal matrix encoding the probability values of each point in X . 3 THE SPECTRA OF NEURAL KERNELS 3.1 BOOLEAN CUBE We first consider a neural kernelK on the boolean cubeX = ddef= {±1}d, equipped with the uniform measure. In this case, since each x ∈ X has the same norm, K(x, y) = Φ ( 〈x,y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) effectively only depends on 〈x, y〉, so we will treat Φ as a single variate function in this section, Φ(c) = Φ(c, 1, 1). Brief review of basic Fourier analysis on the boolean cube d (O’Donnell (2014)). The space of real functions on d forms a 2d-dimensional space. Any such function has a unique expansion into a multilinear polynomial (polynomials whose monomials do not contain xpi , p ≥ 2, of any variable xi). For example, the majority function over 3 bits has the following unique multilinear expansion maj3 : 3 → 1, maj3(x1, x2, x3) = 1 2 (x1 + x2 + x3 − x1x2x3). 
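A quick numerical check of this expansion (our own snippet):

```python
import numpy as np
from itertools import product

# Verify maj3(x) = (x1 + x2 + x3 - x1*x2*x3) / 2 on all of {-1, 1}^3.
for x1, x2, x3 in product([-1, 1], repeat=3):
    assert np.sign(x1 + x2 + x3) == (x1 + x2 + x3 - x1 * x2 * x3) / 2
```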
In the language of Fourier analysis, the 2d multilinear monomial functions χS(x) def = xS def = ∏ i∈S xi, for each S ⊆ [d] (3) form a Fourier basis of the function space L2( d) = {f : d → R}, in the sense that their inner products satisfy E x∼ d χS(x)χT (x) = I(S = T ). Thus, any function f : d → R can be always written as f(x) = ∑ S⊆[d] f̂(S)χX(x) for a unique set of coefficients {f̂(S)}S⊆[d]. It turns out that K is always diagonalized by this Fourier basis {χS}S⊆[d]. Theorem 3.1. On the d-dimensional boolean cube d, for every S ⊆ [d], χS is an eigenfunction of K with eigenvalue µ|S| def = E x∈ d xSK(x,1) = E x∈ d xSΦ (∑ i xi/d ) , (4) where 1 = (1, . . . , 1) ∈ d. This definition of µ|S| does not depend on the choice S, only on the cardinality of S. These are all of the eigenfunctions of K by dimensionality considerations.4 Define T∆ to be the shift operator on functions over [−1, 1] that sends Φ(·) to Φ(· −∆). Then we can re-express the eigenvalue as follows. Lemma 3.2. With µk as in Thm 3.1, µk = 2 −d(I − T∆)k(I + T∆)d−kΦ(1) (5) = 2−d d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) (6) where Cd−k,kr def = ∑ j=0 (−1)r+j ( d− k j )( k r − j ) . (7) Eq. (5) will be important for computational purposes, and we will come back to discuss this more in Section 3.5. It also turns out µk affords a pretty expression via the Fourier series coefficients of Φ. As this is not essential to the main text, we relegate its exposition to Appendix H.1. 4Readers familiar with boolean Fourier analysis may be reminded of the noise operator Tρ, ρ ≤ 1 (O’Donnell, 2014, Defn 2.46). In the language of this work, Tρ is a neural kernel with eigenvalues µk = ρk. 3.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will treat Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics, and the eigenvalues are given by the coefficients of Φ against a system of orthogonal polynomials called Gegenbuaer polynomials. We relegate a complete review of this topic to Appendix H.2. 3.3 ISOTROPIC GAUSSIAN Now let’s consider X = Rd equipped with standard isotropic Gaussian N (0, I), so that K behaves like Kf(x) = E y∼N (0,I) K(x, y)f(y) = E y∼N (0,I) Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) f(y) for any f ∈ L2(N (0, I)). In contrast to the previous two sections, K will essentially depend on the effect of the norms ‖x‖ and ‖y‖ on Φ. Nevertheless, because an isotropic Gaussian vector can be obtained by sampling its direction uniformly from the sphere and its magnitude from a chi distribution, K can still be partially diagonalized into a sum of products between spherical harmonics and kernels on R equipped with a chi distribution (Thm H.14). In certain cases, we can obtain complete eigendecompositions, for example when Φ is positive homogeneous. See Appendix H.3 for more details. 3.4 KERNEL IS SAME OVER BOOLEAN CUBE, SPHERE, OR GAUSSIAN WHEN d 1 The reason we have curtailed a detailed discussion of neural kernels on the sphere and on the standard Gaussian is because, in high dimension, the kernel behaves the same under these distributions as under uniform distribution over the boolean cube. Indeed, by intuition along the lines of the central limit theorem, we expect that uniform distribution over a high dimension boolean cube should approximate high dimensional standard Gaussian. 
Similarly, by concentration of measure, most of the mass of a Gaussian is concentrated around a thin shell of radius √ d. Thus, morally, we expect the same kernel function K induces approximately the same integral operator on these three distributions in high dimension, and as such, their eigenvalues should also approximately coincide. We verify empirically and theoretically this is indeed the case in Appendix H.4. 3.5 COMPUTING THE EIGENVALUES As the eigenvalues of K over the different distributions are very close, we will focus in the rest of this paper on eigenvalues over the boolean cube. This has the additional benefit of being much easier to compute. Each eigenvalue over the sphere and the standard Gaussian requires an integration of Φ against a Gegenbauer polynomial. In high dimension d, these Gegenbauer polynomials varies wildly in a sinusoidal fashion, and blows up toward the boundary (see Fig. 15 in the Appendix). As such, it is difficult to obtain a numerically stable estimate of this integral in an efficient manner when d is large. In contrast, we have multiple ways of computing boolean cube eigenvalues, via Eqs. (5) and (6). In either case, we just take some linear combination of the values of Φ at a grid of points on [−1, 1], spaced apart by ∆ = 2/d. While the coefficients Cd−k,kr (defined in Eq. (7)) are relatively efficient to compute, the change in the sign of Cd−k,kr makes this procedure numerically unstable for large d. Instead, we use Eq. (5) to isolate the alternating part to evaluate in a numerically stable way: Since µk = ( I+T∆ 2 )d−k ( I−T∆ 2 )k Φ(1), we can evaluate Φ̃ def= ( I−T∆ 2 )k Φ via k finite differences, and then compute ( I + T∆ 2 )d−k Φ̃(1) = 1 2d−k d−k∑ r=0 ( d− k r ) Φ̃(1− r∆). (8) When Φ arises from the CK or the NTK of an MLP, all derivatives of Φ at 0 are nonnegative (Thm I.3). Thus intuitively, the finite difference Φ̃ should be also all nonnegative, and this sum can be evaluated without worry about floating point errors from cancellation of large terms. A slightly more clever way to improve the numerical stability when 2k ≤ d is to note that (I + T∆)d−k (I − T∆)k Φ(1) = (I + T∆)d−2k ( I − T 2∆ )k Φ(1) = (I + T∆)d−2k (I − T2∆)k Φ(1). So an improved algorithm is to first compute the kth finite difference (I − T2∆)k with the larger step size 2∆, then compute the sum (I + T∆)d−2k as in Eq. (8). 4 CLARIFYING THE “SIMPLICITY BIAS” OF RANDOM NEURAL NETWORKS As mentioned in the introduction, Valle-Pérez et al. (2018) claims that neural networks are biased toward simple functions. We show that this phenomenon depends crucially on the nonlinearity, the sampling variances, and the depth of the network. In Fig. 1(a), we have repeated their experiment for 104 random functions obtained by sampling relu neural networks with 2 hidden layers, 40 neurons each, following Valle-Pérez et al. (2018)’s architectural choices5. We also do the same for erf networks of the same depth and width, varying as well the sampling variances of the weights and biases, as shown in the legend. As discussed in Valle-Pérez et al. (2018), for relu, there is indeed this bias, where a single function gets sampled more than 10% of the time. However, for erf, as we increase σ2w, we see this bias disappear, and every function in the sample gets sampled only once. This phenomenon can be explained by looking at the eigendecomposition of the CK, which is the Gaussian process kernel of the distribution of the random networks as their hidden widths tend to infinity. In Fig. 
1(b), we plot the normalized eigenvalues {µk/ ∑7 i=0 ( 7 i ) µi}7k=0 for the CKs corresponding to the networks sampled in Fig. 1(a). Immediately, we see that for relu and σ2w = σ 2 b = 2, the degree 0 eigenspace, corresponding to constant functions, accounts for more than 80% of the variance. This means that a typical infinite-width relu network of 2 layers is expected to be almost constant, and this should be even more true after we threshold the network to be a boolean function. On the other hand, for erf and σb = 0, the even degree µks all vanish, and most of the variance comes from degree 1 components (i.e. linear functions). This concentration in degree 1 also lessens as σ2w increases. But because this variance is spread across a dimension 7 eigenspace, we don’t see duplicate function samples nearly as much as in the relu case. As σw increases, we also see the eigenvalues become more equally distributed, which corresponds to the flattening of 5Valle-Pérez et al. (2018) actually performed their experiments over the {0, 1}7 cube, not the {±1}7 cube we are using here. This does not affect our conclusion. See Appendix J for more discussion the probability-vs-rank curve in Fig. 1(a). Finally, we observe that a 32-layer erf network with σ2w = 4 has all its nonzero eigenvalues (associated to odd degrees) all equal (see points marked by ∗ in Fig. 1(b)). This means that its distribution is a "white noise" on the space of odd functions, and the distribution of boolean functions obtained by thresholding the Gaussian process samples is the uniform distribution on odd functions. This is the complete lack of simplicity bias modulo the oddness constraint. However, from the spectral perspective, there is a weak sense in which a simplicity bias holds for all neural network-induced CKs and NTKs. Theorem 4.1 (Weak Spectral Simplicity Bias). Let K be the CK or NTK of an MLP on a boolean cube d. Then the eigenvalues µk, k = 0, . . . , d, satisfy µ0 ≥ µ2 ≥ · · · ≥ µ2k ≥ · · · , µ1 ≥ µ3 ≥ · · · ≥ µ2k+1 ≥ · · · . (9) Even though it’s not true that the fraction of variance contributed by the degree k eigenspace is decreasing with k, the eigenvalue themselves will be in a nonincreasing pattern across even and odd degrees. In fact, if we fix k and let d→∞, then we can show that (Thm I.6) µk = Θ(d −k). Of course, as we have seen, this is a very weak sense of simplicity bias, as it doesn’t prevent “white noise” behavior as in the case of erf CK with large σ2w and large depth. 5 DEEPER NETWORKS LEARN MORE COMPLEX FEATURES In the rest of this work, we compute the eigenvalues µk over the 128-dimensional boolean cube ( d, with d = 128) for a large number of different hyperparameters, and analyze how the latter affect the former. We vary the degree k ∈ [0, 8], the nonlinearity between relu and erf, the depth (number of hidden layers) from 1 to 128, and σ2b ∈ [0, 4]. We fix σ2w = 2 for relu kernels, but additionally vary σ2w ∈ [1, 5] for erf kernels. Comprehensive contour plots of how these hyperparameters affect the kernels are included in Appendix D, but in the main text we summarize several trends we see. We will primarily measure the change in the spectrum by the degree k fractional variance, which is just degree k fractional variance def= ( d k ) µk∑d i=0 ( d i ) µi . This terminology comes from the fact that, if we were to sample a function f from a Gaussian process with kernel K, then we expect that r% of the total variance of f comes from degree k components of f , where r% is the degree k fractional variance. 
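To make these computations concrete, here is a minimal sketch (helper names ours) of computing the boolean-cube eigenvalues µk from Φ via the numerically stable scheme of Section 3.5 (Eq. (8), without the 2∆ refinement), followed by the degree-k fractional variances; `Phi` is assumed to be the univariate function with K(x, y) = Φ(⟨x, y⟩/d):

```python
import numpy as np
from scipy.special import comb

def boolean_cube_eigenvalues(Phi, d):
    """mu_k, k = 0..d, of K(x, y) = Phi(<x, y>/d) over the uniform measure on
    {-1, 1}^d, via mu_k = ((I+T)/2)^(d-k) ((I-T)/2)^k Phi(1) with step 2/d."""
    Delta = 2.0 / d
    cur = np.array([Phi(1.0 - r * Delta) for r in range(d + 1)])  # Phi on the grid
    mus = np.zeros(d + 1)
    for k in range(d + 1):
        # cur[r] holds ((I-T)/2)^k Phi evaluated at 1 - r*Delta
        r = np.arange(d - k + 1)
        mus[k] = comb(d - k, r).dot(cur) / 2.0 ** (d - k)
        cur = (cur[:-1] - cur[1:]) / 2.0          # one more halved finite difference
    return mus

def fractional_variance(mus):
    """Degree-k fractional variance C(d, k) mu_k / sum_j C(d, j) mu_j."""
    d = len(mus) - 1
    w = comb(d, np.arange(d + 1)) * mus
    return w / w.sum()

# Sanity check: the linear kernel Phi(t) = t has mu_1 = 1/d and all other mu_k = 0.
mus = boolean_cube_eigenvalues(lambda t: t, d=128)
```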
If we were to try to learn a homogeneous degree-k polynomial using a kernel K, intuitively we should try to choose K such that its µk is maximized, relative to other eigenvalues. Fig. 3(a) shows that this is indeed the case even with neural networks: over a large number of different hyperparameter settings, degree k fractional variance is inversely related to the validation loss incurred when learning a degree k polynomial. However, this plot also shows that there does not seem like a precise, clean relationship between fractional variance and validation loss. Obtaining a better measure for predicting generalization is left for future work. Before we continue, we remark that the fractional variance of a fixed degree k converges to a fixed value as the input dimension d→∞: Theorem 5.1 (Asymptotic Fractional Variance). Let K be the CK or NTK of an MLP on a boolean cube d. ThenK can be expressed asK(x, y) = Φ(〈x, y〉/d) for some analytic function Φ : R→ R. If we fix k and let the input dimension d→∞, then the fractional variance of degree k converges to (k!)−1Φ(k)(0)/Φ(1) = (k!)−1Φ(k)(0)∑ j≥0(j!) −1Φ(j)(0) where Φ(k) denotes the kth derivative of Φ. For the fractional variances we compute in this paper, their values at d = 128 are already very close to their d→∞ limit, so we focus on the d = 128 case experimentally. If K were to be the CK or NTK of a relu or erf MLP, then we find that for higher k, the depth of the network helps increase the degree k fractional variance. In Fig. 2(a) and (b), we plot, for each degree k, the depth that (with some combination of other hyperparameters like σ2b ) achieves this maximum, for respectively relu and erf kernels. Clearly, the maximizing depths are increasing with k for relu, and also for erf when considering either odd k or even k only. The slightly differing behavior between even and odd k is expected, as seen in the form of Thm 4.1. Note the different scales of y-axes for relu and erf — the depth effect is much stronger for erf than relu. For relu NTK and CK, σ2b = 0 maximizes fractional variance in general, and the same holds for erf NTK and CK in the odd degrees (see Appendix D). In Fig. 2(c) and Fig. 2(d) we give a more fine-grained look at the σ2b = 0 slice, via heatmaps of fractional variance against degree and depth. Brighter color indicates higher variance, and we see the optimal depth for each degree k clearly increases with k for relu NTK, and likewise for odd degrees of erf NTK. However, note that as k increases, the difference between the maximal fractional variance and those slightly suboptimal becomes smaller and smaller, reflected by suppressed range of color moving to the right. The heatmaps for relu and erf CKs look similar and are omitted. We verify this increase of optimal depth with degree in Fig. 3(b). There we have trained relu networks of varying depth against a ground truth multilinear polynomial of varying degree. We see clearly that the optimal depth is increasing with degree. We also verify this phenomenon when the input distribution changes to the standard Gaussian or the uniform distribution over the sphere √ dSd−1; see Fig. 6. Note that implicit in our results here is a highly nontrivial observation: Past some point (the optimal depth), high depth can be detrimental to the performance of the network, beyond just the difficulty to train, and this detriment can already be seen in the corresponding NTK or CK. In particular, it’s not true that the optimal depth is infinite. 
We confirm the existence of such an optimal depth even in real distributions like MNIST and CIFAR10; see Fig. 7. This adds significant nuance to the folk wisdom that “depth increases expressivity and allows neural networks to learn more complex features.” 6 NTK FAVORS MORE COMPLEX FEATURES THAN CK We generally find the degree k fractional variance of NTK to be higher than that of CK when k is large, and vice versa when k is small, as shown in Fig. 4. This means that, if we train only the last layer of a neural network (i.e. CK dynamics), we intuitively should expect to learn simpler features better, while, if we train all parameters of the network (i.e. NTK dynamics), we should expect to learn more complex features better. Similarly, if we were to sample a function from a Gaussian process with the CK as kernel (recall this is just the distribution of randomly initialized infinite width MLPs (Lee et al., 2018)), this function is more likely to be accurately approximated by low degree polynomials than the same with the NTK. We verify this intuition by training a large number of neural networks against ground truth functions of various homogeneous polynomials of different degrees, and show a scatterplot of how training the last layer only measures against training all layers (Fig. 3(c)). This phenomenon remains true over the standard Gaussian or the uniform distribution on the sphere (Fig. 8). Consistent with our theory, the only place training the last layer works meaningfully better than training all layers is when the ground truth is a constant function. However, we reiterate that fractional variance is an imperfect indicator of performance. Even though for erf neural networks and k ≥ 1, degree k fractional variance of NTK is not always greater than that of the CK, we do not see any instance where training the last layer of an erf network is better than training all layers. We leave an investigation of this discrepancy to future work. 7 PREDICTING THE MAXIMUM LEARNING RATE In any setup that tries to push deep learning benchmarks, learning rate tuning is a painful but indispensable part. In this section, we show that our spectral theory can accurately predict the maximal nondiverging learning rate over real datasets as well as toy input distributions, which would help set the correct upper limit for a learning rate search. By Jacot et al. (2018), in the limit of large width and infinite data, the function g : X → R represented by our neural network evolves like gt+1 = gt − 2αK(gt − g∗), t = 0, 1, 2, . . . , (10) when trained under full batch GD (with the entire population) with L2 loss L(f, g) = Ex∼X (f(x)− g(x))2, ground truth g∗, and learning rate α, starting from randomly initialization. If we train only the last layer, then K is the CK; if we train all layers, then K is the NTK. Given an eigendecomposition of K as in Eq. (1), if g0 − g∗ = ∑ i aiui is the decomposition of g 0 in the eigenbasis {ui}i, then one can easily deduce that gt − g∗ = ∑ i ai(1− 2αλi)tui. Consequently, we must have α < (maxi λi) −1 in order for Eq. (10) to converge 6 When the input distribution is the uniform distribution over d, the maximum learning rate is max(µ0, µ1) by Thm 4.1. By Thm 5.1, as long as the Φ function corresonding to K has Φ(0) 6= 0, when d is large, we expect µ0 ≈ Φ(0) but µ1 ∼ d−1Φ′(0) µ0. Therefore, we should predict 1Φ(0) for the maximal learning rate when training on the boolean cube. However, as Fig. 
5 shows, this prediction is accurate not only for the boolean cube, but also over the sphere, the standard Gaussian, and even MNIST and CIFAR10! 8 CONCLUSION In this work, we have taken a first step at studying how hyperparameters change the initial distribution and the generalization properties of neural networks through the lens of neural kernels and their spectra. We obtained interesting insights by computing kernel eigenvalues over the boolean cube and relating them to generalization through the fractional variance heuristic. While it inspired valid predictions that are backed up by experiments, fractional variance is clearly just a rough indicator. We hope future work can refine on this idea to produce a much more precise prediction of test loss. Nevertheless, we believe the spectral perspective is the right line of research that will not only shed light on mysteries in deep learning but also inform design choices in practice. A RELATED WORKS The Gaussian process behavior of neural networks was found by Neal (1995) for shallow networks and then extended over the years to different settings and architectures (Williams, 1997; Le Roux and Bengio, 2007; Hazan and Jaakkola, 2015; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018). This connection was exploited implicitly or explicitly to build new models (Cho and Saul, 2009; Lawrence and Moore, 2007; Damianou and Lawrence, 2013; Wilson et al., 2016a;b; Bradshaw et al., 2017; van der Wilk et al., 2017; Kumar et al., 2018; Blomqvist et al., 2018; Borovykh, 2018; Garriga-Alonso et al., 2018; Novak et al., 2018; Lee et al., 2018). The Neural Tangent Kernel is a much more recent discovery by Jacot et al. (2018) and later Allen-Zhu et al. (2018a;c;b); Du et al. (2018); Arora et al. (2019b); Zou et al. (2018) came upon the same reasoning independently. Like CK, NTK has also been applied toward building new models or algorithms (Arora et al., 2019a; Achiam et al., 2019). Closely related to the discussion of CK and NTK is the signal propagation literature, which tries to understand how to prevent pathological behaviors in randomly initialized neural networks when they are deep (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017; 2018; Hanin, 2018; Hanin and Rolnick, 2018; Chen et al., 2018; Yang et al., 2019; Pennington et al., 2017a; Hayou et al., 2018; Philipp and Carbonell, 2018). This line of work can trace its original at least to the advent of the Glorot and He initialization schemes for deep networks (Glorot and Bengio, 2010; He et al., 2015). The investigation of forward signal propagation, or how random neural networks change with depth, corresponds to studying the infinite-depth limit of CK, and the investigation of backward signal propagation, or how gradients of random networks change with depth, corresponds to studying the infinite-depth limit of NTK. Some of the quite remarkable results from this literature includes how to train a 10,000 layer CNN (Xiao et al., 2017) and that, counterintuitively, batch normalization causes gradient explosion (Yang et al., 2019). This signal propagation perspective can be refined via random matrix theory (Pennington et al., 2017a; 2018). In these works, free probability is leveraged to compute the singular value distribution of the input-output map given by the random neural network, as the input dimension and width tend to infinity together. 
Other works also investigate various questions of neural network training and generalization from the random matrix perspective (Pennington and Worah, 2017; Pennington and Bahri, 2017; Pennington and Worah, 2018). Yang (2019) presents a common framework, known as Tensor Programs, unifying the GP, NTK, signal propagation, and random matrix perspectives, as well as extending them to new scenarios, like recurrent neural networks. It proves the existence of and allows the computation of a large number of infinite-width limits (including ones relevant to the above perspectives) by expressing the quantity of interest as the output of a computation graph and then manipulating the graph mechanically. Several other works also adopt a spectral perspective on neural networks (Candès, 1999; Sonoda and Murata, 2017; Eldan and Shamir, 2016; Barron, 1993; Xu et al., 2018; Zhang et al., 2019; Xu et al., 2019; Xu, 2018); here we highlight a few most relevant to us. Rahaman et al. (2018) studies the real Fourier frequencies of relu networks and performs experiments on real data as well as synthetic ones. They convincingly show that relu networks learn low-frequency components first. They also investigate the subtleties when the data manifold is low dimensional and embedded in various ways in the input space. In contrast, our work focuses on the spectra of the CK and NTK (which indirectly inform the Fourier frequencies of a typical network). Nevertheless, our results are complementary to theirs, as they readily explain the low frequency bias in relu networks that they found. Karakida et al. (2018) studies the spectrum of the Fisher information matrix, which shares its nonzero eigenvalues with the NTK. They compute the mean, variance, and maximum of the Fisher eigenvalues (taking the width to infinity first, and then considering a finite amount of data sampled iid from a Gaussian). In comparison, our spectral results yield all eigenvalues of the NTK (and thus also all nonzero eigenvalues of the Fisher) as well as the eigenfunctions. Finally, we note that several recent works (Xie et al., 2016; Bietti and Mairal, 2019; Basri et al., 2019; Ghorbani et al., 2019) studied one-hidden-layer neural networks over the sphere, building on Smola et al. (2001)’s observation that spherical harmonics diagonalize dot product kernels, with the latter two concurrent to us. This is in contrast to our focus on the boolean cube here, which allows us to study the fine-grained effect of hyperparameters on the spectra, leading to a variety of insights into neural networks’ generalization properties.

B UNIVERSALITY OF OUR BOOLEAN CUBE OBSERVATIONS IN OTHER INPUT DISTRIBUTIONS

Using the spectral theory we developed in this paper, we made three observations that can be roughly summarized as follows: 1) the simplicity bias noted by Valle-Pérez et al. (2018) is not universal; 2) for each function of fixed “complexity” there is an optimal depth, such that networks shallower or deeper will not learn it as well; 3) training the last layer only is better than training all layers when learning “simpler” features, and the opposite is true for learning “complex” features. In this section, we discuss the applicability of these observations to distributions that are not uniform over the boolean cube: in particular, the uniform distribution over the sphere √d S^{d−1}, the standard Gaussian N(0, I_d), as well as realistic data distributions such as MNIST and CIFAR10.

Simplicity bias The simplicity bias noted by Valle-Pérez et al. (2018), in particular Fig.
1, depends on the finiteness of the boolean cube as a domain, so we cannot effectively test this on the distributions above, which all have uncountable support.

Optimal depth With regard to the second observation, we can test whether an optimal depth exists for learning functions over the distributions above. Since polynomial degree remains the natural indicator of complexity for the sphere and the Gaussian (see Appendices H.2 and H.3 for the relevant spectral theory), we replicated the experiment in Fig. 3(b) for these distributions, using the same ground truth functions of polynomials of different degrees. The results are shown in Fig. 6. We see the same phenomenon as in the boolean cube case, with an optimal depth for each degree, and with the optimal depth increasing with degree. For MNIST and CIFAR10, the notion of “feature complexity” is less clear, so we will not test the hypothesis that “optimal depth increases with degree” for these distributions, but only test for the existence of an optimal depth for the ground truth given by the labels of the datasets. We do so by training a large number of MLPs of varying depth on these datasets until convergence, and plot the results in Fig. 7. This figure clearly shows that such an optimal depth exists, and that shallower or deeper networks do monotonically worse as the depth diverges away from this optimal depth. Again, the existence of an optimal depth is not obvious at all, as conventional deep learning wisdom would have one believe that adding depth should always help.

Training last layer only vs training all layers Finally, we repeat the experiment in Fig. 3(c) for the sphere and the standard Gaussian, with polynomials of different degrees as ground truth functions. The results are shown in Fig. 8. We see the same phenomenon as in the boolean cube case: for degree 0 polynomials, training the last layer works better in general, but for higher degree polynomials, training all layers fares better. Note that, unlike the sphere and the Gaussian, whose spectral theory tells us that (harmonic) polynomial degree is a natural notion of complexity, for MNIST and CIFAR10 we have a much less clear idea of what a “complex” or a “simple” feature is. Therefore, we did not attempt a similar experiment on these datasets.

C THEORETICAL VS EMPIRICAL MAX LEARNING RATES UNDER DIFFERENT PREPROCESSING FOR MNIST AND CIFAR10

In the main text Fig. 5, on the MNIST and CIFAR10 datasets, we preprocessed the data by centering and normalizing to the sphere (see Appendix E.2 for a precise description). With this preprocessing, our theory accurately predicts the max learning rate in practice. If we instead use another preprocessing, such as PCA or ZCA, or no preprocessing at all, our theoretical max learning rate 1/Φ(0) is less accurate but still correlated with the empirical one. The only exception seems to be relu networks on PCA- or ZCA-preprocessed CIFAR10. See Fig. 9 (theoretical vs empirical max learning rate under different preprocessing).

D VISUALIZING THE SPECTRAL EFFECTS OF σ_w^2, σ_b^2, AND DEPTH

While in the main text we summarized several trends of interest in several plots, they do not give the entire picture of how eigenvalues and fractional variances vary with σ_w^2, σ_b^2, and depth. Here we try to present this relationship more completely in a series of contour plots. Fig. 10 shows how varying depth and σ_b^2 changes the fractional variances of each degree, for relu CK and NTK.
We are fixing σ_w^2 = 2 in the CK plots, as the fractional variances only depend on the ratio σ_b^2/σ_w^2; even though this is not true for relu NTK, we fix σ_w^2 = 2 there as well for consistency. For erf, however, the fractional variance will crucially depend on both σ_w^2 and σ_b^2, so we present 3D contour plots of how σ_w^2, σ_b^2, and depth change fractional variance in Fig. 13. Complementarily, we also show in Figs. 11 and 12 a few slices of these 3D contour plots for different fixed values of σ_b^2, for erf CK and NTK.

E EXPERIMENTAL DETAILS

E.1 FIG. 3

Fig. 3(a), (b) and (c) differ in the set of hyperparameters they involve (to be specified below), but in all of them, we train relu networks against a randomly generated ground truth multilinear polynomial, with input space {±1}^128 and L2 loss L(f) = E_{x∈{±1}^d}(f(x) − f^*(x))^2.

Training We perform SGD with batch size 1000. In each iteration, we freshly sample a new batch, and we train for a total of 100,000 iterations, so the network potentially sees 10^8 different examples. At every 1000 iterations, we validate the current network on a freshly drawn batch of 10,000 examples. We thus record a total of 100 validation losses, and we take the lowest to be the “best validation loss.”

Generating the Ground Truth Function The ground truth function f^*(x) is generated by first sampling 10 monomials m_1, . . . , m_10 of degree k, then randomly sampling 10 coefficients a_1, . . . , a_10 for them. The final function is obtained by normalizing {a_i} such that the sum of their squares is 1:

f^*(x) := Σ_{i=1}^{10} a_i m_i(x) / sqrt(Σ_{j=1}^{10} a_j^2). (11)

Hyperparameters for Fig. 3(a)
• The learning rate is half the theoretical maximum learning rate, (1/2) max(µ_0, µ_1)^{-1} (see footnote 7)
• Ground truth degree k ∈ {0, 1, 2, 3}
• Depth ∈ {0, . . . , 10}
• activation = relu
• σ_w^2 = 2
• σ_b^2 = 0
• width = 1000
• 10 random seeds per hyperparameter combination
• training last layer (marked “ck”), or all layers (marked “ntk”). In the latter case, we use the NTK parametrization of the MLP (Eq. (MLP)).

Hyperparameters for Fig. 3(b)
• The learning rate is half the theoretical maximum learning rate, (1/2) max(µ_0, µ_1)^{-1}
• Ground truth degree k ∈ {0, 1, 2, 3}
• Depth ∈ {0, . . . , 10}
• activation = relu
• σ_w^2 = 2
• σ_b^2 = 0
• width = 1000
• 100 random seeds per hyperparameter combination
• training last layer weight and bias only

7 Note that, because the L2 loss here is L(f) = E_{x∈{±1}^d}(f(x) − f^*(x))^2, the maximum learning rate is λ_max^{-1} = max(µ_0, µ_1)^{-1} (see Thm 4.1). If we instead adopt the convention L(f) = E_{x∈{±1}^d} (1/2)(f(x) − f^*(x))^2, then the maximum learning rate would be 2λ_max^{-1} = 2 max(µ_0, µ_1)^{-1}.

Algorithm 1 Binary Search for Empirical Max Learning Rate
  upper ← 16 × theoretical max lr
  lower ← 0
  tol ← 0.01 × theoretical max lr
  while |upper − lower| > tol do
    α ← (upper + lower)/2
    Run SGD with learning rate α for 1000 iterations
    if loss diverges then
      upper ← α
    else
      lower ← α
    end if
  end while
  Output: upper

Hyperparameters for Fig. 3(c)
• The learning rate ∈ {0.05, 0.1, 0.5}
• Ground truth degree k ∈ {0, 1, . . . , 6}
• Depth ∈ {1, . . . , 5}
• activation ∈ {relu, erf}
• σ_w^2 = 2 for relu, but σ_w^2 ∈ {1, 2, . . . , 5} for erf
• σ_b^2 ∈ {0, 1, . . . , 4}
• width = 1000
• 1 random seed per hyperparameter combination
• Training all layers, using the NTK parametrization of the MLP (Eq. (MLP))

E.2 MAX LEARNING RATE EXPERIMENTS

Here we describe the experimental details for the experiments underlying Figs. 5 and 9.

Theoretical max learning rate For a fixed setup, we compute Φ according to Eq. (CK) (if only the last layer is trained) or Eq.
(NTK) (if all layers are trained). For ground truth problems where the output is n-dimensional, the theoretical max learning rate is nΦ(0)−1; in particular, the max learning rates for MNIST and CIFAR10 are 10 times those for boolean cube, sphere, and Gaussian. This is because the kernel for an multi-output problem effectively becomes 1 n K⊕n = 1 n K 0 0 0 . . . 0 0 0 K where the 1n factor is due to the 1 n factor in the scaled square loss L(f, f ∗) = Ex∼X 1n ∑n i=1(f(x)i− f∗(x)i) 2. The top eigenvalue for 1nK ⊕n is just 1n times the top eigenvalue for K. Empirical max learning rate For a fixed setup, we perform binary search for the empirical max learning rate as in Algorithm 1. Preprocessing In Fig. 5, for MNIST and CIFAR10, we center and project each image onto the sphere √ dSd−1, where d = 28× 28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. More precisely, we compute the average image x̄ over the entire dataset, and we preprocess each image x as √ d x−x̄‖x−x̄‖ . In Fig. 9, there are three different preprocessing schemes. For “no preprocessing,” we load the MNIST and CIFAR10 data as is. In “PCA128,” we take the top 128 eigencomponents of the data, so that the data has only 128 dimensions. In “ZCA128,” we take the top 128 eigencomponents but rotate it back to the original space, so that the data still has dimension d, where d = 28×28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. Hyperparameters • Target function: For boolean cube, sphere, and standard Gaussian, we randomly sample a degree 1 polynomial as in Eq. (11). For MNIST and CIFAR10, we just use the label in the dataset, encoded as a one-hot vector for square-loss regression. • Depth ∈ {1, 2, 4, 8, 16} • activation ∈ {relu, erf} • σ2w = 2 for relu, but σ2w ∈ {1, 2, . . . , 5} for erf • σ2b ∈ {1, . . . , 4} • width = 1000 • 1 random seed per hyperparameter combination • Training last layer (CK) or all layers (NTK). In the latter case, we use the NTK parametriza- tion of the MLP (Eq. (MLP)). F REVIEW OF THE THEORY OF NEURAL TANGENT KERNELS F.1 CONVERGENCE OF INFINITE-WIDTH KERNELS AT INITIALIZATION Conjugate Kernel Via a central-limit-like intuition, each unit hl(x)α of Eq. (MLP) should behave like a Gaussian as width nl−1 →∞, as it is a sum of a large number of roughly independent random variables (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017). The devil, of course, is in what “roughly independent” means and how to apply the central limit theorem (CLT) to this setting. It can be done, however, (Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018), and in the most general case, using a “Gaussian conditioning” technique, this result can be rigorously generalized to almost any architecture Yang (2019). In any case, the consequence is that, for any finite set S ⊆ X , {hlα(x)}x∈S converges in distribution to N (0,Σl(S, S)), as min{n1, . . . , nl−1} → ∞, where Σl is the CK as given in Eq. (CK). Neural Tangent Kernel By a slightly more involved version of the “Gaussian conditioning” technique, Yang (2019) also showed that, for any x, y ∈ X , 〈∇θhL(x),∇θhL(y)〉 converges almost surely to ΘL(x, y) as the widths tend to infinity, where Θl is the NTK as given in Eq. (NTK). F.2 FAST EVALUATIONS OF CK AND NTK For certain φ like relu or erf, Vφ and V′φ can be evaluated very quickly, so that both the CK and NTK can be computed in O(|X |2L) time, where X is the set of points we want to compute the kernel function over, and L is the number of layers. Fact F.1 (Cho and Saul (2009)). 
For any kernel K Vrelu(K)(x, x ′) = 1 2π ( √ 1− c2 + (π − arccos c)c) √ K(x, x)K(x′, x′) V′relu(K)(x, x ′) = 1 2π (π − arccos c) where c = K(x, x′)/ √ K(x, x)K(x′, x′). Fact F.2 (Neal (1995)). For any kernel K, Verf(K)(x, x ′) = 2 π arcsin K(x, x′)√ (K(x, x) + 0.5)(K(x′, x′) + 0.5) V′erf(K)(x, x ′) = 4 π √ (1 + 2K(x, x))(1 + 2K(x′, x′))− 4K(x, x′)2 . Fact F.3. Let φ(x) = exp(x/σ) for some σ > 0. For any kernel K, Vφ(K)(x, x ′) = exp ( K(x, x) + 2K(x, x′) +K(x′, x′) 2σ2 ) . F.3 LINEAR EVOLUTION OF NEURAL NETWORK UNDER GD Remarkably, the NTK governs the evolution of the neural network function under gradient descent in the infinite-width limit. First, let’s consider how the parameters θ and the neural network function f evolve under continuous time gradient flow. Suppose f is only defined on a finite input space X = {x1, . . . , xk}. We will visualize f(X ) = f(x1) ... f(xk) , ∇fL = ∂L ∂f(x1) ... ∂L ∂f(xk) , θ = θ1 ... θn , ∇θf = ∂f(x1) ∂θ1 · · · ∂f(x k) ∂θ1 ... . . . ... ∂f(x1) ∂θn · · · ∂f(x k) ∂θn (best viewed in color). Then under continuous time gradient descent with learning rate η, ∂t θt = −η∇θL(ft) = −η ∇θft · ∇fL(ft) , ∂t ft = ∇θft > · ∂t θt = −η ∇θft > · ∇θft · ∇fL(ft) = −η Θt · ∇fL(ft) (12) where Θt = ∇θf>t · ∇θft ∈ Rk×k is of course the (finite width) NTK. These equations can be visualized as ∂t = −η · , ∂t = · ∂t = −η · · = −η · Thus f undergoes kernel gradient descent with (functional) loss L(f) and kernel Θt. This kernel Θt of course changes as f evolves, but remarkably, it in fact stays constant for f being an infinitely wide MLP (Jacot et al., 2018): ∂tft = −ηΘ · ∇fL(ft), (Training All Layers) where Θ is the infinite-width NTK corresponding to f . A similar equation holds for the CK Σ if we train only the last layer, ∂tft = −ηΣ · ∇fL(ft). (Training Last Layer) If L is the square loss against a ground truth function f∗, then ∇fL(ft) = 12k∇f‖ft − f ∗‖2 = 1 k (ft−f ∗), and the equations above become linear differential equations. However, typically we only have a training set X train ⊆ X of size far less than |X |. In this case, the loss function is effectively L(f) = 1 2|X train| ∑ x∈X train (f(x)− f∗(x))2, with functional gradient ∇fL(f) = 1 |X train| Dtrain · (f − f∗), where Dtrain is a diagonal matrix of size k × k whose diagonal is 1 on x ∈ X train and 0 else. Then our function still evolves linearly ∂tft = −η(K ·Dtrain) · (ft − f∗) (13) where K is the CK or the NTK depending on which parameters are trained. F.4 RELATIONSHIP TO GAUSSIAN PROCESS INFERENCE. Recall that the initial f0 in Eq. (13) is distributed as a Gaussian process N (0,Σ) in the infinite width limit. As Eq. (13) is a linear differential equation, the distribution of ft will remain a Gaussian process for all t, whether K is CK or NTK. Under suitable conditions, it can be shown that (Lee et al., 2019), in the limit as t→∞, if we train only the last layer, then the resulting function f∞ is distributed as a Gaussian process with mean f̄∞ given by f̄∞(x) = Σ(x,X train)Σ(X train,X train)−1f∗(X train) and kernel Var f∞ given by Var f∞(x, x ′) = Σ(x, x′)− Σ(x,X train)Σ(X train,X train)−1Σ(X train, x′). These formulas precisely described the posterior distribution of f given prior N (0,Σ) and data {(x, f∗(x))}x∈X train . If we train all layers, then similarly as t→∞, the function f∞ is distributed as a Gaussian process with mean f̄∞ given by (Lee et al., 2019) f̄∞(x) = Θ(x,X train)Θ(X train,X train)−1f∗(X train). 
This is, again, the mean of the Gaussian process posterior given prior N (0,Θ) and the training data {(x, f∗(x))}x∈X train . However, the kernel of f∞ is no longer the kernel of this posterior, but rather is an expression involving both the NTK Θ and the CK Σ; see Lee et al. (2019). In any case, we can make the following informal statement in the limit of large width Training the last layer (resp. all layers) of an MLP infinitely long, in expectation, yields the mean prediction of the GP inference given prior N (0,Σ) (resp. N (0,Θ)). G A BRIEF REVIEW OF HILBERT-SCHMIDT OPERATORS AND THEIR SPECTRAL THEORY In this section, we briefly review the theory of Hilbert-Schmidt kernels, and more importantly, to properly define the notion of eigenvalues and eigenfunctions. A function K : X 2 → R is called a Hilbert-Schmidt operator if K ∈ L2(X × X ), i.e. ‖K‖2HS def = E x,y∼X K(x, y)2 <∞. ‖K‖2HS is known as the Hilbert-Schmidt norm of K. K is called symmetric if K(x, y) = K(y, x) and positive definite (resp. semidefinite) if E x,y∼X f(x)K(x, y)f(y) > 0 (resp. ≥ 0) for all f ∈ L2(X ) not a.e. zero. A spectral theorem (Mercer’s theorem) holds for Hilbert-Schmidt operators. Fact G.1. If K is a symmetric positive semidefinite Hilbert-Schmidt kernel, then there is a sequence of scalars λi ≥ 0 (eigenvalues) and functions fi ∈ L2(X ) (eigenfunctions), for i ∈ N, such that ∀i, j, 〈fi, fj〉 = I(i = j), and K(x, y) = ∑ i∈N λifi(x)fi(y) where the convergence is in L2(X × X ) norm. This theorem allows us to speak of the eigenfunctions and eigenvalues, which are important for training and generalization considerations when K is a kernel used in machine learning, as discussed in the main text. A sufficient condition for K to be a Hilbert-Schmidt kernel in our case (concerning only probability measure on X ) is just that K is bounded. All Ks in this paper satisfy this property. H EIGENDECOMPOSITION OF NEURAL KERNEL ON DIFFERENT DOMAINS H.1 BOOLEAN CUBE From the Fourier Series Perspective. We continue from the discussion of the boolean cube in the main text. Recall that T∆ is the shift operator on functions that sends Φ(·) to Φ(· −∆). Notice that, if we let Φ(t) = eκt for some κ ∈ C, then T∆Φ(s) = e−κ∆ · eκt. Thus Φ is an “eigenfunction” of the operator T∆ with eigenvalue e−κ∆. In particular, this implies that Proposition H.1. Suppose Φ(t) = et/σ 2 , as in the case whenK is the CK or NTK of a 1-layer neural network with nonlinearity exp(·/σ), up to multiplicative constant (Fact F.3). Then the eigenvalue µk over the boolean cube d equals µk = 2 −d(1− exp(−∆/σ2))k(1 + exp(−∆/σ2))d−k · exp(1/σ2) where ∆ = 2/d. It would be nice if we can express any Φ as a linear combination of exponentials, so that Eq. (5) simplifies in the fashion of Prop H.1 — this is precisely the idea of Fourier series. We will use the theory of Fourier analysis on the circle, and for this we need to discuss periodic functions. Let Φ̃ : [−2, 2]→ R be defined as Φ̃(x) = Φ(x) if x ∈ [−1, 1] Φ(2− x) if x ∈ [1, 2] Φ(−2− x) if x ∈ [−2,−1]. See Fig. 14 for an example illustration. Note that if Φ is continuous on [−1, 1], then Φ̃ is continuous as a periodic function on [−2, 2]. The Fourier basis on functions over [−2, 2] is the collection {t 7→ e 12πist}s∈Z. Under generic conditions (for example if Ψ ∈ L2[−2, 2]), a function Ψ has an associated Fourier series ∑ s∈Z Ψ̂(s)e 1 2πist. We briefly review basic facts of Fourier analysis on the circle. Recall the following notion of functions of bounded variation. Definition H.2. 
A function f : [a, b]→ R is said to have bounded variation if sup P nP−1∑ i=0 |f(xi+1)− f(xi)| <∞, where the supremum is taken over all partitions P of the interval [a, b], P = {x0, . . . , xnP }, x0 ≤ x1 ≤ · · · ≤ xnP . Intuitively, a function of bounded variation has a graph (in [a, b]× R) of finite length. Fact H.3 (Katznelson (2004)). A bounded variation function f : [−2, 2]→ R that is periodic (i.e. f(−2) = f(2)) has a pointwise-convergent Fourier series: lim T→∞ ∑ s∈[−T,T ] Ψ̂(s)e 1 2πist → Ψ(t), ∀t ∈ [−2, 2]. From this fact easily follows the following lemma. Lemma H.4. Suppose Φ is continuous and has bounded variation on [−1, 1]. Then Φ̃ is also continuous and has bounded variation, and its Fourier Series (on [−2, 2]) converges pointwise to Φ̃. Proof. Φ̃ is obviously continuous and has bounded variation as well, and from Fact H.3, we know a periodic continuous function with bounded variation has a pointwise-convergent Fourier Series. Certainly, T∆ sends continuous bounded variation functions to continuous bounded variation functions. Because T∆e 1 2πist = e− 1 2πis∆e 1 2πist, T∆ ∑ s∈Z Ψ̂(s)e 1 2πist = ∑ s∈Z Ψ̂(s)e− 1 2πis∆e 1 2πist whenever both sides are well defined. If Ψ is continuous and has bounded variation then T∆Ψ is also continuous and has bounded variation, and thus its Fourier series, the RHS above, converges pointwise to T∆Ψ. Now, observe (I − T∆)k(I + T∆)d−kΦ̃(x) = d∑ r=0 Cd−k,kr Φ̃ (x− r∆) (I − T∆)k(I + T∆)d−kΦ̃(1) = d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) = µk Expressing the LHS in Fourier basis, we obtain Theorem H.5. µk = ∑ s∈Z is(1− e− 12πis∆)k(1 + e− 12πis∆)d−k ˆ̃Φ(s) where ˆ̃Φ(s) = 1 4 ∫ 2 −2 Φ̃(t)e− 1 2πist dt = 1 4 ∫ 1 −1 Φ(t)(e− 1 2πist + (−1)se 12πist) dt = { 1 2 ∫ 1 −1 Φ(t) cos( 1 2πst) dt if s is even − i2 ∫ 1 −1 Φ(t) sin( 1 2πst) dt if s is odd denote the Fourier coefficients of Φ̃ on [−2, 2]. (Here i is the imaginary unit here, not an index). Recovering the values of Φ given the eigenvalues µ0, . . . , µd. Conversely, given eigenvalues µ0, . . . , µd corresponding to each monomial degree, we can recover the entries of the matrix K. Theorem H.6. For any x, y ∈ d with Hamming distance r, K(x, y) = Φ (( d 2 − r ) ∆ ) = d∑ k=0 Cd−r,rk µk, where Cd−r,rk = ∑ j=0(−1)k+j ( d−r j )( r k−j ) as in Eq. (7). Proof. Recall that for any S ⊆ [d], χS(x) = xS is the Fourier basis corresponding to S (see Eq. (3)). Then by converting from the Fourier basis to the regular basis, we get Φ (( d 2 − r ) ∆ ) = K(x, y) for any x, y ∈ d with Hamming distance r = d∑ k=0 µk ∑ |S|=k χS(x)χS(y). If x and y differ on a set T ⊆ [d], then we can simplify the inner sum Φ (( d 2 − r ) ∆ ) = d∑ k=0 µk ∑ |S|=k (−1)|S∩T | = d∑ k=0 µkC d−r,r k . Remark H.7. If we let T be the operator that sends µ• 7→ µ•+1, then we have the following operator expression Φ (( d 2 − r ) ∆ ) = [(1 + T )d−r(1− T )rµ]0 Remark H.8. The above shows that the matrix C = {Cd−r,rk }dk,r=0 satisfies C2 = 2dI. H.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will consider Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics. We review these results briefly below, as we will build on them to deduce spectral information of K on isotropic Gaussian distributions. Review: spherical harmonics and Gegenbauer polynomials. 
Spherical harmonics are L2 functions on Sd−1 that are eigenfunctions of the Laplace-Beltrami operator ∆Sd−1 of Sd−1. They can be described as the restriction of certain homogeneous polynomials in Rd to Sd−1. Denote byHd−1,(l) the space of spherical harmonics of degree l on sphere Sd−1. Then we have the orthogonal decomposition L2(Sd−1) ∼= ⊕∞ l=0Hd−1,(l). It is a standard fact that dimHd−1,(l) = ( d−1+l d−1 ) − ( d−3+l d−1 ) . There is a special class of spherical harmonics called zonal harmonics that can be represented as x 7→ p(〈x, y〉) for specific polynomials p : R→ R, and that possess a special reproducing property which we will describe shortly. Intuitively, the value of any zonal harmonics only depends on the “height” of x along some fixed axis y, so a typical zonal harmonics looks like Fig. 16. The polynomials pmust be one of the Gegenbauer polynomials. Gegenbauer polynomials {C(α)l (t)}∞l=0 are orthogonal polynomials with respect to the measure (1− t2)α− 12 on [−1, 1] (see Fig. 15 for examples), and here we adopt the convention that∫ 1 −1 C(α)n (t)C (α) l (t)(1− t 2)α− 1 2 dt = π21−2αΓ(n+ 2α) n!(n+ α)[Γ(α)]2 I(n = l). (14) Then for each (oriented) axis y ∈ Sd−1 and degree l, there is a unique zonal harmonic Zd−1,(l)y ∈ Hd−1,(l), Zd−1,(l)y (x) def = c−1d,lC ( d−22 ) l (〈x, y〉) for any x, y ∈ Sd−1, where cd,l = d−2d+2l−2 . Very importantly, they satisfy the following Fact H.9 (Reproducing property (Suetin)). For any f ∈ Hd−1,(m), E z∼Sd−1 Zd−1,(l)y (z)f(z) = f(y)I(l = m) E z∼Sd−1 Zd−1,(l)y (z)Z d−1,(m) x (z) = Z d−1,(l) y (x)I(l = m) = c −1 d,lC ( d−22 ) l (〈x, y〉)I(l = m) We also record a useful fact about Gegenbauer polynomials. Fact H.10 (Suetin). C (α) l (±1) = (±1) l ( l + 2α− 1 l ) By a result of Schoenberg (1942), we have the following eigendecomposition of K on the sphere. Theorem H.11 (Schoenberg). Suppose Φ : [−1, 1]→ R is in L2((1− t2) d−12 −1), so that it has the Gegenbauer expansion Φ(t) a.e. = ∞∑ l=0 alc −1 d,lC ( d−22 ) l (t). Then K has eigenspaces Hd−1,(l)√ d def = {f(x/ √ d) : f ∈ Hd−1,(l)} with corresponding eigenval- ues al.
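To make Thm H.11 concrete, the following sketch numerically estimates the Gegenbauer coefficients a_l of a given Φ by projecting it onto C_l^{((d−2)/2)} under the weight (1 − t^2)^{(d−3)/2}, as in Eq. (14). This is an illustrative snippet of ours rather than the released code of the paper; the function name and grid size are our own choices, and, as discussed in Section 3.5, this quadrature is only numerically sensible for moderate d.

import numpy as np
from scipy.special import eval_gegenbauer

def sphere_eigencoeffs(Phi, d, l_max, n_grid=20001):
    # Estimate a_l in Phi(t) = sum_l a_l * c_{d,l}^{-1} * C_l^{(alpha)}(t), alpha = (d-2)/2,
    # by projecting Phi onto C_l^{(alpha)} under the weight (1 - t^2)^{alpha - 1/2}.
    alpha = (d - 2) / 2.0
    t = np.linspace(-1.0, 1.0, n_grid)
    dt = t[1] - t[0]
    w = (1.0 - t ** 2) ** (alpha - 0.5)   # Gegenbauer weight; vanishes at t = +-1 when d >= 4
    a = []
    for l in range(l_max + 1):
        C = eval_gegenbauer(l, alpha, t)
        c_dl = (d - 2.0) / (d + 2 * l - 2.0)
        num = np.sum(Phi(t) * C * w) * dt  # integral of Phi * C_l * weight
        den = np.sum(C * C * w) * dt       # self-norm of C_l, cf. Eq. (14)
        a.append(c_dl * num / den)
    return a

# Example: Phi(t) = exp(t) on the d = 10 sphere; the coefficients decay quickly with l.
print(sphere_eigencoeffs(np.exp, d=10, l_max=5))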
1. What is the main contribution of the paper regarding neural networks' conjugate kernel and neural tangent kernel?
2. What are the strengths and weaknesses of the paper's analysis on boolean cube?
3. Do you have any concerns about the paper's claims and conclusions, particularly regarding the simplicity bias theorem?
4. How does the reviewer assess the significance and reliability of the experimental results provided in the paper?
5. What are the limitations of the paper's approach and conclusions compared to real-space applications?
Review
Review Aiming to resolve the question of whether and why deep networks are biased towards simple functions, this paper gives a spectral analysis of neural networks' conjugate kernel (CK) and neural tangent kernel (NTK) on the boolean cube. The eigenfunctions are identified and the eigenvalues are shown to be computable in polynomial time. Another main contribution of this paper is showing that the simplicity bias exists at least in a weak sense. I believe that this paper should be weakly rejected because it makes more claims than it can support, in that the analysis does not carry over to real space, and the authors do not really show the simplicity bias. The following are my detailed comments.

First, the whole analysis is based on the boolean cube. Although the paper has shown empirically that in high dimension the uniform binary distribution is close enough to the uniform sphere distribution, this does not suffice to substitute the boolean cube for the sphere in real space. The spectral analysis in this paper relies heavily on working on the boolean cube. The boolean cube is finite, which guarantees that any inner-product kernel function $K(x,y) = \Phi (<x,y>)$ can be diagonalized by finitely many monomial functions, and there are only O(d) distinct eigenvalues, which enables efficient computation. These techniques are not easy to transfer to real space. The experiments show that the first five eigenvalues on the boolean cube, the sphere, and the Gaussian are close, but the key problems here are, first, that in practice the dimension $d$ could be smaller and, second, that the sphere and the Gaussian have infinitely many eigenvalues while the boolean cube has $2^d$ eigenvalues. The experiments cannot really justify that all eigenvalues are close (only the first several are shown), not to mention the tail eigenvalues beyond the first $2^d$.

Even if we assume that the boolean cube is a reasonable choice, we should notice that the goal of computing eigenvalues is to eventually show the inductive bias toward 'simple functions'. However, the authors failed to show it, at least from the following perspectives: 1) This paper did not show the trend of eigenvalues, but only a weak version, for example, $\mu_{2k-2} > \mu_{2k}$. In the limiting case, it is more reasonable to fix the dimension $d$ rather than the degree $k$. 2) Working on the boolean cube leads to limited complexity. The most complicated basis function is restricted to $\mathcal{X}_S$ where $S = \{1, 2, \dots, d\}$. So the weak simplicity bias theorem actually only describes the relation among finitely many eigenvalues. 3) No optimization arguments appear in this paper. Based on the spectral analysis alone, it is not rigorous enough to claim that the networks are biased toward simple functions, given that the target function consists of simple multilinear monomial functions.

Since the boolean spectrum is not a reliable measure, the further experiments under such a measure are therefore in doubt. To summarize, this paper definitely contains some rigorous analysis, which I appreciate, but it makes some claims that are not verified. More importantly, the boolean cube is not the appropriate domain, as it is hard to generalize to real space, and the simplicity bias theorem in this paper is to some extent weak. Therefore, I suggest rejecting this paper in its current form.
ICLR
Title A Fine-Grained Spectral Perspective on Neural Networks Abstract Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? How to set the range for learning rate tuning? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the Conjugate Kernel, CK, (also called the Neural Network-Gaussian Process Kernel), and the Neural Tangent Kernel, NTK. Roughly, the CK and the NTK tell us respectively “what a network looks like at initialization” and “what a network looks like during and after training.” Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we lend novel insights into the questions put forth at the beginning, and we verify these insights by extensive experiments of neural networks. We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at github.com/jxVmnLgedVwv6mNcGCBy/NNspectra. 1 INTRODUCTION Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community. Recently, Valle-Pérez et al. (2018) observed that neural networks have a certain “simplicity bias” and proposed this as a solution to the generalization question. One of the ways with which they argued that this bias exists is the following experiment: they drew a large sample of boolean functions by randomly initializing neural networks and thresholding the output. They observed that there is a bias toward some "simple" functions which get sampled disproportionately more often. However, their experiments were only done for relu networks. Can one expect this “simplicity bias” to hold universally, for any architecture? A priori, this seems difficult, as the nonlinear nature seems to present an obstacle in reasoning about the distribution of random networks. However, this question turns out to be more easily treated if we allow the width to go to infinity. A long line of works starting with Neal (1995) and extended recently by Lee et al. (2018); Novak et al. (2018); Yang (2019) have shown that randomly initialized, infinite-width networks are distributed as Gaussian processes. These Gaussian processes also describe finite width random networks well (Valle-Pérez et al., 2018). We will refer to the corresponding kernels as the Conjugate Kernels (CK), following the terminology of Daniely et al. (2016). Given the CK K, the simplicity bias of a wide neural network can be read off quickly from the spectrum of K: If the largest eigenvalue of K accounts for most of trK, then a typical random network looks like a function from the top eigenspace of K. In this paper, we will use this spectral perspective to probe not only the simplicity bias, but more generally, questions regarding how hyperparameters affect the generalization of neural networks. Via the usual connection between Gaussian processes and linear models with features, the CK can be thought of as the kernel matrix associated to training only the last layer of a wide randomly initialized network. 
It is a remarkable recent advance (Jacot et al., 2018; Allen-Zhu et al., 2018a;c; Du et al., 2018) that, under a certain regime, a wide neural network of any depth evolves like a linear model even when training all parameters. The associated kernel is call the Neural Tangent Kernel, which is typically different from CK. While its theory was initially derived in the infinite width setting, Lee et al. (2019) confirmed with extensive experiment that this limit is predictive of finite width neural networks as well. Thus, just as the CK reveals information about what a network looks like at initialization, NTK reveals information about what a network looks like after training. As such, if we can understand how hyperparameters change the NTK, we can also hope to understand how they affect the performance of the corresponding finite-width network. Our Contributions In this paper, in addition to showing that the simplicity bias is not universal, we will attempt a first step at understanding the effects of the hyperparameters on generalization from a spectral perspective. At the foundation is a spectral theory of the CK and the NTK on the boolean cube. In Section 3, we show that these kernels, as integral operators on functions over the boolean cube, are diagonalized by the natural Fourier basis, echoing similar results for over the sphere (Smola et al., 2001). We also partially diagonalize the kernels over standard Gaussian, and show that, as expected, the kernels over the different distributions (boolean cube, sphere, standard Gaussian) behave very similarly in high dimensions. However, the spectrum is much easier to compute over the boolean cube: while the sphere and Gaussian eigenvalues would require integration against a kind of polynomials known as the Gegenbauer polynomials, the boolean ones only require calculating a linear combination of a small number of terms. For this reason, in the rest of the paper we focus on analyzing the eigenvalues over the boolean cube. Just as the usual Fourier basis over R has a notion of frequency that can be interpreted as a measure of complexity, so does the boolean Fourier basis (this is just the degree; see Section 3.1). While not perfect, we adopt this natural notion of complexity in this work; a “simple” function is then one that is well approximated by “low frequencies.” This spectral perspective immediately yields that the simplicity bias is not universal (Section 4). In particular, while it seems to hold more or less for relu networks, for sigmoidal networks, the simplicity bias can be made arbitrarily weak by changing the weight variance and the depth. In the extreme case, the random function obtained from sampling a deep erf network with large weights is distributed like a “white noise.” However, there is a very weak sense in which the simplicity bias does hold: the eigenvalues of more “complex” eigenspaces cannot be bigger than those of less “complex” eigenspaces (Thm 4.1). Next, we examine how hyperparameters affect the performance of neural networks through the lens of NTK and its spectrum. To do so, we first need to understand the simpler question of how a kernel affects the accuracy of the function learned by kernel regression. A coarse-grained theory, concerned with big-O asymptotics, exists from classical kernel literature (Yao et al., 2007; Raskutti et al., 2013; Wei et al.; Lin and Rosasco; Schölkopf and Smola, 2002). However, the fine-grained details, required for discerning the effect of hyperparameters, have been much less studied. 
We make a first attempt at a heuristic, fractional variance (i.e. what fraction of the trace of the kernel does an eigenspace contribute), for understanding how a minute change in kernel effects a change in performance. Intuitively, if an eigenspace has very large fractional variance, so that it accounts for most of the trace, then a ground truth function from this eigenspace should be very easy to learn. Using this heuristic, we make two predictions about neural networks, motivated by observations in the spectra of NTK and CK, and verify them with extensive experiments. • Deeper networks learn more complex features, but excess depth can be detrimental as well. Spectrally, depth can increase fractional variance of an eigenspace, but past an optimal depth, it will also decrease it. (Section 5) Thus, deeper is not always better. • Training all layers is better than training just the last layer when it comes to more complex features, but the opposite is true for simpler features. Spectrally, fractional variances of more “complex” eigenspaces for the NTK are larger than the correponding quantities of the CK. (Section 6) Finally, we use our spectral theory to predict the maximal nondiverging learning rate (“max learning rate”) of SGD (Section 7). In general, we will not only verify our theory with experiments on the theoretically interesting distributions, i.e. uniform measures over the boolean cube and the sphere, or the standard Gaussian, but also confirm these findings on real data like MNIST and CIFAR10 1. 1The code for computing the eigenvalues and for reproducing the plots of this paper is available at github. com/jxVmnLgedVwv6mNcGCBy/NNspectra, which will be open sourced upon publication. For space concerns, we review relevant literature along the flow of the main text, and relegate a more complete discussion of the related research landscape in Appendix A. 2 KERNELS ASSOCIATED TO NEURAL NETWORKS As mentioned in the introduction, we now know several kernels associated to infinite width, randomly initialized neural networks. The most prominent of these are the neural tangent kernel (NTK) (Jacot et al., 2018) and the conjugate kernel (CK) (Daniely et al., 2016), which is also called the NNGP kernel (Lee et al., 2018). We briefly review them below. First we introduce the following notation that we will repeatedly use. Definition 2.1. For φ : R→ R, write Vφ for the function that takes a PSD (positive semidefinite) kernel function to a PSD kernel of the same domain by the formula Vφ(K)(x, x ′) = E f∼N (0,K) φ(f(x))φ(f(x′)). Conjugate Kernel Neural networks are commonly thought of as learning a high-quality embedding of inputs to the latent space represented by the network’s last hidden layer, and then using its final linear layer to read out a classification given the embedding. The conjugate kernel is just the kernel associated to the embedding induced by a random initialization of the neural network. Consider an MLP with widths {nl}l, weight matrices {W l ∈ Rn l×nl−1}l, and biases {bl ∈ Rn l}l, l = 1, . . . , L. For simplicity of exposition, in this paper, we will only consider scalar output nL = 1. Suppose it is parametrized by the NTK parametrization, i.e. its computation is given recursively as h1(x) = σw√ n0 W 1x+ σbb 1 and hl(x) = σw√ nl−1 W lφ(hl−1(x)) + σbb l (MLP) with some hyperparameters σw, σb that are fixed throughout training2. At initialization time, suppose W lαβ , b l α ∼ N (0, 1) for each α ∈ [nl], β ∈ [nl−1]. 
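For concreteness, here is a minimal NumPy sketch of the forward pass in the NTK parametrization of Eq. (MLP), with all entries of W^l and b^l drawn i.i.d. from N(0, 1) at initialization. This is our own illustrative code (the function names and the default choice of relu and σ_w^2 = 2, σ_b^2 = 0 are ours), not the authors' implementation.

import numpy as np

def init_params(widths, seed=0):
    # widths = [n0, n1, ..., nL]; W^l has shape (n_l, n_{l-1}) and b^l has shape (n_l,).
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((widths[l], widths[l - 1])), rng.standard_normal(widths[l]))
            for l in range(1, len(widths))]

def relu(z):
    return np.maximum(z, 0.0)

def mlp_forward(params, x, phi=relu, sigma_w=np.sqrt(2.0), sigma_b=0.0):
    # h^1 = (sigma_w / sqrt(n0)) W^1 x + sigma_b b^1,
    # h^l = (sigma_w / sqrt(n_{l-1})) W^l phi(h^{l-1}) + sigma_b b^l, as in Eq. (MLP).
    a = x
    for W, b in params:
        h = (sigma_w / np.sqrt(W.shape[1])) * (W @ a) + sigma_b * b
        a = phi(h)
    return h  # pre-activation output h^L(x); a scalar when n_L = 1

params = init_params([128, 1000, 1000, 1])   # d = 128 inputs, two hidden layers, scalar output
x = np.random.choice([-1.0, 1.0], size=128)  # a point of the boolean cube
print(mlp_forward(params, x))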
It can be shown that, for each α ∈ [nl], hlα is a Gaussian process with zero mean and kernel function Σl in the limit as all hidden layers become infinitely wide (nl →∞, l = 1, . . . , L− 1), where Σl is defined inductively on l as Σ1(x, x′) def = σ2w(n 0)−1〈x, x′〉+ σ2b , Σl def = σ2wVφ(Σ l−1) + σ2b (CK) The kernel ΣL corresponding the the last layer L is the network’s conjugate kernel, and the associated Gaussian process limit is the reason for its alternative name Neural Network-Gaussian process kernel. In short, if we were to train a linear model with features given by the embedding x 7→ hL−1(x) when the network parameters are randomly sampled as above, then the CK is the kernel of this linear model. See Daniely et al. (2016); Lee et al. (2018) and Appendix F for more details. Neural Tangent Kernel On the other hand, the NTK corresponds to training the entire model instead of just the last layer. Intuitively, if we let θ be the entire set of parameters {W l}l ∪ {bl}l of Eq. (MLP), then for θ close to its initialized value θ0, we expect hL(x; θ)− hL(x; θ0) ≈ 〈∇θhL(x; θ0), θ − θ0〉 via a naive first-order Taylor expansion. In other words, hL(x; θ)− hL(x; θ0) behaves like a linear model with feature of x given by the gradient taken w.r.t. the initial network, ∇θhL(x; θ0), and the weights of this linear model are the deviation θ− θ0 of θ from its initial value. It turns out that, in the limit as all hidden layer widths tend to infinity, this intuition is correct (Jacot et al., 2018; Lee et al., 2018; Yang, 2019), and the following inductive formula computes the corresponding infinite-width kernel of this linear model: Θ1 def = Σ1, Θl(x, x′) def = Σl(x, x′) + σ2wΘ l−1(x, x′)Vφ′(Σ l−1)(x, x′). (NTK) Computing CK and NTK While in general, computing Vφ and Vφ′ requires evaluating a multivariate Gaussian expectation, in specific cases, such as when φ = relu or erf , there exists explicit, efficient formulas that only require pointwise evaluation of some simple functions (see Facts F.1 and F.2). This allows us to evaluate CK and NTK on a set X of inputs in only time O(|X |2L). 2SGD with learning rate α in this parametrization is roughly equivalent to SGD with learning rate α/width in the standard parametrization with Glorot initialization; see Lee et al. (2018) What Do the Spectra of CK and NTK Tell Us? In summary, the CK governs the distribution of a randomly initialized neural network and also the properties of training only the last layer of a network, while the NTK governs the dynamics of training (all parameters of) a neural network. A study of their spectra thus informs us of the “implicit prior” of a randomly initialized neural network as well as the “implicit bias” of GD in the context of training neural networks. In regards to the implicit prior at initialization, we know from Lee et al. (2018) that a randomly initialized network as in Eq. (MLP) is distributed as a Gaussian process N (0,K), where K is the corresponding CK, in the infinite-width limit. If we have the eigendecomposition K = ∑ i≥1 λiui ⊗ ui (1) with eigenvalues λi in decreasing order and corresponding eigenfunctions ui, then each sample from this GP can be obtained as ∑ i≥1 √ λiωiui, ωi ∼ N (0, 1). If, for example, λ1 ∑ i≥2 λi, then a typical sample function is just a very small perturbation of u1. We will see that for relu, this is indeed the case (Section 4), and this explains the “simplicity bias” in relu networks found by Valle-Pérez et al. (2018). 
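As an illustration of this sampling formula (again our own sketch, not the paper's code), given a kernel matrix K over a finite set of inputs, one can draw GP samples via the eigendecomposition and, if desired, threshold them to ±1 to reproduce the kind of random-boolean-function sampling discussed in Section 4. The toy kernel Φ(c) = exp(c) used in the example is an arbitrary stand-in for a neural kernel.

import itertools
import numpy as np

def sample_gp(K, n_samples=1, seed=0):
    # Draw samples f ~ N(0, K) using K = sum_i lambda_i u_i u_i^T,
    # i.e. f = sum_i sqrt(lambda_i) * omega_i * u_i with omega_i ~ N(0, 1).
    rng = np.random.default_rng(seed)
    lam, U = np.linalg.eigh(K)                  # eigenvalues in ascending order
    lam = np.clip(lam, 0.0, None)               # clip tiny negative values from round-off
    omega = rng.standard_normal((len(lam), n_samples))
    return U @ (np.sqrt(lam)[:, None] * omega)  # each column is one sample of f on the input set

# Example: a toy kernel Phi(c) = exp(c) on the 2^7 points of the boolean cube {+-1}^7.
X = np.array(list(itertools.product([-1.0, 1.0], repeat=7)))
K = np.exp(X @ X.T / 7.0)
f = sample_gp(K, n_samples=3)
boolean_functions = np.sign(f)  # thresholded samples, as in the experiment of Fig. 1(a)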
Training the last layer of a randomly initialized network via full batch gradient descent for an infinite amount of time corresponds to Gaussian process inference with kernel K (Lee et al., 2018; 2019). A similar intuition holds for NTK: training all parameters of the network (Eq. (MLP)) for an infinite amount of time yields the mean prediction of the GPN (0,NTK) in expectation; see Lee et al. (2019) and Appendix F.4 for more discussion. Thus, the more the GP prior (governed by the CK or the NTK) is consistent with the ground truth function f∗, the more we expect the Gaussian process inference and GD training to generalize well. We can measure this consistency in the “alignment” between the eigenvalues λi and the squared coefficients a2i of f ∗’s expansion in the {ui}i basis. The former can be interpreted as the expected magnitude (squared) of the ui-component of a sample f ∼ N (0,K), and the latter can be interpreted as the actual magnitude squared of such component of f∗. In this paper, we will investigate an even cleaner setting where f∗ = ui is an eigenfunction. Thus we would hope to use a kernel whose ith eigenvalue λi is as large as possible. Neural Kernels From the forms of the equation Eqs. (CK) and (NTK) and the fact that Vφ(K)(x, x ′) only depends on K(x, x),K(x, x′), and K(x′, x′), we see that CK or NTK of MLPs takes the form K(x, y) = Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) (2) for some function Φ : R3 → R. We will refer to this kind of kernel as Neural Kernel in this paper. Kernels as Integral Operators We will consider input spaces of various forms X ⊆ Rd equipped with some probability measure. Then a kernel function K acts as an integral operator on functions f ∈ L2(X ) by Kf(x) = (Kf)(x) = E y∼X K(x, y)f(y). We will use the “juxtaposition syntax” Kf to denote this application of the integral operator. 3 Under certain assumptions, it then makes sense to speak of the eigenvalues and eigenfunctions of the integral operator K. While we will appeal to an intuitive understanding of eigenvalues and eigenfunctions in the main text below, we include a more formal discussion of Hilbert-Schmidt operators and their spectral theory in Appendix G for completeness. In the next section, we investigate the eigendecomposition of neural kernels as integral operators over different distributions. 3In cases when X is finite, K can be also thought of as a big matrix and f as a vector — but do not confuse Kf with their multiplication! If we use · to denote matrix multiplication, then the operator application Kf is the same as the matrix multiplication K ·D · f where D is the diagonal matrix encoding the probability values of each point in X . 3 THE SPECTRA OF NEURAL KERNELS 3.1 BOOLEAN CUBE We first consider a neural kernelK on the boolean cubeX = ddef= {±1}d, equipped with the uniform measure. In this case, since each x ∈ X has the same norm, K(x, y) = Φ ( 〈x,y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) effectively only depends on 〈x, y〉, so we will treat Φ as a single variate function in this section, Φ(c) = Φ(c, 1, 1). Brief review of basic Fourier analysis on the boolean cube d (O’Donnell (2014)). The space of real functions on d forms a 2d-dimensional space. Any such function has a unique expansion into a multilinear polynomial (polynomials whose monomials do not contain xpi , p ≥ 2, of any variable xi). For example, the majority function over 3 bits has the following unique multilinear expansion maj3 : 3 → 1, maj3(x1, x2, x3) = 1 2 (x1 + x2 + x3 − x1x2x3). 
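As a quick sanity check of this expansion (a small snippet of our own, not from the paper), one can verify that the multilinear polynomial above agrees with maj3 on all 8 sign patterns, and recover a coefficient by averaging the function against the corresponding monomial:

import itertools
import numpy as np

cube3 = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
maj = np.sign(cube3.sum(axis=1))                               # the majority function on 3 bits
expansion = 0.5 * (cube3[:, 0] + cube3[:, 1] + cube3[:, 2]
                   - cube3[:, 0] * cube3[:, 1] * cube3[:, 2])  # (x1 + x2 + x3 - x1 x2 x3) / 2
assert np.allclose(maj, expansion)

# The coefficient of the monomial x_S is the average of f(x) * x_S over the cube; for S = {1, 2, 3}:
print(np.mean(maj * cube3.prod(axis=1)))  # prints -0.5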
In the language of Fourier analysis, the 2d multilinear monomial functions χS(x) def = xS def = ∏ i∈S xi, for each S ⊆ [d] (3) form a Fourier basis of the function space L2( d) = {f : d → R}, in the sense that their inner products satisfy E x∼ d χS(x)χT (x) = I(S = T ). Thus, any function f : d → R can be always written as f(x) = ∑ S⊆[d] f̂(S)χX(x) for a unique set of coefficients {f̂(S)}S⊆[d]. It turns out that K is always diagonalized by this Fourier basis {χS}S⊆[d]. Theorem 3.1. On the d-dimensional boolean cube d, for every S ⊆ [d], χS is an eigenfunction of K with eigenvalue µ|S| def = E x∈ d xSK(x,1) = E x∈ d xSΦ (∑ i xi/d ) , (4) where 1 = (1, . . . , 1) ∈ d. This definition of µ|S| does not depend on the choice S, only on the cardinality of S. These are all of the eigenfunctions of K by dimensionality considerations.4 Define T∆ to be the shift operator on functions over [−1, 1] that sends Φ(·) to Φ(· −∆). Then we can re-express the eigenvalue as follows. Lemma 3.2. With µk as in Thm 3.1, µk = 2 −d(I − T∆)k(I + T∆)d−kΦ(1) (5) = 2−d d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) (6) where Cd−k,kr def = ∑ j=0 (−1)r+j ( d− k j )( k r − j ) . (7) Eq. (5) will be important for computational purposes, and we will come back to discuss this more in Section 3.5. It also turns out µk affords a pretty expression via the Fourier series coefficients of Φ. As this is not essential to the main text, we relegate its exposition to Appendix H.1. 4Readers familiar with boolean Fourier analysis may be reminded of the noise operator Tρ, ρ ≤ 1 (O’Donnell, 2014, Defn 2.46). In the language of this work, Tρ is a neural kernel with eigenvalues µk = ρk. 3.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will treat Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics, and the eigenvalues are given by the coefficients of Φ against a system of orthogonal polynomials called Gegenbuaer polynomials. We relegate a complete review of this topic to Appendix H.2. 3.3 ISOTROPIC GAUSSIAN Now let’s consider X = Rd equipped with standard isotropic Gaussian N (0, I), so that K behaves like Kf(x) = E y∼N (0,I) K(x, y)f(y) = E y∼N (0,I) Φ ( 〈x, y〉 ‖x‖‖y‖ , ‖x‖2 d , ‖y‖2 d ) f(y) for any f ∈ L2(N (0, I)). In contrast to the previous two sections, K will essentially depend on the effect of the norms ‖x‖ and ‖y‖ on Φ. Nevertheless, because an isotropic Gaussian vector can be obtained by sampling its direction uniformly from the sphere and its magnitude from a chi distribution, K can still be partially diagonalized into a sum of products between spherical harmonics and kernels on R equipped with a chi distribution (Thm H.14). In certain cases, we can obtain complete eigendecompositions, for example when Φ is positive homogeneous. See Appendix H.3 for more details. 3.4 KERNEL IS SAME OVER BOOLEAN CUBE, SPHERE, OR GAUSSIAN WHEN d 1 The reason we have curtailed a detailed discussion of neural kernels on the sphere and on the standard Gaussian is because, in high dimension, the kernel behaves the same under these distributions as under uniform distribution over the boolean cube. Indeed, by intuition along the lines of the central limit theorem, we expect that uniform distribution over a high dimension boolean cube should approximate high dimensional standard Gaussian. 
Similarly, by concentration of measure, most of the mass of a Gaussian is concentrated around a thin shell of radius √ d. Thus, morally, we expect the same kernel function K induces approximately the same integral operator on these three distributions in high dimension, and as such, their eigenvalues should also approximately coincide. We verify empirically and theoretically this is indeed the case in Appendix H.4. 3.5 COMPUTING THE EIGENVALUES As the eigenvalues of K over the different distributions are very close, we will focus in the rest of this paper on eigenvalues over the boolean cube. This has the additional benefit of being much easier to compute. Each eigenvalue over the sphere and the standard Gaussian requires an integration of Φ against a Gegenbauer polynomial. In high dimension d, these Gegenbauer polynomials varies wildly in a sinusoidal fashion, and blows up toward the boundary (see Fig. 15 in the Appendix). As such, it is difficult to obtain a numerically stable estimate of this integral in an efficient manner when d is large. In contrast, we have multiple ways of computing boolean cube eigenvalues, via Eqs. (5) and (6). In either case, we just take some linear combination of the values of Φ at a grid of points on [−1, 1], spaced apart by ∆ = 2/d. While the coefficients Cd−k,kr (defined in Eq. (7)) are relatively efficient to compute, the change in the sign of Cd−k,kr makes this procedure numerically unstable for large d. Instead, we use Eq. (5) to isolate the alternating part to evaluate in a numerically stable way: Since µk = ( I+T∆ 2 )d−k ( I−T∆ 2 )k Φ(1), we can evaluate Φ̃ def= ( I−T∆ 2 )k Φ via k finite differences, and then compute ( I + T∆ 2 )d−k Φ̃(1) = 1 2d−k d−k∑ r=0 ( d− k r ) Φ̃(1− r∆). (8) When Φ arises from the CK or the NTK of an MLP, all derivatives of Φ at 0 are nonnegative (Thm I.3). Thus intuitively, the finite difference Φ̃ should be also all nonnegative, and this sum can be evaluated without worry about floating point errors from cancellation of large terms. A slightly more clever way to improve the numerical stability when 2k ≤ d is to note that (I + T∆)d−k (I − T∆)k Φ(1) = (I + T∆)d−2k ( I − T 2∆ )k Φ(1) = (I + T∆)d−2k (I − T2∆)k Φ(1). So an improved algorithm is to first compute the kth finite difference (I − T2∆)k with the larger step size 2∆, then compute the sum (I + T∆)d−2k as in Eq. (8). 4 CLARIFYING THE “SIMPLICITY BIAS” OF RANDOM NEURAL NETWORKS As mentioned in the introduction, Valle-Pérez et al. (2018) claims that neural networks are biased toward simple functions. We show that this phenomenon depends crucially on the nonlinearity, the sampling variances, and the depth of the network. In Fig. 1(a), we have repeated their experiment for 104 random functions obtained by sampling relu neural networks with 2 hidden layers, 40 neurons each, following Valle-Pérez et al. (2018)’s architectural choices5. We also do the same for erf networks of the same depth and width, varying as well the sampling variances of the weights and biases, as shown in the legend. As discussed in Valle-Pérez et al. (2018), for relu, there is indeed this bias, where a single function gets sampled more than 10% of the time. However, for erf, as we increase σ2w, we see this bias disappear, and every function in the sample gets sampled only once. This phenomenon can be explained by looking at the eigendecomposition of the CK, which is the Gaussian process kernel of the distribution of the random networks as their hidden widths tend to infinity. In Fig. 
1(b), we plot the normalized eigenvalues {µk/ ∑7 i=0 ( 7 i ) µi}7k=0 for the CKs corresponding to the networks sampled in Fig. 1(a). Immediately, we see that for relu and σ2w = σ 2 b = 2, the degree 0 eigenspace, corresponding to constant functions, accounts for more than 80% of the variance. This means that a typical infinite-width relu network of 2 layers is expected to be almost constant, and this should be even more true after we threshold the network to be a boolean function. On the other hand, for erf and σb = 0, the even degree µks all vanish, and most of the variance comes from degree 1 components (i.e. linear functions). This concentration in degree 1 also lessens as σ2w increases. But because this variance is spread across a dimension 7 eigenspace, we don’t see duplicate function samples nearly as much as in the relu case. As σw increases, we also see the eigenvalues become more equally distributed, which corresponds to the flattening of 5Valle-Pérez et al. (2018) actually performed their experiments over the {0, 1}7 cube, not the {±1}7 cube we are using here. This does not affect our conclusion. See Appendix J for more discussion the probability-vs-rank curve in Fig. 1(a). Finally, we observe that a 32-layer erf network with σ2w = 4 has all its nonzero eigenvalues (associated to odd degrees) all equal (see points marked by ∗ in Fig. 1(b)). This means that its distribution is a "white noise" on the space of odd functions, and the distribution of boolean functions obtained by thresholding the Gaussian process samples is the uniform distribution on odd functions. This is the complete lack of simplicity bias modulo the oddness constraint. However, from the spectral perspective, there is a weak sense in which a simplicity bias holds for all neural network-induced CKs and NTKs. Theorem 4.1 (Weak Spectral Simplicity Bias). Let K be the CK or NTK of an MLP on a boolean cube d. Then the eigenvalues µk, k = 0, . . . , d, satisfy µ0 ≥ µ2 ≥ · · · ≥ µ2k ≥ · · · , µ1 ≥ µ3 ≥ · · · ≥ µ2k+1 ≥ · · · . (9) Even though it’s not true that the fraction of variance contributed by the degree k eigenspace is decreasing with k, the eigenvalue themselves will be in a nonincreasing pattern across even and odd degrees. In fact, if we fix k and let d→∞, then we can show that (Thm I.6) µk = Θ(d −k). Of course, as we have seen, this is a very weak sense of simplicity bias, as it doesn’t prevent “white noise” behavior as in the case of erf CK with large σ2w and large depth. 5 DEEPER NETWORKS LEARN MORE COMPLEX FEATURES In the rest of this work, we compute the eigenvalues µk over the 128-dimensional boolean cube ( d, with d = 128) for a large number of different hyperparameters, and analyze how the latter affect the former. We vary the degree k ∈ [0, 8], the nonlinearity between relu and erf, the depth (number of hidden layers) from 1 to 128, and σ2b ∈ [0, 4]. We fix σ2w = 2 for relu kernels, but additionally vary σ2w ∈ [1, 5] for erf kernels. Comprehensive contour plots of how these hyperparameters affect the kernels are included in Appendix D, but in the main text we summarize several trends we see. We will primarily measure the change in the spectrum by the degree k fractional variance, which is just degree k fractional variance def= ( d k ) µk∑d i=0 ( d i ) µi . This terminology comes from the fact that, if we were to sample a function f from a Gaussian process with kernel K, then we expect that r% of the total variance of f comes from degree k components of f , where r% is the degree k fractional variance. 
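To make these quantities concrete, the following sketch (our own illustration, not the authors' released code) computes µ_k on the d-dimensional boolean cube via the numerically stable finite-difference scheme of Section 3.5, and then the degree k fractional variance. As the example Φ we use the CK of a 1-hidden-layer relu MLP with σ_w^2 = 2 and σ_b^2 = 0, obtained by combining Eq. (CK) with Fact F.1; the denominator of the fractional variance is simplified using the trace identity Σ_k (d choose k) µ_k = E_x K(x, x) = Φ(1) from Thm 3.1.

import numpy as np
from math import comb

def mu(Phi, d, k):
    # mu_k = ((I + T_Delta)/2)^(d-2k) ((I - T_{2 Delta})/4)^k Phi(1), valid for 2k <= d (Section 3.5).
    assert 2 * k <= d
    delta = 2.0 / d
    f = np.array([Phi(1.0 - r * delta) for r in range(d + 1)])  # Phi on the grid 1, 1 - Delta, ..., -1
    for _ in range(k):
        f = (f[:-2] - f[2:]) / 4.0   # apply (I - T_{2 Delta}) / 4; note T_Delta Phi(t) = Phi(t - Delta)
    for _ in range(d - 2 * k):
        f = (f[:-1] + f[1:]) / 2.0   # apply (I + T_Delta) / 2
    return f[0]                      # the surviving entry is the value at t = 1

def relu_ck_1layer(c, sw2=2.0):
    # CK Phi(c) of a 1-hidden-layer relu MLP with sigma_w^2 = sw2, sigma_b^2 = 0:
    # Sigma^1(c) = sw2 * c, Sigma^2(c) = sw2 * V_relu(Sigma^1)(c), per Eq. (CK) and Fact F.1.
    c = np.clip(c, -1.0, 1.0)
    return sw2 * sw2 * (np.sqrt(1.0 - c ** 2) + (np.pi - np.arccos(c)) * c) / (2.0 * np.pi)

d = 128
mus = [mu(relu_ck_1layer, d, k) for k in range(9)]
frac = [comb(d, k) * m / relu_ck_1layer(1.0) for k, m in enumerate(mus)]  # degree k fractional variance
print(frac)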
If we were to try to learn a homogeneous degree-k polynomial using a kernel K, intuitively we should try to choose K such that its µk is maximized, relative to other eigenvalues. Fig. 3(a) shows that this is indeed the case even with neural networks: over a large number of different hyperparameter settings, degree k fractional variance is inversely related to the validation loss incurred when learning a degree k polynomial. However, this plot also shows that there does not seem like a precise, clean relationship between fractional variance and validation loss. Obtaining a better measure for predicting generalization is left for future work. Before we continue, we remark that the fractional variance of a fixed degree k converges to a fixed value as the input dimension d→∞: Theorem 5.1 (Asymptotic Fractional Variance). Let K be the CK or NTK of an MLP on a boolean cube d. ThenK can be expressed asK(x, y) = Φ(〈x, y〉/d) for some analytic function Φ : R→ R. If we fix k and let the input dimension d→∞, then the fractional variance of degree k converges to (k!)−1Φ(k)(0)/Φ(1) = (k!)−1Φ(k)(0)∑ j≥0(j!) −1Φ(j)(0) where Φ(k) denotes the kth derivative of Φ. For the fractional variances we compute in this paper, their values at d = 128 are already very close to their d→∞ limit, so we focus on the d = 128 case experimentally. If K were to be the CK or NTK of a relu or erf MLP, then we find that for higher k, the depth of the network helps increase the degree k fractional variance. In Fig. 2(a) and (b), we plot, for each degree k, the depth that (with some combination of other hyperparameters like σ2b ) achieves this maximum, for respectively relu and erf kernels. Clearly, the maximizing depths are increasing with k for relu, and also for erf when considering either odd k or even k only. The slightly differing behavior between even and odd k is expected, as seen in the form of Thm 4.1. Note the different scales of y-axes for relu and erf — the depth effect is much stronger for erf than relu. For relu NTK and CK, σ2b = 0 maximizes fractional variance in general, and the same holds for erf NTK and CK in the odd degrees (see Appendix D). In Fig. 2(c) and Fig. 2(d) we give a more fine-grained look at the σ2b = 0 slice, via heatmaps of fractional variance against degree and depth. Brighter color indicates higher variance, and we see the optimal depth for each degree k clearly increases with k for relu NTK, and likewise for odd degrees of erf NTK. However, note that as k increases, the difference between the maximal fractional variance and those slightly suboptimal becomes smaller and smaller, reflected by suppressed range of color moving to the right. The heatmaps for relu and erf CKs look similar and are omitted. We verify this increase of optimal depth with degree in Fig. 3(b). There we have trained relu networks of varying depth against a ground truth multilinear polynomial of varying degree. We see clearly that the optimal depth is increasing with degree. We also verify this phenomenon when the input distribution changes to the standard Gaussian or the uniform distribution over the sphere √ dSd−1; see Fig. 6. Note that implicit in our results here is a highly nontrivial observation: Past some point (the optimal depth), high depth can be detrimental to the performance of the network, beyond just the difficulty to train, and this detriment can already be seen in the corresponding NTK or CK. In particular, it’s not true that the optimal depth is infinite. 
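The ground-truth multilinear polynomials used in these experiments are specified in Appendix E.1 (Eq. (11)). The sketch below gives one possible reading of that description; the exact monomial and coefficient sampling is not fully pinned down in the text, so the scheme here is an assumption. Following the text, the coefficients are normalized so that their squares sum to 1.

import numpy as np

def sample_ground_truth(d=128, degree=3, n_monomials=10, seed=0):
    # f*(x) = sum_i a_i m_i(x) with sum_i a_i^2 = 1, each m_i a degree-`degree`
    # monomial x_{j1} * ... * x_{jk} over the cube {-1, +1}^d
    rng = np.random.default_rng(seed)
    subsets = [rng.choice(d, size=degree, replace=False) for _ in range(n_monomials)]
    a = rng.standard_normal(n_monomials)
    a = a / np.sqrt((a ** 2).sum())
    def f_star(X):                     # X: array of shape (batch, d) with +-1 entries
        return sum(ai * X[:, S].prod(axis=1) for ai, S in zip(a, subsets))
    return f_star

X = 2 * np.random.default_rng(1).integers(0, 2, size=(5, 128)) - 1
print(sample_ground_truth()(X))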
We confirm the existence of such an optimal depth even in real distributions like MNIST and CIFAR10; see Fig. 7. This adds significant nuance to the folk wisdom that “depth increases expressivity and allows neural networks to learn more complex features.” 6 NTK FAVORS MORE COMPLEX FEATURES THAN CK We generally find the degree k fractional variance of NTK to be higher than that of CK when k is large, and vice versa when k is small, as shown in Fig. 4. This means that, if we train only the last layer of a neural network (i.e. CK dynamics), we intuitively should expect to learn simpler features better, while, if we train all parameters of the network (i.e. NTK dynamics), we should expect to learn more complex features better. Similarly, if we were to sample a function from a Gaussian process with the CK as kernel (recall this is just the distribution of randomly initialized infinite width MLPs (Lee et al., 2018)), this function is more likely to be accurately approximated by low degree polynomials than the same with the NTK. We verify this intuition by training a large number of neural networks against ground truth functions of various homogeneous polynomials of different degrees, and show a scatterplot of how training the last layer only measures against training all layers (Fig. 3(c)). This phenomenon remains true over the standard Gaussian or the uniform distribution on the sphere (Fig. 8). Consistent with our theory, the only place training the last layer works meaningfully better than training all layers is when the ground truth is a constant function. However, we reiterate that fractional variance is an imperfect indicator of performance. Even though for erf neural networks and k ≥ 1, degree k fractional variance of NTK is not always greater than that of the CK, we do not see any instance where training the last layer of an erf network is better than training all layers. We leave an investigation of this discrepancy to future work. 7 PREDICTING THE MAXIMUM LEARNING RATE In any setup that tries to push deep learning benchmarks, learning rate tuning is a painful but indispensable part. In this section, we show that our spectral theory can accurately predict the maximal nondiverging learning rate over real datasets as well as toy input distributions, which would help set the correct upper limit for a learning rate search. By Jacot et al. (2018), in the limit of large width and infinite data, the function g : X → R represented by our neural network evolves like gt+1 = gt − 2αK(gt − g∗), t = 0, 1, 2, . . . , (10) when trained under full batch GD (with the entire population) with L2 loss L(f, g) = Ex∼X (f(x)− g(x))2, ground truth g∗, and learning rate α, starting from randomly initialization. If we train only the last layer, then K is the CK; if we train all layers, then K is the NTK. Given an eigendecomposition of K as in Eq. (1), if g0 − g∗ = ∑ i aiui is the decomposition of g 0 in the eigenbasis {ui}i, then one can easily deduce that gt − g∗ = ∑ i ai(1− 2αλi)tui. Consequently, we must have α < (maxi λi) −1 in order for Eq. (10) to converge 6 When the input distribution is the uniform distribution over d, the maximum learning rate is max(µ0, µ1) by Thm 4.1. By Thm 5.1, as long as the Φ function corresonding to K has Φ(0) 6= 0, when d is large, we expect µ0 ≈ Φ(0) but µ1 ∼ d−1Φ′(0) µ0. Therefore, we should predict 1Φ(0) for the maximal learning rate when training on the boolean cube. However, as Fig. 
5 shows, this prediction is accurate not only for the boolean cube, but also over the sphere, the standard Gaussian, and even MNIST and CIFAR10! 8 CONCLUSION In this work, we have taken a first step at studying how hyperparameters change the initial distribution and the generalization properties of neural networks through the lens of neural kernels and their spectra. We obtained interesting insights by computing kernel eigenvalues over the boolean cube and relating them to generalization through the fractional variance heuristic. While it inspired valid predictions that are backed up by experiments, fractional variance is clearly just a rough indicator. We hope future work can refine on this idea to produce a much more precise prediction of test loss. Nevertheless, we believe the spectral perspective is the right line of research that will not only shed light on mysteries in deep learning but also inform design choices in practice. A RELATED WORKS The Gaussian process behavior of neural networks was found by Neal (1995) for shallow networks and then extended over the years to different settings and architectures (Williams, 1997; Le Roux and Bengio, 2007; Hazan and Jaakkola, 2015; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018). This connection was exploited implicitly or explicitly to build new models (Cho and Saul, 2009; Lawrence and Moore, 2007; Damianou and Lawrence, 2013; Wilson et al., 2016a;b; Bradshaw et al., 2017; van der Wilk et al., 2017; Kumar et al., 2018; Blomqvist et al., 2018; Borovykh, 2018; Garriga-Alonso et al., 2018; Novak et al., 2018; Lee et al., 2018). The Neural Tangent Kernel is a much more recent discovery by Jacot et al. (2018) and later Allen-Zhu et al. (2018a;c;b); Du et al. (2018); Arora et al. (2019b); Zou et al. (2018) came upon the same reasoning independently. Like CK, NTK has also been applied toward building new models or algorithms (Arora et al., 2019a; Achiam et al., 2019). Closely related to the discussion of CK and NTK is the signal propagation literature, which tries to understand how to prevent pathological behaviors in randomly initialized neural networks when they are deep (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017; 2018; Hanin, 2018; Hanin and Rolnick, 2018; Chen et al., 2018; Yang et al., 2019; Pennington et al., 2017a; Hayou et al., 2018; Philipp and Carbonell, 2018). This line of work can trace its original at least to the advent of the Glorot and He initialization schemes for deep networks (Glorot and Bengio, 2010; He et al., 2015). The investigation of forward signal propagation, or how random neural networks change with depth, corresponds to studying the infinite-depth limit of CK, and the investigation of backward signal propagation, or how gradients of random networks change with depth, corresponds to studying the infinite-depth limit of NTK. Some of the quite remarkable results from this literature includes how to train a 10,000 layer CNN (Xiao et al., 2017) and that, counterintuitively, batch normalization causes gradient explosion (Yang et al., 2019). This signal propagation perspective can be refined via random matrix theory (Pennington et al., 2017a; 2018). In these works, free probability is leveraged to compute the singular value distribution of the input-output map given by the random neural network, as the input dimension and width tend to infinity together. 
Other works also investigate various questions of neural network training and generalization from the random matrix perspective (Pennington and Worah, 2017; Pennington and Bahri, 2017; Pennington and Worah, 2018). Yang (2019) presents a common framework, known as Tensor Programs, unifying the GP, NTK, signal propagation, and random matrix perspectives, as well as extending them to new scenarios, like recurrent neural networks. It proves the existence of and allows the computation of a large number of infinite-width limits (including ones relevant to the above perspectives) by expressing the quantity of interest as the output of a computation graph and then manipulating the graph mechanically. Several other works also adopt a spectral perspective on neural networks (Candès, 1999; Sonoda and Murata, 2017; Eldan and Shamir, 2016; Barron, 1993; Xu et al., 2018; Zhang et al., 2019; Xu et al., 2019; Xu, 2018); here we highlight a few most relevant to us. Rahaman et al. (2018) studies the real Fourier frequencies of relu networks and perform experiments on real data as well as synthetic ones. They convincingly show that relu networks learn low frequencies components first. They also investigate the subtleties when the data manifold is low dimensional and embedded in various ways in the input space. In contrast, our work focuses on the spectra of the CK and NTK (which indirectly informs the Fourier frequencies of a typical network). Nevertheless, our results are complementary to theirs, as they readily explain the low frequency bias in relu that they found. Karakida et al. (2018) studies the spectrum of the Fisher information matrix, which share the nonzero eigenvalues with the NTK. They compute the mean, variance, and maximum of the eigenvalues Fisher eigenvalues (taking the width to infinity first, and then considering finite amount of data sampled iid from a Gaussian). In comparison, our spectral results yield all eigenvalues of the NTK (and thus also all nonzero eigenvalues of the Fisher) as well as eigenfunctions. Finally, we note that several recent works (Xie et al., 2016; Bietti and Mairal, 2019; Basri et al., 2019; Ghorbani et al., 2019) studied one-hidden layer neural networks over the sphere, building on Smola et al. (2001)’s observation that spherical harmonics diagonalize dot product kernels, with the latter two concurrent to us. This is in contrast to the focus on boolean cube here, which allows us to study the fine-grained effect of hyperparameters on the spectra, leading to a variety of insights into neural networks’ generalization properties. B UNIVERSALITY OF OUR BOOLEAN CUBE OBSERVATIONS IN OTHER INPUT DISTRIBUTIONS Using the spectral theory we developed in this paper, we made three observations, that can be roughly summarized as follows: 1) the simplicity bias noted by Valle-Pérez et al. (2018) is not universal; 2) for each function of fixed “complexity” there is an optimal depth such that networks shallower or deeper will not learn it as well; 3) training last layer only is better than training all layers when learning “simpler” features, and the opposite is true for learning “complex” features. In this section, we discuss the applicability of these observations to distributions that are not uniform over the boolean cube: in particular, the uniform distribution over the sphere √ dSd−1, the standard Gaussian N (0, Id), as well as realistic data distributions such as MNIST and CIFAR10. Simplicity bias The simplicity bias noted by Valle-Pérez et al. (2018), in particular Fig. 
1, depends on the finiteness of the boolean cube as a domain, so we cannot effectively test this on the distributions above, which all have uncountable support. Optimal depth With regard to the second observation, we can test whether an optimal depth exists for learning functions over the distributions above. Since polynomial degrees remain the natural indicator of complexity for the sphere and the Gaussian (see Appendices H.2 and H.3 for the relevant spectral theory), we replicated the experiment in Fig. 3(b) for these distributions, using the same ground truth functions of polynomials of different degrees. The results are shown in Fig. 6. We see the same phenomenon as in the boolean cube case, with an optimal depth for each degree, and with the optimal depth increasing with degree. For MNIST and CIFAR10, the notion of “feature complexity” is less clear, so we will not test the hypothesis that “optimal depth increases with degree” for these distributions but only test for the existence of the optimal depth for the ground truth marked by the labels of the datasets. We do so by training a large number of MLPs of varying depth on these datasets until convergence, and plot the results in Fig. 7. This figure clearly shows that such an optimal depth exists, such that shallower or deeper networks do monotonically worse as the depth diverge away from this optimal depth. Again, the existence of the optimal depth is not obvious at all, as conventional deep learning wisdom would have one believe that adding depth should always help. Training last layer only vs training all layers Finally, we repeat the experiment in Fig. 3(c) for the sphere and the standard Gaussian, with polynomials of different degrees as ground truth functions. The results are shown in Fig. 8. We see the same phenomenon as in the boolean cube case: for degree 0 polynomials, training last layer works better in general, but for higher degree polynomials, training all layers fares better. Note that, unlike the sphere and the Gaussian, whose spectral theory tells us that (harmonic) polynomial degree is a natural notion of complexity, for MNIST and CIFAR10 we have much less clear idea of what a “complex” or a “simple” feature is. Therefore, we did not attempt a similar experiment on these datasets. C THEORETICAL VS EMPIRICAL MAX LEARNING RATES UNDER DIFFERENT PREPROCESSING FOR MNIST AND CIFAR10 In the main text Fig. 5, on the MNIST and CIFAR10 datasets, we preprocessed the data by centering and normalizing to the sphere (see Appendix E.2 for a precise description). With this preprocessing, our theory accurately predicts the max learning rate in practice. In general, if we go by another preprocessing, such as PCA or ZCA, or no preprocessing, our theoretical max learning rate 1/Φ(0) is less accurate but still correlated in general. The only exception seems to be relu networks on PCA- or ZCA- preprocessed CIFAR10. See Fig. 9. Theoretical vs empirical max learning rate under different preprocessing D VISUALIZING THE SPECTRAL EFFECTS OF σ2w, σ 2 b , AND DEPTH While in the main text, we summarized several trends of interest kn several plots, they do not give the entire picture of how eigenvalues and fractional variances vary with σ2w, σ 2 b , and depth. Here we try to present this relationship more completely in a series of contour plots. Fig. 10 shows how varying depth and σ2b changes the fractional variances of each degree, for relu CK and NTK. 
We are fixing σ2w = 2 in the CK plots, as the fractional variances only depend on the ratio σ 2 b/σ 2 w; even though this is not true for relu NTK, we fix σ2w = 2 as well for consistency. For erf, however, the fractional variance will crucially depend on both σ2w and σ 2 b , so we present 3D contour plots of how σ 2 w, σ 2 b , and depth changes fractional variance in Fig. 13. Complementarily, we also show in Figs. 11 and 12 a few slices of these 3D contour plots for different fixed values of σ2b , for erf CK and NTK. E EXPERIMENTAL DETAILS E.1 FIG. 3 Fig. 3(a), (b) and (c) differ in the set of hyperparameters they involve (to be specified below), but in all of them, we train relu networks against a randomly generated ground truth multilinear polynomial, with input space 128 and L2 loss L(f) = Ex∈ d(f(x)− f∗(x))2. Training We perform SGD with batch size 1000. In each iteration, we freshly sample a new batch, and we train for a total of 100,000 iterations, so the network potentially sees 108 different examples. At every 1000 iterations, we validate the current network on a freshly drawn batch of 10,000 examples. We thus record a total of 100 validation losses, and we take the lowest to be the “best validation loss.” Generating the Ground Truth Function The ground truth function f∗(x) is generated by first sampling 10 monomials m1, . . . ,m10 of degree k, then randomly sampling 10 coefficients a1, . . . , a10 for them. The final function is obtained by normalizing {ai} such that the sum of their squares is 1: f∗(x) def = 10∑ i=1 aimi/ 10∑ j=1 a2j . (11) Hyperparameters for Fig. 3(a) • The learning rate is half the theoretical maximum learning rate7 12 max(µ0, µ1) −1 • Ground truth degree k ∈ {0, 1, 2, 3} • Depth ∈ {0, . . . , 10} • activation = relu • σ2w = 2 • σ2b = 0 • width = 1000 • 10 random seeds per hyperparameter combination • training last layer (marked “ck”), or all layers (marked “ntk”). In the latter case, we use the NTK parametrization of the MLP (Eq. (MLP)). Hyperparameters for Fig. 3(b) • The learning rate is half the theoretical maximum learning rate 12 max(µ0, µ1) −1 • Ground truth degree k ∈ {0, 1, 2, 3} • Depth ∈ {0, . . . , 10} • activation = relu • σ2w = 2 • σ2b = 0 • width = 1000 • 100 random seeds per hyperparameter combination • training last layer weight and bias only 7Note that, because the L2 loss here is L(f) = Ex∈ d(f(x) − f∗(x))2, the maximum learning rate is λ−1max = max(µ0, µ1) −1 (see Thm 4.1). If we instead adopt the convention L(f) = Ex∈ d 12 (f(x)− f ∗(x))2, then the maximum learning rate would be 2λ−1max = 2max(µ0, µ1)−1 Algorithm 1 Binary Search for Empirical Max Learning Rate upper ← 16× theoretical max lr lower ← 0 tol← 0.01× theoretical max lr while |upper − lower| > tol do α← (upper + lower)/2 Run SGD with learning rate α for 1000 iterations if loss diverges then upper ← α else lower ← α end if end while Output: upper Hyperparameters for Fig. 3(c) • The learning rate ∈ {0.05, 0.1, 0.5} • Ground truth degree k ∈ {0, 1, . . . , 6} • Depth ∈ {1, . . . , 5} • activation ∈ {relu, erf} • σ2w = 2 for relu, but σ2w ∈ {1, 2, . . . , 5} for erf • σ2b ∈ {0, 1, . . . , 4} • width = 1000 • 1 random seed per hyperparameter combination • Training all layers, using the NTK parametrization of the MLP (Eq. (MLP)) E.2 MAX LEARNING RATE EXPERIMENTS Here we describe the experimental details for the experiments underlying Figs. 5 and 9. Theoretical max learning rate For a fixed setup, we compute Φ according to Eq. (CK) (if only last layer is trained) or Eq. 
(NTK) (if all layers are trained). For ground truth problems where the output is n-dimensional, the theoretical max learning rate is nΦ(0)−1; in particular, the max learning rates for MNIST and CIFAR10 are 10 times those for boolean cube, sphere, and Gaussian. This is because the kernel for an multi-output problem effectively becomes 1 n K⊕n = 1 n K 0 0 0 . . . 0 0 0 K where the 1n factor is due to the 1 n factor in the scaled square loss L(f, f ∗) = Ex∼X 1n ∑n i=1(f(x)i− f∗(x)i) 2. The top eigenvalue for 1nK ⊕n is just 1n times the top eigenvalue for K. Empirical max learning rate For a fixed setup, we perform binary search for the empirical max learning rate as in Algorithm 1. Preprocessing In Fig. 5, for MNIST and CIFAR10, we center and project each image onto the sphere √ dSd−1, where d = 28× 28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. More precisely, we compute the average image x̄ over the entire dataset, and we preprocess each image x as √ d x−x̄‖x−x̄‖ . In Fig. 9, there are three different preprocessing schemes. For “no preprocessing,” we load the MNIST and CIFAR10 data as is. In “PCA128,” we take the top 128 eigencomponents of the data, so that the data has only 128 dimensions. In “ZCA128,” we take the top 128 eigencomponents but rotate it back to the original space, so that the data still has dimension d, where d = 28×28 = 784 for MNIST and d = 3× 32× 32 = 3072 for CIFAR10. Hyperparameters • Target function: For boolean cube, sphere, and standard Gaussian, we randomly sample a degree 1 polynomial as in Eq. (11). For MNIST and CIFAR10, we just use the label in the dataset, encoded as a one-hot vector for square-loss regression. • Depth ∈ {1, 2, 4, 8, 16} • activation ∈ {relu, erf} • σ2w = 2 for relu, but σ2w ∈ {1, 2, . . . , 5} for erf • σ2b ∈ {1, . . . , 4} • width = 1000 • 1 random seed per hyperparameter combination • Training last layer (CK) or all layers (NTK). In the latter case, we use the NTK parametriza- tion of the MLP (Eq. (MLP)). F REVIEW OF THE THEORY OF NEURAL TANGENT KERNELS F.1 CONVERGENCE OF INFINITE-WIDTH KERNELS AT INITIALIZATION Conjugate Kernel Via a central-limit-like intuition, each unit hl(x)α of Eq. (MLP) should behave like a Gaussian as width nl−1 →∞, as it is a sum of a large number of roughly independent random variables (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017). The devil, of course, is in what “roughly independent” means and how to apply the central limit theorem (CLT) to this setting. It can be done, however, (Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018), and in the most general case, using a “Gaussian conditioning” technique, this result can be rigorously generalized to almost any architecture Yang (2019). In any case, the consequence is that, for any finite set S ⊆ X , {hlα(x)}x∈S converges in distribution to N (0,Σl(S, S)), as min{n1, . . . , nl−1} → ∞, where Σl is the CK as given in Eq. (CK). Neural Tangent Kernel By a slightly more involved version of the “Gaussian conditioning” technique, Yang (2019) also showed that, for any x, y ∈ X , 〈∇θhL(x),∇θhL(y)〉 converges almost surely to ΘL(x, y) as the widths tend to infinity, where Θl is the NTK as given in Eq. (NTK). F.2 FAST EVALUATIONS OF CK AND NTK For certain φ like relu or erf, Vφ and V′φ can be evaluated very quickly, so that both the CK and NTK can be computed in O(|X |2L) time, where X is the set of points we want to compute the kernel function over, and L is the number of layers. Fact F.1 (Cho and Saul (2009)). 
For any kernel K Vrelu(K)(x, x ′) = 1 2π ( √ 1− c2 + (π − arccos c)c) √ K(x, x)K(x′, x′) V′relu(K)(x, x ′) = 1 2π (π − arccos c) where c = K(x, x′)/ √ K(x, x)K(x′, x′). Fact F.2 (Neal (1995)). For any kernel K, Verf(K)(x, x ′) = 2 π arcsin K(x, x′)√ (K(x, x) + 0.5)(K(x′, x′) + 0.5) V′erf(K)(x, x ′) = 4 π √ (1 + 2K(x, x))(1 + 2K(x′, x′))− 4K(x, x′)2 . Fact F.3. Let φ(x) = exp(x/σ) for some σ > 0. For any kernel K, Vφ(K)(x, x ′) = exp ( K(x, x) + 2K(x, x′) +K(x′, x′) 2σ2 ) . F.3 LINEAR EVOLUTION OF NEURAL NETWORK UNDER GD Remarkably, the NTK governs the evolution of the neural network function under gradient descent in the infinite-width limit. First, let’s consider how the parameters θ and the neural network function f evolve under continuous time gradient flow. Suppose f is only defined on a finite input space X = {x1, . . . , xk}. We will visualize f(X ) = f(x1) ... f(xk) , ∇fL = ∂L ∂f(x1) ... ∂L ∂f(xk) , θ = θ1 ... θn , ∇θf = ∂f(x1) ∂θ1 · · · ∂f(x k) ∂θ1 ... . . . ... ∂f(x1) ∂θn · · · ∂f(x k) ∂θn (best viewed in color). Then under continuous time gradient descent with learning rate η, ∂t θt = −η∇θL(ft) = −η ∇θft · ∇fL(ft) , ∂t ft = ∇θft > · ∂t θt = −η ∇θft > · ∇θft · ∇fL(ft) = −η Θt · ∇fL(ft) (12) where Θt = ∇θf>t · ∇θft ∈ Rk×k is of course the (finite width) NTK. These equations can be visualized as ∂t = −η · , ∂t = · ∂t = −η · · = −η · Thus f undergoes kernel gradient descent with (functional) loss L(f) and kernel Θt. This kernel Θt of course changes as f evolves, but remarkably, it in fact stays constant for f being an infinitely wide MLP (Jacot et al., 2018): ∂tft = −ηΘ · ∇fL(ft), (Training All Layers) where Θ is the infinite-width NTK corresponding to f . A similar equation holds for the CK Σ if we train only the last layer, ∂tft = −ηΣ · ∇fL(ft). (Training Last Layer) If L is the square loss against a ground truth function f∗, then ∇fL(ft) = 12k∇f‖ft − f ∗‖2 = 1 k (ft−f ∗), and the equations above become linear differential equations. However, typically we only have a training set X train ⊆ X of size far less than |X |. In this case, the loss function is effectively L(f) = 1 2|X train| ∑ x∈X train (f(x)− f∗(x))2, with functional gradient ∇fL(f) = 1 |X train| Dtrain · (f − f∗), where Dtrain is a diagonal matrix of size k × k whose diagonal is 1 on x ∈ X train and 0 else. Then our function still evolves linearly ∂tft = −η(K ·Dtrain) · (ft − f∗) (13) where K is the CK or the NTK depending on which parameters are trained. F.4 RELATIONSHIP TO GAUSSIAN PROCESS INFERENCE. Recall that the initial f0 in Eq. (13) is distributed as a Gaussian process N (0,Σ) in the infinite width limit. As Eq. (13) is a linear differential equation, the distribution of ft will remain a Gaussian process for all t, whether K is CK or NTK. Under suitable conditions, it can be shown that (Lee et al., 2019), in the limit as t→∞, if we train only the last layer, then the resulting function f∞ is distributed as a Gaussian process with mean f̄∞ given by f̄∞(x) = Σ(x,X train)Σ(X train,X train)−1f∗(X train) and kernel Var f∞ given by Var f∞(x, x ′) = Σ(x, x′)− Σ(x,X train)Σ(X train,X train)−1Σ(X train, x′). These formulas precisely described the posterior distribution of f given prior N (0,Σ) and data {(x, f∗(x))}x∈X train . If we train all layers, then similarly as t→∞, the function f∞ is distributed as a Gaussian process with mean f̄∞ given by (Lee et al., 2019) f̄∞(x) = Θ(x,X train)Θ(X train,X train)−1f∗(X train). 
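Fact F.1, the layerwise recursion, and the mean-prediction formula just displayed combine into a short computational recipe, sketched below. This is our transcription: the placement of σ_w^2 and σ_b^2 in each layer follows the usual convention and may differ in minor details from Eq. (CK) and Eq. (NTK), and the small ridge term in the linear solve is a numerical safeguard rather than part of the formula.

import numpy as np

def v_relu(kxx, kxy, kyy):
    # Fact F.1 (Cho & Saul): V_relu and V'_relu, vectorized over arrays of kernel entries
    c = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    v = (np.sqrt(1 - c ** 2) + (np.pi - np.arccos(c)) * c) * np.sqrt(kxx * kyy) / (2 * np.pi)
    return v, (np.pi - np.arccos(c)) / (2 * np.pi)

def relu_ck_ntk(c, depth, sw2=2.0, sb2=0.0):
    # CK and NTK at pairs x, y with <x, y>/d = c and ||x||^2 = ||y||^2 = d (e.g. the boolean cube);
    # c may be a scalar or an array of cosines
    kd = sw2 + sb2                      # diagonal entry, identical for every point
    kxy = sw2 * c + sb2
    ntk = kxy
    for _ in range(depth):
        vxy, vp = v_relu(kd, kxy, kd)
        vd, _ = v_relu(kd, kd, kd)
        kxy = sw2 * vxy + sb2
        ntk = kxy + sw2 * vp * ntk
        kd = sw2 * vd + sb2
    return kxy, ntk                     # first value is the CK, second the NTK

def infinite_time_mean_prediction(K_train_train, K_test_train, y_train, ridge=1e-10):
    # f_bar(x) = K(x, X_train) K(X_train, X_train)^{-1} y_train
    n = K_train_train.shape[0]
    return K_test_train @ np.linalg.solve(K_train_train + ridge * np.eye(n), y_train)

# usage sketch: Gram matrices for inputs on the cube come from the cosines X @ X.T / d, e.g.
# K_tt, _ = relu_ck_ntk(X_train @ X_train.T / d, depth=3)   # CK (train only the last layer)
# K_st, _ = relu_ck_ntk(X_test  @ X_train.T / d, depth=3)
# f_hat = infinite_time_mean_prediction(K_tt, K_st, y_train)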
This is, again, the mean of the Gaussian process posterior given prior N (0,Θ) and the training data {(x, f∗(x))}x∈X train . However, the kernel of f∞ is no longer the kernel of this posterior, but rather is an expression involving both the NTK Θ and the CK Σ; see Lee et al. (2019). In any case, we can make the following informal statement in the limit of large width Training the last layer (resp. all layers) of an MLP infinitely long, in expectation, yields the mean prediction of the GP inference given prior N (0,Σ) (resp. N (0,Θ)). G A BRIEF REVIEW OF HILBERT-SCHMIDT OPERATORS AND THEIR SPECTRAL THEORY In this section, we briefly review the theory of Hilbert-Schmidt kernels, and more importantly, to properly define the notion of eigenvalues and eigenfunctions. A function K : X 2 → R is called a Hilbert-Schmidt operator if K ∈ L2(X × X ), i.e. ‖K‖2HS def = E x,y∼X K(x, y)2 <∞. ‖K‖2HS is known as the Hilbert-Schmidt norm of K. K is called symmetric if K(x, y) = K(y, x) and positive definite (resp. semidefinite) if E x,y∼X f(x)K(x, y)f(y) > 0 (resp. ≥ 0) for all f ∈ L2(X ) not a.e. zero. A spectral theorem (Mercer’s theorem) holds for Hilbert-Schmidt operators. Fact G.1. If K is a symmetric positive semidefinite Hilbert-Schmidt kernel, then there is a sequence of scalars λi ≥ 0 (eigenvalues) and functions fi ∈ L2(X ) (eigenfunctions), for i ∈ N, such that ∀i, j, 〈fi, fj〉 = I(i = j), and K(x, y) = ∑ i∈N λifi(x)fi(y) where the convergence is in L2(X × X ) norm. This theorem allows us to speak of the eigenfunctions and eigenvalues, which are important for training and generalization considerations when K is a kernel used in machine learning, as discussed in the main text. A sufficient condition for K to be a Hilbert-Schmidt kernel in our case (concerning only probability measure on X ) is just that K is bounded. All Ks in this paper satisfy this property. H EIGENDECOMPOSITION OF NEURAL KERNEL ON DIFFERENT DOMAINS H.1 BOOLEAN CUBE From the Fourier Series Perspective. We continue from the discussion of the boolean cube in the main text. Recall that T∆ is the shift operator on functions that sends Φ(·) to Φ(· −∆). Notice that, if we let Φ(t) = eκt for some κ ∈ C, then T∆Φ(s) = e−κ∆ · eκt. Thus Φ is an “eigenfunction” of the operator T∆ with eigenvalue e−κ∆. In particular, this implies that Proposition H.1. Suppose Φ(t) = et/σ 2 , as in the case whenK is the CK or NTK of a 1-layer neural network with nonlinearity exp(·/σ), up to multiplicative constant (Fact F.3). Then the eigenvalue µk over the boolean cube d equals µk = 2 −d(1− exp(−∆/σ2))k(1 + exp(−∆/σ2))d−k · exp(1/σ2) where ∆ = 2/d. It would be nice if we can express any Φ as a linear combination of exponentials, so that Eq. (5) simplifies in the fashion of Prop H.1 — this is precisely the idea of Fourier series. We will use the theory of Fourier analysis on the circle, and for this we need to discuss periodic functions. Let Φ̃ : [−2, 2]→ R be defined as Φ̃(x) = Φ(x) if x ∈ [−1, 1] Φ(2− x) if x ∈ [1, 2] Φ(−2− x) if x ∈ [−2,−1]. See Fig. 14 for an example illustration. Note that if Φ is continuous on [−1, 1], then Φ̃ is continuous as a periodic function on [−2, 2]. The Fourier basis on functions over [−2, 2] is the collection {t 7→ e 12πist}s∈Z. Under generic conditions (for example if Ψ ∈ L2[−2, 2]), a function Ψ has an associated Fourier series ∑ s∈Z Ψ̂(s)e 1 2πist. We briefly review basic facts of Fourier analysis on the circle. Recall the following notion of functions of bounded variation. Definition H.2. 
A function f : [a, b]→ R is said to have bounded variation if sup P nP−1∑ i=0 |f(xi+1)− f(xi)| <∞, where the supremum is taken over all partitions P of the interval [a, b], P = {x0, . . . , xnP }, x0 ≤ x1 ≤ · · · ≤ xnP . Intuitively, a function of bounded variation has a graph (in [a, b]× R) of finite length. Fact H.3 (Katznelson (2004)). A bounded variation function f : [−2, 2]→ R that is periodic (i.e. f(−2) = f(2)) has a pointwise-convergent Fourier series: lim T→∞ ∑ s∈[−T,T ] Ψ̂(s)e 1 2πist → Ψ(t), ∀t ∈ [−2, 2]. From this fact easily follows the following lemma. Lemma H.4. Suppose Φ is continuous and has bounded variation on [−1, 1]. Then Φ̃ is also continuous and has bounded variation, and its Fourier Series (on [−2, 2]) converges pointwise to Φ̃. Proof. Φ̃ is obviously continuous and has bounded variation as well, and from Fact H.3, we know a periodic continuous function with bounded variation has a pointwise-convergent Fourier Series. Certainly, T∆ sends continuous bounded variation functions to continuous bounded variation functions. Because T∆e 1 2πist = e− 1 2πis∆e 1 2πist, T∆ ∑ s∈Z Ψ̂(s)e 1 2πist = ∑ s∈Z Ψ̂(s)e− 1 2πis∆e 1 2πist whenever both sides are well defined. If Ψ is continuous and has bounded variation then T∆Ψ is also continuous and has bounded variation, and thus its Fourier series, the RHS above, converges pointwise to T∆Ψ. Now, observe (I − T∆)k(I + T∆)d−kΦ̃(x) = d∑ r=0 Cd−k,kr Φ̃ (x− r∆) (I − T∆)k(I + T∆)d−kΦ̃(1) = d∑ r=0 Cd−k,kr Φ (( d 2 − r ) ∆ ) = µk Expressing the LHS in Fourier basis, we obtain Theorem H.5. µk = ∑ s∈Z is(1− e− 12πis∆)k(1 + e− 12πis∆)d−k ˆ̃Φ(s) where ˆ̃Φ(s) = 1 4 ∫ 2 −2 Φ̃(t)e− 1 2πist dt = 1 4 ∫ 1 −1 Φ(t)(e− 1 2πist + (−1)se 12πist) dt = { 1 2 ∫ 1 −1 Φ(t) cos( 1 2πst) dt if s is even − i2 ∫ 1 −1 Φ(t) sin( 1 2πst) dt if s is odd denote the Fourier coefficients of Φ̃ on [−2, 2]. (Here i is the imaginary unit here, not an index). Recovering the values of Φ given the eigenvalues µ0, . . . , µd. Conversely, given eigenvalues µ0, . . . , µd corresponding to each monomial degree, we can recover the entries of the matrix K. Theorem H.6. For any x, y ∈ d with Hamming distance r, K(x, y) = Φ (( d 2 − r ) ∆ ) = d∑ k=0 Cd−r,rk µk, where Cd−r,rk = ∑ j=0(−1)k+j ( d−r j )( r k−j ) as in Eq. (7). Proof. Recall that for any S ⊆ [d], χS(x) = xS is the Fourier basis corresponding to S (see Eq. (3)). Then by converting from the Fourier basis to the regular basis, we get Φ (( d 2 − r ) ∆ ) = K(x, y) for any x, y ∈ d with Hamming distance r = d∑ k=0 µk ∑ |S|=k χS(x)χS(y). If x and y differ on a set T ⊆ [d], then we can simplify the inner sum Φ (( d 2 − r ) ∆ ) = d∑ k=0 µk ∑ |S|=k (−1)|S∩T | = d∑ k=0 µkC d−r,r k . Remark H.7. If we let T be the operator that sends µ• 7→ µ•+1, then we have the following operator expression Φ (( d 2 − r ) ∆ ) = [(1 + T )d−r(1− T )rµ]0 Remark H.8. The above shows that the matrix C = {Cd−r,rk }dk,r=0 satisfies C2 = 2dI. H.2 SPHERE Now let’s consider the case when X = √ dSd−1 is the radius√ d sphere in Rd equipped with the uniform measure. Again, because x ∈ X all have the same norm, we will consider Φ as a univariate function with K(x, y) = Φ(〈x, y〉/‖x‖‖y‖) = Φ(〈x, y〉/d). As is long known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), K is diagonalized by spherical harmonics. We review these results briefly below, as we will build on them to deduce spectral information of K on isotropic Gaussian distributions. Review: spherical harmonics and Gegenbauer polynomials. 
Spherical harmonics are L2 functions on Sd−1 that are eigenfunctions of the Laplace-Beltrami operator ∆Sd−1 of Sd−1. They can be described as the restriction of certain homogeneous polynomials in Rd to Sd−1. Denote byHd−1,(l) the space of spherical harmonics of degree l on sphere Sd−1. Then we have the orthogonal decomposition L2(Sd−1) ∼= ⊕∞ l=0Hd−1,(l). It is a standard fact that dimHd−1,(l) = ( d−1+l d−1 ) − ( d−3+l d−1 ) . There is a special class of spherical harmonics called zonal harmonics that can be represented as x 7→ p(〈x, y〉) for specific polynomials p : R→ R, and that possess a special reproducing property which we will describe shortly. Intuitively, the value of any zonal harmonics only depends on the “height” of x along some fixed axis y, so a typical zonal harmonics looks like Fig. 16. The polynomials pmust be one of the Gegenbauer polynomials. Gegenbauer polynomials {C(α)l (t)}∞l=0 are orthogonal polynomials with respect to the measure (1− t2)α− 12 on [−1, 1] (see Fig. 15 for examples), and here we adopt the convention that∫ 1 −1 C(α)n (t)C (α) l (t)(1− t 2)α− 1 2 dt = π21−2αΓ(n+ 2α) n!(n+ α)[Γ(α)]2 I(n = l). (14) Then for each (oriented) axis y ∈ Sd−1 and degree l, there is a unique zonal harmonic Zd−1,(l)y ∈ Hd−1,(l), Zd−1,(l)y (x) def = c−1d,lC ( d−22 ) l (〈x, y〉) for any x, y ∈ Sd−1, where cd,l = d−2d+2l−2 . Very importantly, they satisfy the following Fact H.9 (Reproducing property (Suetin)). For any f ∈ Hd−1,(m), E z∼Sd−1 Zd−1,(l)y (z)f(z) = f(y)I(l = m) E z∼Sd−1 Zd−1,(l)y (z)Z d−1,(m) x (z) = Z d−1,(l) y (x)I(l = m) = c −1 d,lC ( d−22 ) l (〈x, y〉)I(l = m) We also record a useful fact about Gegenbauer polynomials. Fact H.10 (Suetin). C (α) l (±1) = (±1) l ( l + 2α− 1 l ) By a result of Schoenberg (1942), we have the following eigendecomposition of K on the sphere. Theorem H.11 (Schoenberg). Suppose Φ : [−1, 1]→ R is in L2((1− t2) d−12 −1), so that it has the Gegenbauer expansion Φ(t) a.e. = ∞∑ l=0 alc −1 d,lC ( d−22 ) l (t). Then K has eigenspaces Hd−1,(l)√ d def = {f(x/ √ d) : f ∈ Hd−1,(l)} with corresponding eigenval- ues al.
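Theorem H.11 reduces the eigendecomposition on the sphere to a one-dimensional Gegenbauer expansion, which can be carried out numerically. The sketch below uses SciPy; Φ(t) = e^t is an arbitrary analytic stand-in rather than one of the relu/erf kernels, and d is kept small so that the Gamma factors in Eq. (14) stay in floating-point range.

import math
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma

def sphere_eigenvalues(Phi, d, max_degree=6):
    # coefficients a_l with Phi(t) = sum_l a_l c_{d,l}^{-1} C_l^{(alpha)}(t), alpha = (d-2)/2;
    # by Theorem H.11 these a_l are the eigenvalues of K(x, y) = Phi(<x, y>/d) on sqrt(d) S^{d-1}
    alpha = (d - 2) / 2
    weight = lambda t: (1 - t * t) ** (alpha - 0.5)
    eigs = []
    for l in range(max_degree + 1):
        # squared norm of C_l^{(alpha)} under the weight, from Eq. (14)
        norm2 = (np.pi * 2 ** (1 - 2 * alpha) * gamma(l + 2 * alpha)
                 / (math.factorial(l) * (l + alpha) * gamma(alpha) ** 2))
        inner, _ = quad(lambda t: Phi(t) * eval_gegenbauer(l, alpha, t) * weight(t), -1, 1)
        c_dl = (d - 2) / (d + 2 * l - 2)
        eigs.append(c_dl * inner / norm2)
    return eigs

print(sphere_eigenvalues(np.exp, d=10))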
1. How do randomly initialized and trained deep networks perform on simple functions? 2. How does the performance change with depth, activation function, and initialization? 3. How does the input distribution affect the results of spectral analysis? 4. What are the limitations of restricting input distributions to boolean cubes? 5. Can the methods used in the paper be applied to a wider audience? 6. What is the frequency (rank) in Figure 1? 7. Is there a mistake in the description of the y-axis in Figure 1b? 8. Where is the ground truth degree k polynomial used in experiments defined? 9. Are there any minor issues in the paper's writing and clarity?
Review
This paper examined the spectrum of NNGP and NTK kernels and answers several questions about deep networks using both analytical results and experimental evidence:
* Are randomly initialized and trained deep networks biased towards simple functions?
* How does this change with depth, activation function, and initialization?
All studies are conducted on a space of inputs that is a boolean cube, with the input distribution assumed to be uniform, though it is argued in Section 3 that the results also generalize to uniform distributions on spheres and to isotropic Gaussian distributions. Although this boolean cube setting follows previous works on the same topic, it does limit the scope of the paper; discussions of how this assumption relates to practical problems are missing from the paper. Putting aside the limitations of restricting the input distributions to boolean cubes (and other similar choices), I really like the paper, which demonstrates the power of spectral analysis. I also found that many analytical results in the paper (e.g., computing the eigenvalues of a kernel operator with respect to the uniform distribution on a boolean cube) are highly nontrivial to derive, which adds to the value of the paper. These results might seem restricted in terms of deep network theory because of the assumptions on input distributions, but I do believe the methods used can be of interest to a wider audience. Some questions:
* In Figure 1, the 10^4 boolean function samples are sorted according to frequency (rank). What precisely is the frequency (rank) here? It shouldn't be the frequency that corresponds to the eigendecomposition, because each function sample could always have multiple components with different frequencies.
* In Figure 1b, the y-axis is described as normalized eigenvalues, which seems different from the degree k fractional variance defined in the next section. The degree k fractional variance is the sum of all normalized eigenvalues for degree k eigenfunctions. Is this difference intended, or is it a mistake?
* Is the ground truth degree k polynomial used in the experiments defined somewhere in the paper?
On writing and clarity. Overall I find this paper well-written and a pleasure to read. Some minor issues are:
* The definition of "neural kernels" seems unnecessary and a bit sudden. It would be helpful to include the definition of Phi just after Eq. (2) for CK and NTK.
* For introducing boolean analysis and Fourier series, it might be better to include the formula that explicitly shows the expansion f(x) = \sum_{S} \hat{f}(S) \chi_S(x) before introducing Theorem 3.1.
ICLR
Title In Your Pace: Learning the Right Example at the Right Time Abstract Training neural networks is traditionally done by sequentially providing random mini-batches sampled uniformly from the entire dataset. In our work, we show that sampling mini-batches non-uniformly can both enhance the speed of learning and improve the final accuracy of the trained network. Specifically, we decompose the problem using the principles of curriculum learning: first, we sort the data by some difficulty measure; second, we sample mini-batches with a gradually increasing level of difficulty. We focus on CNNs trained on image recognition. Initially, we define the difficulty of a training image using transfer learning from some competitive ”teacher” network trained on the Imagenet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and the CIFAR-100 datasets. We then suggest a bootstrap alternative to evaluate the difficulty of points using the same network without relying on a ”teacher” network, thus increasing the applicability of our suggested method. We compare this approach to a related version of Self-Paced Learning, showing that our method benefits learning while SPL impairs it. 1 INTRODUCTION Teaching complex tasks to humans and animals can be difficult. Often such tasks cannot be grasped by the learners, or ”students”, immediately, and need to be broken down into simpler problems. Therefore, in order to teach complex tasks, teachers are often required to create a curriculum. The curriculum imposes some order on the learning task; it introduces different concepts at different times, hence exploiting previously learned concepts in order to ease the abstraction of new ones. Imposing a curriculum in order to speed up learning is widely used in the context of human learning, and also routinely used in animal training (Skinner, 1958; Pavlov, 2010; Krueger & Dayan, 2009). In many traditional machine learning approaches, known as supervised learning, a target function is estimated by using a set of labeled examples. The examples can be thought of as given by a teacher while the learning algorithm can be thought of as a student. The field of curriculum learning (CL), which is motivated by the idea of a curriculum in human learning, attempts at imposing some structure on the labeled data. Such structure essentially relies on a notion of ”easy” and ”hard” examples and utilizes this distinction in order to teach the student how to generalize easier concepts before harder ones. Empirically, the use of CL has been shown to accelerate and improve the learning process (e.g. Selfridge et al., 1985; Bengio et al., 2009). In this work, we aim at extending the understanding of CL in the context of deep neural learning. More specifically, we wish to understand to what extent curriculum can improve the accuracy and convergence rate of deep neural networks. The main challenge in making CL practical is, arguably, finding a way to construct a good curriculum for a newly unseen database. In order to do so, we investigate two ideas motivated by transfer learning and bootstrapping respectively. When establishing a curriculum for human students, teachers need to arrange the material in a way that will present simple concepts before harder ones, so that the abstraction of simple ideas can help the student grasp more complex ones (Hunkins & Ornstein, 2016). However, sorting the concepts by difficulty is not sufficient. 
The teacher also needs to attend to the pace by which the material is presented – going over the simple ideas too fast may lead to more confusion than benefit, while moving along too slowly may lead to boredom and unproductive learning (Hunkins & Ornstein, 2016). These principles can also be beneficial when the learner is a neural network. Specifically, formalizing and generalizing what was implicitly done in Weinshall et al. (2018), we decompose the problem of CL and define two separate - but closely related - functions. The first function, termed scoring function, determines the ”hardness” or ”complexity” of each example in the data. The scoring function enables us to sort the data by concept difficulty, allowing us to present to the network the easier (and presumably simpler) examples first. The underlying assumption is that generalization from the easier examples can simplify the learning of harder examples in the data. The second function, termed pacing function, determines the pace by which data is presented to the network. The pace depends on both the data itself and the learner. In our work, we analyze several scoring and pacing functions, investigating their inter-dependency and presenting ways to combine them in order to achieve faster learning and better generalization. The main challenge is, arguably, how to obtain an effective scoring function without additional human supervision. To this end we investigate two approaches, each providing a different estimator for the ideal scoring function: (i) Knowledge transfer. The first scoring function is based on transfer learning from networks trained on the large and versatile Imagenet dataset (Deng et al., 2009; Weinshall et al., 2018). (ii) Bootstrapping. The second scoring function is based on self-tutoring - we train the network once without curriculum, then use the resulting classifier to rank the training data in order to train the same network again from scratch. Both scoring functions are shown in Section 3 to speed up learning and improve the generalization of neural networks. In many approaches, including Self-Paced Learning (SPL), Active-Learning and hard example mining (Kumar et al., 2010; Schein & Ungar, 2007; Shrivastava et al., 2016), the mini-batches which presented to the learner model are as sampled dynamically, based at each time point on the current hypothesis of the model. While in some contexts these approaches are beneficial (Chang et al., 2017; Zhang et al., 2017), they are based on the knowledge of the student at a specific time point. While a student can report what is easy/hard for it right now, it might be oblivious to some aspects of the bigger problem at hand, ignoring concepts which if learned early, could prove helpful in a later time. In the context of linear regression loss, Weinshall et al. (2018) showed that such distinction indeed holds: while it is beneficial to prefer points with lower loss with respect to the target hypothesis as suggested by CL, it is on the other hand beneficial to prefer points with higher loss with respect to the current hypothesis in agreement with hard data mining (Shrivastava et al., 2016) and boosting, contrary to SPL. To examine this somewhat confusing point, we have implemented a simplified version of the procedure described above, where the scoring function is based on the loss of the training points with respect to the current hypothesis, both in ascending and descending orders. 
These variants of SPL and hard example mining respectively learn slower and reach lower final accuracy when compared to self-taught, throughout all of our experiments. We have also investigated three pacing functions. (i) Fixed exponential pacing presents the learner initially with a small percentage of the data, increasing the amount exponentially every fixed number of learning iterations. (ii) Varied exponential pacing allows the number of iterations in each step to vary as well. (iii) Single-step pacing is a simplified version of the first protocol, where mini-batches are initially sampled from a fixed fraction of the data that includes the easiest examples, after which mini-batches are sampled from the whole data as usual. We show that the three functions have comparable performance, and analyze the complexity of their use. Previous work. While remaining in the fringes of machine learning, there has been some recent work on CL and its applications. Bengio et al. (2009) introduced the idea of CL for machine learning algorithms, showing simple examples where CL benefits learning. Weinshall et al. (2018) proved that CL boosts the speed of convergence in the convex case of linear regression. Otherwise most prior art is empirical, and almost always ranking by difficulty (i.e., the scoring function defined above) is provided by the user based on prior knowledge (in other words, supervision) as in Jesson et al. (2017). In a closely related line of works, a pair of teacher and student networks are trained simultaneously, where mini-batches for the student network are sampled dynamically by the teacher, based on the student’s output in each time point (Jiang et al., 2018; Fan et al., 2018). As opposed to our method, these works base the curriculum on the current hypothesis of the students, and achieve better performance for corrupted (Jiang et al., 2018) or smaller (Fan et al., 2018) datasets, instead of improved generalization on the original dataset. Our contribution, with respect to this previous work, is to provide a formal definition of CL algorithms by way of 2 functions for scoring and pacing, analyze and comparatively evaluate these functions, and show how CL can benefit learning in CNNs even without human supervision about the ranking of examples by difficulty and in a problem-free manner. 2 CURRICULUM LEARNING Curriculum learning deals with the question of how to use prior knowledge about the difficulty of the training examples, in order to sample each mini-batch non-uniformly and thus boost the rate of learning and the accuracy of the final classifier. The paradigm of CL is based on the intuition that it helps the learning process when the learner is presented with simple concepts first. 2.1 NOTATIONS AND DEFINITIONS Let X = {(xi, yi)}Ni=1 denote the data, where xi ∈ Rd denotes a single data point and yi ∈ [K] its corresponding label. Let Fθ : Rd → [K] denote the target classifier (or learner), and mini-batch B ⊆ X denote a subset of X. In the most common training procedure, which is a robust variant of Stochastic Gradient Descent (SGD), Fθ is trained sequentially when given as input a sequence of mini-batches [B1, ...,BM ] (Shalev-Shwartz & Ben-David, 2014). The common approach – denoted vanilla in the following sections – samples each mini-batch Bi uniformly from X. Both in the common approach and in our work, the size of each mini-batch remains constant, to be considered as a hyper-parameter defining the learner. 
We measure the difficulty of point xi by its minimal loss with respect to the set of optimal hypotheses under consideration. We define a scoring function (or a ”difficulty” function) to be any function f : X → R, and say that example (xi, yi) is more ”difficult” than example (xj , yj) if f (xi, yi) > f (xj , yj). Choosing f is the main challenge of CL, as it encodes the prior knowledge of the teacher. We define a pacing function to be a function gFθ : [M ] → [N ], which may depend on the learner Fθ. The pacing function is used to determine a sequence of subsets X ′ 1, ...,X ′ M ⊆ X, of size |X′i| = gFθ (i), from which {Bi}Mi=1 are sampled uniformly. In CL the i-th subset X ′ i includes the first gFθ (i) elements of the training data when sorted by the scoring function f in an ascending order. Although the choice of the subset can be encoded in the distribution from which each Bi is sampled, adding a pacing function simplifies the exposition and analysis. 2.2 CURRICULUM LEARNING METHOD Together, each scoring function f and pacing function gFθ define a curriculum. Any learning algorithm which uses the ensuing sequence [Bi]Mi=1 is a curriculum learning algorithm. We note that in order to avoid bias when picking a subset of the N examples for some N , it is important to keep the sample balanced with the same number of examples from each class as in the training set. Pseudo-code for the CL algorithm is given in Alg. 1. In order to narrow down the specific effects of using a scoring function based on ascending difficulty level, we examine two control conditions. Specifically, we define 2 additional scoring functions and corresponding algorithms: (i) The anti-curriculum algorithm uses the scoring function f ′ = −f , where the training examples are sorted in a descending order; that results in presenting the harder examples before the easier ones. (ii) The random-curriculum algorithm (henceforth denoted random) uses a scoring function where the training examples are randomly sorted. 2.3 SCORING AND PACING FUNCTIONS We evaluate two scoring functions: (i) Transfer scoring function, computed as follows: First, take the pre-trained Inception network (Szegedy et al., 2016) and run each training image through it, using the activation levels of its penultimate layer as a feature vector (Caruana, 1995). Second, use these features to train a classifier and use its confidence score as the scoring function for each image1. (ii) Self-taught scoring function, computed as follows: First, train the network using uniformly sampled mini-batches (the vanilla method). Second, compute this network’s confidence score for each image to define a scoring function2. Although the pacing function can be any function gFθ : [M ]→ [N ], we limit ourselves to monotonic increasing functions so that the likelihood of the easier examples can only decrease. For simplicity, gFθ is limited to staircase functions. Thus each pacing function is defined by the following hyper-parameters, where step denotes all the learning iterations during which gFθ remains constant: step length - the number of iterations in each step; increase - an exponential factor used to increase the size of the data used for sampling mini-batches in each step; starting percent - the fraction of the data in the initial step. An illustration of these parameters can be seen in Fig. 1. We evaluate three pacing functions: (i) Fixed exponential pacing has a fixed step length, and exponentially increasing data size in each step. 
Formally, the pacing function is given by: gFθ (i) = min ( starting percent · increaseb i step length c, 1 ) ·N 1Similar results can be obtained when using different confidence scores (e.g, the classifier’s margin), different classifiers (e.g, linear SVM), and different teacher networks (e.g, VGG-16 (Simonyan & Zisserman, 2014), Resnet (He et al., 2016)). For more details, see Appendix A. 2Theoretically we can use this method repeatedly, as discussed in Appendix B. Algorithm 1: Curriculum learning method Input : pacing function gFθ , scoring function f , labeled data X. Output: sequence of mini-batches [ B′1, ...,B ′ M ] . 1 sort X according to f , in ascending order; 2 result← []; 3 for i = 1, ...,M do 4 size← gFθ (i); 5 X′i ← X [1, ..., size]; 6 uniformly sample B′i from X ′ ; 7 append B′i to result; 8 end 9 return result; (ii) Varied exponential pacing, which allows step length to vary as well3: gFθ (i) = min ( starting percent · increase ∑#steps k=1 1[i>step lengthk], 1 ) ·N The total number of steps can be calculated from starting percent and increase: #step = d− logincrease(starting percent)e (iii) Single step pacing, which is a simplification of the staircase function into a step function: gFθ (i) = starting percent 1[i<step length] ·N This function has only 2 hyper-parameters, hence it is simpler to use than the previous two. 3 EMPIRICAL EVALUATION Methodology. All the code used in this work will be published upon acceptance. We define 4 empirical cases: Case 1 replicates the experimental design described in (Weinshall et al., 2018), by using the same dataset and network architecture. The dataset is the ”small mammals” superclass of CIFAR-100 (Krizhevsky & Hinton, 2009), containing a subset of 3000 images from CIFAR100, divided into 5 classes of small mammals (hamster, mouse, rabbit, shrew, squirrel). Each class contains 500 training images and 100 test images. The neural network is a moderate size handcrafted convolutional network, whose architecture details can be found in Appendix C. Cases 2 and 3 adopt the same architecture used above while being applied to the entire CIFAR-10 and CIFAR100 datasets, where the network’s output layer is adjusted to size 10 and 100 respectively. Case 4 uses a public-domain VGG-based architecture4, which achieves competitive results (Simonyan & Zisserman, 2014; Liu & Deng, 2015), to classify the CIFAR-100 dataset. Hyper-parameter tuning. As in all empirical studies involving deep learning, the results are quite sensitive to the values of the hyper-parameters, hence parameter tuning is required. Issues related to how a fair comparison between the different conditions is achieved are discussed in Appendix B. In practice, in order to reduce the computation time of parameter tuning, we varied only the first 2 step length instances in the varied exponential pacing condition. Accordingly, fixed exponential pacing, varied exponential pacing and single step pacing define 3, 5 and 2 new hyper-parameters respectively, referred to henceforth as the pacing hyper-parameters. In the CL framework, the use of a pacing function affects the optimal values of other hyperparameters, in particular, the learning rate. Specifically, since it significantly reduces the size of the data-set from which each mini-batch is sampled, this has the concomitant effect of increasing the effective learning rate. As a result, when using the fixed exponential or the single step pacing functions, the learning rate must be tuned separately for every test condition. 
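For reference, a minimal sketch of the sampler of Algorithm 1 together with the fixed exponential pacing function is given below (NumPy). The per-class balancing mentioned in Section 2.2 is omitted for brevity, and the hyper-parameter values in the usage comment are placeholders rather than the tuned values used in the paper.

import numpy as np

def fixed_exponential_pacing(i, N, starting_percent, increase, step_length):
    # g(i) = min(starting_percent * increase ** floor(i / step_length), 1) * N
    return int(min(starting_percent * increase ** (i // step_length), 1.0) * N)

def curriculum_minibatches(X, y, scores, pacing, num_batches, batch_size, seed=0):
    # Algorithm 1: sort by the scoring function (easiest first), then sample each
    # mini-batch uniformly from the first g(i) examples
    rng = np.random.default_rng(seed)
    order = np.argsort(scores)
    X, y = X[order], y[order]
    batches = []
    for i in range(num_batches):
        size = pacing(i)
        idx = rng.integers(0, size, batch_size)
        batches.append((X[idx], y[idx]))
    return batches

# usage with placeholder hyper-parameter values:
# pacing = lambda i: fixed_exponential_pacing(i, N=len(X), starting_percent=0.04,
#                                             increase=1.9, step_length=100)
# batches = curriculum_minibatches(X, y, scores, pacing, num_batches=5000, batch_size=100)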
3 EMPIRICAL EVALUATION

Methodology. All the code used in this work will be published upon acceptance. We define 5 empirical cases: Case 1 replicates the experimental design described in Weinshall et al. (2018), by using the same dataset and network architecture. The dataset is the "small mammals" super-class of CIFAR-100 (Krizhevsky & Hinton, 2009), containing a subset of 3000 images from CIFAR-100, divided into 5 classes of small mammals (hamster, mouse, rabbit, shrew, squirrel). Each class contains 500 training images and 100 test images. The neural network is a moderate-size hand-crafted convolutional network, whose architecture details can be found in Appendix C. Cases 2 and 3 adopt the same architecture while being applied to the entire CIFAR-10 and CIFAR-100 datasets, where the network's output layer is adjusted to size 10 and 100 respectively. Case 4 uses a public-domain VGG-based architecture [4], which achieves competitive results (Simonyan & Zisserman, 2014; Liu & Deng, 2015), to classify the CIFAR-100 dataset. Case 5 applies the same moderate-size network to a 7-class subset of cats from ImageNet (see Appendix C).

[4] The code for the VGG network is available at https://github.com/geifmany/cifar-vgg.

Hyper-parameter tuning. As in all empirical studies involving deep learning, the results are quite sensitive to the values of the hyper-parameters, hence parameter tuning is required. Issues related to how a fair comparison between the different conditions is achieved are discussed in Appendix B. In practice, in order to reduce the computation time of parameter tuning, we varied only the first 2 step length instances in the varied exponential pacing condition. Accordingly, fixed exponential pacing, varied exponential pacing and single step pacing define 3, 5 and 2 new hyper-parameters respectively, referred to henceforth as the pacing hyper-parameters.

In the CL framework, the use of a pacing function affects the optimal values of other hyper-parameters, in particular the learning rate. Specifically, since the pacing function significantly reduces the size of the data set from which each mini-batch is sampled, it has the concomitant effect of increasing the effective learning rate. As a result, when using the fixed exponential or the single step pacing functions, the learning rate must be tuned separately for every test condition. As traditionally done (e.g., Simonyan & Zisserman, 2014; Szegedy et al., 2016; He et al., 2016), we set an initial learning rate and decrease it exponentially every fixed number of iterations. This method gives rise to 3 learning rate hyper-parameters which require tuning: (i) the initial learning rate; (ii) the factor by which the learning rate is decreased; (iii) the length of each step with a constant learning rate [5]. When varied exponential pacing is used, varying the step length has the opposite concomitant effect on the learning rate, as it determines the number of mini-batch samples in each step. Effective tuning of this parameter can make the additional tuning of parameters affecting the learning rate redundant. In practice, in order to reach the improvement achieved by the fixed exponential pacing, we decrease the corresponding learning rate parameters used in the vanilla condition by some small factor [6].

[5] For more details, see Appendix B.
[6] In the results reported below we used a reduction of 10%, with similar behavior for other nearby choices.

3.1 RESULTS: CL BENEFITS LEARNING

Case 1: A moderate-size network is trained to distinguish 5 classes from CIFAR-100, which are members of the same super-class as defined in the original dataset. Results are shown in Fig. 2. Curriculum learning is clearly and significantly beneficial - learning starts faster, and converges to a better solution. We observe that the performance of CL with a random scoring function is similar to vanilla, indicating that the improvement achieved by CL is mainly due to its beneficial transfer scoring function. In fact, although tuned separately, the learning rate hyper-parameters for both the random and the curriculum test conditions are very similar, confirming that the improved performance is due to the use of an effective transfer scoring function. To check the robustness of these results, we repeated the same empirical evaluation using different super-classes of CIFAR-100, with similar results (see Appendix A). Interestingly, we note that the observed advantage of CL is more significant when the task is more difficult (i.e., lower vanilla test accuracy). The reason may be that in easier problems there is a sufficient number of easy examples in each mini-batch even without CL. Although the results reported here are based on transfer from the Inception network, we are able to obtain the same results using scoring functions based on transfer learning from other large networks, including VGG-16 and Resnet, as shown in Appendix A.

Cases 2 and 3: Similar empirical evaluation as in case 1, using the same moderate-size network to classify two benchmark datasets. The results are shown in Fig. 3. Like before, the test accuracy in the curriculum test condition increases faster and achieves better final performance in both cases, as compared to the vanilla test condition. The beneficial effect of CL is larger when classifying the CIFAR-100 dataset, which is the harder dataset.

Case 4: Similar empirical evaluation as in case 1, using a competitive public-domain architecture.
Specifically, we use the Inception-based transfer scoring function to train a VGG-based network (Liu & Deng, 2015) to classify the CIFAR-100 dataset. Unlike the previous cases, here we use the varied exponential pacing function with a slightly reduced learning rate, as it has the fewest hyper-parameters to tune, an important factor when training such a big network. Results are shown in Fig. 4a (with no data augmentation), showing the same qualitative results as in the previous cases; CL gives a smaller benefit, but the benefit is still significant.

Case 5: Similar empirical evaluation as in case 1, using the same moderate-size network to distinguish 7 classes of cats from the ImageNet dataset [7]. The results are shown in Fig. 5. Again, the test accuracy in the curriculum test condition increases faster and achieves better final performance, as compared to the vanilla test condition.

[7] For more details, see Appendix C.

3.2 SELF-TAUGHT CURRICULUM LEARNING VS. SELF-PACED LEARNING

Curriculum learning is closely related to the idea of Self-Paced Learning (SPL), an iterative procedure where higher weights are given to training examples that have lower cost with respect to the current hypothesis. In fact, SPL may appear similar, or closely related, to the idea of self-taught learning. The main difference between the methods is that self-paced learning determines the scoring function according to the loss with respect to the current hypothesis (or network), while the self-taught scoring function is based on the loss with respect to the final hypothesis of a trained network. Accordingly, we define the self-paced scoring function, where each point is scored by its loss with respect to the current network. Note that when using CL to optimize the linear regression loss (see introduction), self-taught curriculum and self-paced learning are discordant.

To compare the self-taught scoring function and the self-paced scoring function, we investigate their effect on CL in the context of empirical case 1. Results are shown in Fig. 4b. As expected, we see that CL using the self-taught scoring function improves the test accuracy throughout the entire learning session. On the other hand, CL training using the self-paced scoring function decreases the test accuracy throughout. This decrease is more prominent at the beginning of learning, where most of the beneficial effects of the curriculum are observed, suggesting that the self-paced scoring function can significantly delay learning.
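To make the distinction concrete, both scores can be computed from the same confidence measure; the only difference is which network produces it. The sketch below assumes a Keras-style model whose predict method returns softmax probabilities, and uses one minus the probability of the true class as the difficulty; the exact confidence measure used in the paper may differ.

```python
import numpy as np

def difficulty_scores(model, X, y):
    """Higher score = harder: one minus the probability the model
    assigns to the true class of each training example."""
    probs = model.predict(X)                     # shape (n_examples, n_classes)
    true_prob = probs[np.arange(len(y)), y]
    return 1.0 - true_prob

# Self-taught scoring: call difficulty_scores once, with the FINAL hypothesis of a
# fully trained vanilla network, and keep the scores fixed for the whole session.
# Self-paced scoring: recompute difficulty_scores(current_model, X, y) with the
# CURRENT hypothesis every time the subset of "easy" examples is re-selected.
```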
3.3 THE SCORING FUNCTION: ANALYSIS AND EMPIRICAL EVALUATION

In order to analyze the effects of transfer-based scoring functions, we turn to analyze the gradients of the network's weights w.r.t. the empirical loss. We evaluate the gradients using a pre-trained vanilla network in the context of case 1. First, for each method and each scoring function, we collect the subset of points used to sample the first mini-batch according to the pacing function g_Fθ(1) [8]. For comparison, we also consider the set of all training points, which are used to compute the exact gradient of the empirical loss in batch learning using GD. We then compute the corresponding set of gradients for the training points in each of these subsets, treating each layer's parameters as a single vector, and subsequently estimate the gradients' mean and total variance [9], used to evaluate the coherence of the gradients in the first mini-batch of each scoring function. The Euclidean distance between the mean gradients in the different conditions is used to estimate the similarity between the different scoring functions, based on the average preferred gradient.

[8] In this experiment g_Fθ(1) was set such that it corresponds to 10% of the data, or 250 examples. This number was set arbitrarily, with similar qualitative results obtained for a large range of other choices.
[9] As customary, total variance denotes the trace of the covariance matrix.

We can now compare the sets of gradients thus defined using three transfer scoring functions, which differ in the parent network used for scoring the points: 'VGG-16', 'Resnet', and 'Inception'. We include in the comparison the gradients of the random scoring function, denoted 'Random', and the gradients of the whole batch of training data, denoted 'All'. Results are shown in Fig. 6. We see in Fig. 6a - blue bars - that the average gradient vectors computed based on the 3 transfer scoring functions are quite similar to each other. This suggests that they are pointing towards nearby local minima in parameter space. We also see - green bar - that the average gradient vector computed using a random subset of examples resembles the exact empirical gradient computed using all the training data. This suggests that a random subset provides a reasonable estimate of the true empirical gradient. The picture changes completely when we compute - red bars - the distance between the average gradient corresponding to one of the 3 transfer scoring functions, and the average random gradient or the empirical gradient. Now the distances are rather large, which suggests that CL by transfer indeed steers the weights towards different local minima in parameter space as compared to vanilla training. We see in Fig. 6b that the total variance for the 3 transfer scoring functions is much smaller than the total variance of a random subset of the whole training set. This intuitive result demonstrates the difference between training with easier examples and training with random examples, and may - at least partially - explain the need for a different learning rate when training with easier examples.
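The statistics used in this analysis - the mean gradient, the Euclidean distance between mean gradients, and the total variance as the trace of the covariance matrix - are straightforward to compute once the per-example gradients are flattened into vectors. A NumPy sketch follows; obtaining the per-example gradient matrix itself depends on the training framework and is omitted here.

```python
import numpy as np

def mean_gradient(grads):
    """grads: array of shape (n_examples, n_params), one flattened gradient per example."""
    return grads.mean(axis=0)

def total_variance(grads):
    """Trace of the covariance matrix, i.e. the sum of per-coordinate variances."""
    return grads.var(axis=0, ddof=1).sum()

def mean_gradient_distance(grads_a, grads_b):
    """Euclidean distance between the average gradients of two conditions."""
    return np.linalg.norm(mean_gradient(grads_a) - mean_gradient(grads_b))
```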
3.4 ALTERNATIVE PACING FUNCTIONS

Single step pacing. Curriculum learning can be costly, and it affects the entire learning protocol via the pacing function. At the same time, we note that the main effect of the procedure takes place at the beginning of training. This empirical observation may be due, in part, to the fact that the proposed scoring function f is based on transfer from another network trained on a different dataset, and thus only approximates the unknown ideal scoring function. Possibly, since the scoring function is based on one local minimum in a complex optimization landscape which contains many local minima, the score given by f is more reliable for low-scoring (easy) examples than for high-scoring (difficult) examples, which may be in the vicinity of a different local minimum. Once again we evaluate case 1, using the transfer scoring function and the single step pacing function. We see an improvement in the test accuracy in the curriculum test condition which resembles the improvement achieved using the exponential pacing. Results are shown in Fig. 7a. It is important to note that this pacing function ignores most of the prior knowledge provided by the scoring function, as it only uses a small percentage of the easiest examples, and yet it achieves competitive results. Thus we see that in our empirical setup, most of the power of CL lies at the beginning of training.

Varied exponential pacing. This pacing function allows us to run a CL procedure without the need for further tuning of the learning rate. Once again we evaluate case 1, fixing the learning rate parameters to be the same as in the vanilla test condition, while tuning the remaining hyper-parameters as described in Section 2.3 using a grid search with cross-validation. We see an improvement in the accuracy throughout the entire learning session, although smaller than the one observed with fixed exponential pacing. However, decreasing the learning rate of the vanilla condition by a small fraction and then tuning the curriculum parameters achieves results which are very similar to the fixed exponential pacing, suggesting that this method can almost completely nullify the indirect manipulation of the learning rate in the fixed exponential pacing function. These results are shown in Fig. 7b.

3.5 SUMMARY OF RESULTS

Fig. 8 summarizes the main results presented in the paper, including: curriculum with an Inception-based scoring function for (i) fixed exponential pacing (denoted curriculum), (ii) varied exponential pacing, and (iii) single step pacing. It also shows curriculum with fixed exponential pacing for (iv) self-paced scoring, and (v) self-taught scoring. In addition, we plot the control conditions of vanilla, anti-curriculum, and random. In Fig. 8a we see the learning curves of the above conditions, with inset bars that depict the final accuracy of each condition, and error bars that represent the standard error over 50 repetitions. All the curriculum conditions seem to improve the learning accuracy throughout the entire learning session while converging to similar performance, excluding the self-paced scoring function, which impairs learning. The learning curves shown in Fig. 8a were obtained by searching for the parameters that maximize the final accuracy. This procedure only takes into account a few data points, which makes it less robust. In Fig. 8b we plot the bars of the final accuracy of the learning curves obtained by searching for the parameters that maximize the Area Under the Learning Curve (AUC). AUC is positively correlated with high final performance while being more robust. Comparing the different conditions using this maximization criterion gives similar qualitative results - the performance in all the curriculum conditions is still significantly higher than in the control conditions. However, now the curriculum based on the Inception-based scoring function with fixed exponential pacing achieves performance that is significantly higher than the other curriculum methods, indicating that it is more robust.
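The AUC criterion used for the hyper-parameter search in Fig. 8b can be computed with a one-liner. A minimal sketch, assuming the test accuracy is evaluated at evenly spaced points during training:

```python
import numpy as np

def learning_curve_auc(test_accuracies):
    """Area under the learning curve; a more robust model-selection criterion
    than the final accuracy, which depends on only a few data points."""
    return np.trapz(test_accuracies)
```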
4 SUMMARY AND DISCUSSION

Above we formally defined a curriculum learning algorithm, decomposing it into two separate problems: (i) How to determine the difficulty of the training data (via the scoring function)? (ii) At which pace should the learner be shown the more advanced data (via the pacing function)? We defined a scoring function based on transfer learning from a large network, showing that it can both speed up the rate of learning and improve the final accuracy. This was shown using a number of test cases, including particularly challenging subsets of the CIFAR-100 and ImageNet datasets, and the entire CIFAR-10 and CIFAR-100 datasets. We used both a relatively small hand-crafted CNN, and a large public-domain competitive VGG-based network. We observed that most of the beneficial effect of CL was achieved at the beginning of learning, and that the benefits were more significant when using harder datasets.

During our experiments, we saw that the quality of the teacher network also impacts curriculum learning by transfer. In order for the teacher network to differentiate between easier and harder examples, it should have reasonable generalization accuracy. A teacher with low performance will classify all points as hard, while a "too good" teacher will classify all points as easy, resulting in a less efficient curriculum. Based on these observations, we investigated two alternative pacing functions that achieved CL with less overhead as compared to training without a curriculum.

In addition to the transfer scoring function, we introduced the self-taught scoring function. This function does not rely on transfer from a large network, and can therefore, presumably, better scale up to larger datasets. Self-taught scoring is closely related to Self-Paced Learning, yet it boils down to essentially the opposite scoring heuristic, since the self-taught scoring function relies on the final hypothesis of a pre-trained network while SPL relies on the current hypothesis. In agreement with the theory reviewed in the introduction, we showed that the self-paced scoring function impaired learning, while the self-taught scoring function enhanced it. In other words, when choosing easier points to guide the learning, it is important to measure difficulty with respect to the final hypothesis, not the current hypothesis.

A ADDITIONAL EMPIRICAL RESULTS

CL with other CIFAR-100 super-classes. In Section 3 we present results when learning to discriminate the "small mammals" super-class of CIFAR-100. Similar results can be obtained for other super-classes of CIFAR-100, including the super-classes of "people", "insects" and "aquatic mammals". CL trained on these different super-classes shows the same qualitative results. We note once again that CL is more effective in the harder tasks, namely, the super-classes containing classes that are harder to discriminate (as seen by lower vanilla accuracy). As an example, Fig. 9 shows results using the "aquatic mammals" dataset, which greatly resemble the results we have seen when discriminating the "small mammals" dataset (cf. Fig. 8).

Transfer-based scoring function. In the experiments described in Section 3, when using the transfer scoring function defined in Section 2.3, we use the pre-trained Inception network available from https://github.com/Hvass-Labs/TensorFlow-Tutorials. We first normalized each training image to the range [−1, 1], resized it, and ran it through the Inception network. We then used the penultimate layer's activations as features for each training image, resulting in 2048 features per image. Using these features, we trained a Radial Basis Function (RBF) kernel SVM (Scholkopf et al., 1997) and used its confidence score to determine the difficulty of each image. The confidence score of the SVM was provided by sklearn.svm.libsvm.predict_proba from Python's Sklearn library and is based on cross-validation. Choosing Inception as the teacher and RBF SVM as the classifier was a reasonable arbitrary choice - the same qualitative results are obtained when using other large networks trained on ImageNet as teachers, and other classifiers to establish a confidence score.
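The transfer scoring pipeline just described can be sketched as follows. We assume the 2048-dimensional penultimate-layer features have already been extracted from the teacher network; note that scikit-learn's SVC(probability=True) calibrates probabilities with internal cross-validation, which approximates, but is not identical to, the cross-validated confidence described above.

```python
import numpy as np
from sklearn.svm import SVC

def transfer_difficulty_scores(features, labels):
    """features: (n_images, 2048) penultimate-layer activations of the teacher network.
    Returns one difficulty score per image: 1 - SVM confidence in the true class."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    probs = clf.predict_proba(features)           # columns ordered by clf.classes_
    col = np.searchsorted(clf.classes_, labels)   # column index of each true label
    return 1.0 - probs[np.arange(len(labels)), col]
```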
Specifically, we repeated the experiments with a transfer scoring function based on the pre-trained VGG-16 and Resnet networks, which are also trained on ImageNet. Results for the curriculum method using the transfer scoring function and the fixed exponential pacing function are shown in Fig. 10a, demonstrating the same qualitative results. Similarly, we used a linear SVM instead of the RBF kernel SVM, with similar results, as shown in Fig. 10b. We note that the standard error (STE) bars are relatively large for the control conditions described above because we only repeated these conditions 5 times each, instead of 50 times as in the main experiments.

B EXTENDED DISCUSSION

Self-taught bootstrapping. In principle, the self-taught scoring function can be used repeatedly to boost the performance of the network indefinitely: after training the network using a curriculum, we can use its confidence score to define a new scoring function and retrain the network from scratch. However, scoring functions created by repeating this procedure tend to accumulate errors: once an example is misclassified as being easy, it will be shown more often in subsequent iterations, making it more likely to be considered easy. In practice, we did not observe any benefit from repeated bootstrapping, and even observed impairment after a large number of repetitions.

FAIR COMPARISON IN PARAMETER TUNING

When using the moderate-size hand-crafted network (cases 1, 2 and 3), learning rate tuning is done for the vanilla case as well. In these cases, for the curriculum, anti-curriculum and random test conditions, we perform a coarse grid search over the pacing hyper-parameters as well as the learning rate hyper-parameters, with an identical range of values for all conditions. For the vanilla condition, there are no pacing hyper-parameters. Therefore, we expand and refine the range of learning rate hyper-parameters in the grid search, such that the total number of parameter combinations for each condition is approximately the same. When using a public-domain competitive network (case 4), the published learning rate scheduling is used. Therefore we employ the varied exponential pacing function without additional learning rate tuning and perform a coarse grid search over the pacing hyper-parameters. To ensure a fair comparison, we repeat the experiment with the vanilla condition the same number of times as the total number of experiments done during the grid search, choosing the best results. The exact range of values used for each parameter is given below in Appendix C. All prototypical results were confirmed with cross-validation, showing similar qualitative behavior as when using the coarse grid search.

LEARNING RATE TUNING

To control for the possibility that the results we report are an artifact of the way the learning rate is scheduled, which is indeed the method in common use, we test other learning rate scheduling methods, and specifically the method proposed by Smith (2017), which dynamically changes the learning rate, increasing and decreasing it periodically in a cyclic manner. We have implemented and tested this method using cases 2 and 3. The final results of both the vanilla and curriculum conditions have improved, suggesting that this method is superior to the naïve exponential decrease with grid search. Still, the main qualitative advantage of the CL algorithm holds here as well - CL improves the training accuracy during all stages of learning. As before, the improvement is more significant when the training dataset is harder. Results for case 3 (CIFAR-100) are shown in Fig. 11.
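For reference, the two learning-rate schedules discussed in this work can be written compactly. Both are sketches with our own parameter names: an exponential step decay defined by the three hyper-parameters listed in Section 3, and a triangular cyclical schedule in the spirit of Smith (2017); the exact cyclical variant used in the experiments may differ.

```python
def exponential_step_lr(iteration, initial_lr, decrease_factor, lr_step_length):
    """Learning rate divided by `decrease_factor` every `lr_step_length` iterations."""
    return initial_lr / (decrease_factor ** (iteration // lr_step_length))

def triangular_cyclic_lr(iteration, base_lr, max_lr, half_cycle):
    """Cyclical schedule: the rate ramps linearly from base_lr to max_lr and back,
    completing one full cycle every 2 * half_cycle iterations."""
    pos = iteration % (2 * half_cycle)
    frac = pos / half_cycle if pos < half_cycle else 2.0 - pos / half_cycle
    return base_lr + (max_lr - base_lr) * frac
```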
C METHODOLOGY, ADDITIONAL DETAILS

Exponential Pacing. Throughout this work, we use pacing functions that increase the data size in each step exponentially. This is done in line with the customary practice of changing the learning rate exponentially.

Architecture Details. The moderate-size neural network we used for cases 1, 2 and 3 is a convolutional neural network containing 8 convolutional layers with 32, 32, 64, 64, 128, 128, 256, 256 filters respectively. The first 6 layers have filters of size 3 × 3, and the last 2 layers have filters of size 2 × 2. After every second layer there is a 2 × 2 max-pooling layer and a 0.25 dropout layer. After the convolutional layers, the units are flattened, and there is a fully-connected layer with 512 units followed by a 0.5 dropout layer. The batch size was 100. The output layer is a fully-connected layer with 5 output units, followed by a softmax layer. We trained the network using the SGD optimizer, with cross-entropy loss. All the code will be published upon acceptance.

Grid-search hyper-parameters. When using grid search, identical ranges of values are used for the curriculum, anti-curriculum and random test conditions. Since vanilla contains fewer parameters to tune - as it has no pacing parameters - we used a finer and broader search range. The range of parameters was similar between different scoring functions and pacing functions and was determined by the architecture and dataset. The range of parameters for case 1: (i) initial learning rate: 0.1 ∼ 0.01; (ii) learning rate exponential decrease factor: 2 ∼ 1.1; (iii) learning rate step size: 200 ∼ 800; (iv) step size: 20 ∼ 400, for both varied and fixed pacing; (v) increase: 1.1 ∼ 3; (vi) starting percent: 4% ∼ 15% (note that 4% is roughly the size of a single mini-batch). For cases 2 and 3 the ranges are wider, since the datasets are larger: (i) initial learning rate: 0.2 ∼ 0.05; (ii) learning rate exponential decrease factor: 2 ∼ 1.1; (iii) learning rate step size: 200 ∼ 800; (iv) step size: 100 ∼ 2000, for both varied and fixed pacing; (v) increase: 1.1 ∼ 3; (vi) starting percent: 0.4% ∼ 15%. For case 4, the learning rate parameters are left as publicly determined, while the initial learning rate has been decreased by 10% from 0.1 to 0.09. The pacing parameter ranges are: (i) step size: 50 ∼ 2500, for both varied and fixed pacing; (ii) increase: 1.1 ∼ 2; (iii) starting percent: 2% ∼ 20%.

ImageNet Dataset Details. In case 5, we used a subset of the ImageNet dataset ILSVRC 2012. We used 7 classes of cats, which were obtained by picking all the hyponyms of the cat synset that appeared in the dataset. The 7 cat classes were: 'Egyptian cat', 'Persian cat', 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor', 'tiger cat', 'Siamese cat, Siamese', 'tabby, tabby cat', 'lynx, catamount'. All images were resized to 56 × 56 for faster performance. Each class contained 1300 training images and 50 test images.
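The architecture described in Appendix C can be reproduced almost line by line in Keras. The filter counts, kernel sizes, pooling, dropout rates, hidden width, optimizer and loss below follow the description above; the ReLU activations and 'same' padding are our own assumptions, since the text does not specify them.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_moderate_cnn(input_shape=(32, 32, 3), num_classes=5):
    """Sketch of the moderate-size CNN used for cases 1, 2 and 3 (5/10/100 outputs)."""
    model = keras.Sequential()
    model.add(keras.Input(shape=input_shape))
    filters = [32, 32, 64, 64, 128, 128, 256, 256]
    for i, f in enumerate(filters):
        kernel = 3 if i < 6 else 2          # first 6 layers 3x3, last 2 layers 2x2
        model.add(layers.Conv2D(f, kernel, padding="same", activation="relu"))
        if i % 2 == 1:                      # after every second conv layer
            model.add(layers.MaxPooling2D(pool_size=2))
            model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=keras.optimizers.SGD(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Training with model.fit(..., batch_size=100) would then match the batch size reported above; the learning rate schedule and its grid-search ranges are listed in the preceding paragraphs.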
1. What is the reviewer's overall assessment of the paper's quality and significance? 2. What are the strengths and weaknesses of the paper, according to the reviewer? 3. Does the reviewer have any concerns regarding the novelty of the proposed method? 4. How does the reviewer evaluate the practical usefulness of the proposed approach? 5. What does the reviewer suggest to enhance the significance of the work?
Review
Review In my opinion this paper is generally of good quality and clarity, modest originality and significance. Strengths: - The experiments are very thorough. Hyperparameters were honestly optimized. The method does show some modest improvements in the experiments provided by the authors. - The analysis of the results is quite insightful. Weaknesses: - The experiments are done on CIFAR-10, CIFAR-100 and subsets of CIFAR-100. These were good data sets a few years ago and still are good data sets to test the code and sanity of the idea, but concluding anything strong based on the results obtained with them is not a good idea. - The authors claim the formalization of the problem to be one of their contributions. It is difficult for me to accept it. The formalization that the authors proposed is basically the definition of curriculum learning. There is no novelty about this. - The proposed method introduces a lot of complexity for very small gains. While these results are scientifically interesting, I don't expect it to be of practical use. - The results in Figure 3 are very far from the state of the art. I realize that they were obtained with a simple network, however, showing improvements in this regime is not that convincing. Even the results with the VGG network are very far from the best available models. - I suggest checking the papers citing Bengio et al. (2009) to find lots of closely related papers. In summary, it is not a bad paper, but the experimental results are not sufficient to conclude that much. Experiments with ImageNet or some other large data set would be advisable to increase significance of this work.
ICLR
Title In Your Pace: Learning the Right Example at the Right Time Abstract Training neural networks is traditionally done by sequentially providing random mini-batches sampled uniformly from the entire dataset. In our work, we show that sampling mini-batches non-uniformly can both enhance the speed of learning and improve the final accuracy of the trained network. Specifically, we decompose the problem using the principles of curriculum learning: first, we sort the data by some difficulty measure; second, we sample mini-batches with a gradually increasing level of difficulty. We focus on CNNs trained on image recognition. Initially, we define the difficulty of a training image using transfer learning from some competitive ”teacher” network trained on the Imagenet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and the CIFAR-100 datasets. We then suggest a bootstrap alternative to evaluate the difficulty of points using the same network without relying on a ”teacher” network, thus increasing the applicability of our suggested method. We compare this approach to a related version of Self-Paced Learning, showing that our method benefits learning while SPL impairs it. 1 INTRODUCTION Teaching complex tasks to humans and animals can be difficult. Often such tasks cannot be grasped by the learners, or ”students”, immediately, and need to be broken down into simpler problems. Therefore, in order to teach complex tasks, teachers are often required to create a curriculum. The curriculum imposes some order on the learning task; it introduces different concepts at different times, hence exploiting previously learned concepts in order to ease the abstraction of new ones. Imposing a curriculum in order to speed up learning is widely used in the context of human learning, and also routinely used in animal training (Skinner, 1958; Pavlov, 2010; Krueger & Dayan, 2009). In many traditional machine learning approaches, known as supervised learning, a target function is estimated by using a set of labeled examples. The examples can be thought of as given by a teacher while the learning algorithm can be thought of as a student. The field of curriculum learning (CL), which is motivated by the idea of a curriculum in human learning, attempts at imposing some structure on the labeled data. Such structure essentially relies on a notion of ”easy” and ”hard” examples and utilizes this distinction in order to teach the student how to generalize easier concepts before harder ones. Empirically, the use of CL has been shown to accelerate and improve the learning process (e.g. Selfridge et al., 1985; Bengio et al., 2009). In this work, we aim at extending the understanding of CL in the context of deep neural learning. More specifically, we wish to understand to what extent curriculum can improve the accuracy and convergence rate of deep neural networks. The main challenge in making CL practical is, arguably, finding a way to construct a good curriculum for a newly unseen database. In order to do so, we investigate two ideas motivated by transfer learning and bootstrapping respectively. When establishing a curriculum for human students, teachers need to arrange the material in a way that will present simple concepts before harder ones, so that the abstraction of simple ideas can help the student grasp more complex ones (Hunkins & Ornstein, 2016). However, sorting the concepts by difficulty is not sufficient. 
The teacher also needs to attend to the pace by which the material is presented - going over the simple ideas too fast may lead to more confusion than benefit, while moving along too slowly may lead to boredom and unproductive learning (Hunkins & Ornstein, 2016). These principles can also be beneficial when the learner is a neural network. Specifically, formalizing and generalizing what was implicitly done in Weinshall et al. (2018), we decompose the problem of CL and define two separate - but closely related - functions. The first function, termed the scoring function, determines the "hardness" or "complexity" of each example in the data. The scoring function enables us to sort the data by concept difficulty, allowing us to present to the network the easier (and presumably simpler) examples first. The underlying assumption is that generalization from the easier examples can simplify the learning of harder examples in the data. The second function, termed the pacing function, determines the pace by which data is presented to the network. The pace depends on both the data itself and the learner. In our work, we analyze several scoring and pacing functions, investigating their inter-dependency and presenting ways to combine them in order to achieve faster learning and better generalization.

The main challenge is, arguably, how to obtain an effective scoring function without additional human supervision. To this end we investigate two approaches, each providing a different estimator for the ideal scoring function: (i) Knowledge transfer. The first scoring function is based on transfer learning from networks trained on the large and versatile ImageNet dataset (Deng et al., 2009; Weinshall et al., 2018). (ii) Bootstrapping. The second scoring function is based on self-tutoring - we train the network once without a curriculum, then use the resulting classifier to rank the training data in order to train the same network again from scratch. Both scoring functions are shown in Section 3 to speed up learning and improve the generalization of neural networks.

In many approaches, including Self-Paced Learning (SPL), Active Learning and hard example mining (Kumar et al., 2010; Schein & Ungar, 2007; Shrivastava et al., 2016), the mini-batches presented to the learner are sampled dynamically, based at each time point on the current hypothesis of the model. While in some contexts these approaches are beneficial (Chang et al., 2017; Zhang et al., 2017), they are based on the knowledge of the student at a specific time point. While a student can report what is easy or hard for it right now, it might be oblivious to some aspects of the bigger problem at hand, ignoring concepts which, if learned early, could prove helpful at a later time. In the context of the linear regression loss, Weinshall et al. (2018) showed that such a distinction indeed holds: while it is beneficial to prefer points with lower loss with respect to the target hypothesis, as suggested by CL, it is on the other hand beneficial to prefer points with higher loss with respect to the current hypothesis, in agreement with hard data mining (Shrivastava et al., 2016) and boosting, and contrary to SPL. To examine this somewhat confusing point, we have implemented a simplified version of the procedure described above, where the scoring function is based on the loss of the training points with respect to the current hypothesis, both in ascending and descending orders.
These variants of SPL and hard example mining respectively learn slower and reach lower final accuracy when compared to self-taught, throughout all of our experiments. We have also investigated three pacing functions. (i) Fixed exponential pacing presents the learner initially with a small percentage of the data, increasing the amount exponentially every fixed number of learning iterations. (ii) Varied exponential pacing allows the number of iterations in each step to vary as well. (iii) Single-step pacing is a simplified version of the first protocol, where mini-batches are initially sampled from a fixed fraction of the data that includes the easiest examples, after which mini-batches are sampled from the whole data as usual. We show that the three functions have comparable performance, and analyze the complexity of their use. Previous work. While remaining in the fringes of machine learning, there has been some recent work on CL and its applications. Bengio et al. (2009) introduced the idea of CL for machine learning algorithms, showing simple examples where CL benefits learning. Weinshall et al. (2018) proved that CL boosts the speed of convergence in the convex case of linear regression. Otherwise most prior art is empirical, and almost always ranking by difficulty (i.e., the scoring function defined above) is provided by the user based on prior knowledge (in other words, supervision) as in Jesson et al. (2017). In a closely related line of works, a pair of teacher and student networks are trained simultaneously, where mini-batches for the student network are sampled dynamically by the teacher, based on the student’s output in each time point (Jiang et al., 2018; Fan et al., 2018). As opposed to our method, these works base the curriculum on the current hypothesis of the students, and achieve better performance for corrupted (Jiang et al., 2018) or smaller (Fan et al., 2018) datasets, instead of improved generalization on the original dataset. Our contribution, with respect to this previous work, is to provide a formal definition of CL algorithms by way of 2 functions for scoring and pacing, analyze and comparatively evaluate these functions, and show how CL can benefit learning in CNNs even without human supervision about the ranking of examples by difficulty and in a problem-free manner. 2 CURRICULUM LEARNING Curriculum learning deals with the question of how to use prior knowledge about the difficulty of the training examples, in order to sample each mini-batch non-uniformly and thus boost the rate of learning and the accuracy of the final classifier. The paradigm of CL is based on the intuition that it helps the learning process when the learner is presented with simple concepts first. 2.1 NOTATIONS AND DEFINITIONS Let X = {(xi, yi)}Ni=1 denote the data, where xi ∈ Rd denotes a single data point and yi ∈ [K] its corresponding label. Let Fθ : Rd → [K] denote the target classifier (or learner), and mini-batch B ⊆ X denote a subset of X. In the most common training procedure, which is a robust variant of Stochastic Gradient Descent (SGD), Fθ is trained sequentially when given as input a sequence of mini-batches [B1, ...,BM ] (Shalev-Shwartz & Ben-David, 2014). The common approach – denoted vanilla in the following sections – samples each mini-batch Bi uniformly from X. Both in the common approach and in our work, the size of each mini-batch remains constant, to be considered as a hyper-parameter defining the learner. 
We measure the difficulty of point xi by its minimal loss with respect to the set of optimal hypotheses under consideration. We define a scoring function (or a ”difficulty” function) to be any function f : X → R, and say that example (xi, yi) is more ”difficult” than example (xj , yj) if f (xi, yi) > f (xj , yj). Choosing f is the main challenge of CL, as it encodes the prior knowledge of the teacher. We define a pacing function to be a function gFθ : [M ] → [N ], which may depend on the learner Fθ. The pacing function is used to determine a sequence of subsets X ′ 1, ...,X ′ M ⊆ X, of size |X′i| = gFθ (i), from which {Bi}Mi=1 are sampled uniformly. In CL the i-th subset X ′ i includes the first gFθ (i) elements of the training data when sorted by the scoring function f in an ascending order. Although the choice of the subset can be encoded in the distribution from which each Bi is sampled, adding a pacing function simplifies the exposition and analysis. 2.2 CURRICULUM LEARNING METHOD Together, each scoring function f and pacing function gFθ define a curriculum. Any learning algorithm which uses the ensuing sequence [Bi]Mi=1 is a curriculum learning algorithm. We note that in order to avoid bias when picking a subset of the N examples for some N , it is important to keep the sample balanced with the same number of examples from each class as in the training set. Pseudo-code for the CL algorithm is given in Alg. 1. In order to narrow down the specific effects of using a scoring function based on ascending difficulty level, we examine two control conditions. Specifically, we define 2 additional scoring functions and corresponding algorithms: (i) The anti-curriculum algorithm uses the scoring function f ′ = −f , where the training examples are sorted in a descending order; that results in presenting the harder examples before the easier ones. (ii) The random-curriculum algorithm (henceforth denoted random) uses a scoring function where the training examples are randomly sorted. 2.3 SCORING AND PACING FUNCTIONS We evaluate two scoring functions: (i) Transfer scoring function, computed as follows: First, take the pre-trained Inception network (Szegedy et al., 2016) and run each training image through it, using the activation levels of its penultimate layer as a feature vector (Caruana, 1995). Second, use these features to train a classifier and use its confidence score as the scoring function for each image1. (ii) Self-taught scoring function, computed as follows: First, train the network using uniformly sampled mini-batches (the vanilla method). Second, compute this network’s confidence score for each image to define a scoring function2. Although the pacing function can be any function gFθ : [M ]→ [N ], we limit ourselves to monotonic increasing functions so that the likelihood of the easier examples can only decrease. For simplicity, gFθ is limited to staircase functions. Thus each pacing function is defined by the following hyper-parameters, where step denotes all the learning iterations during which gFθ remains constant: step length - the number of iterations in each step; increase - an exponential factor used to increase the size of the data used for sampling mini-batches in each step; starting percent - the fraction of the data in the initial step. An illustration of these parameters can be seen in Fig. 1. We evaluate three pacing functions: (i) Fixed exponential pacing has a fixed step length, and exponentially increasing data size in each step. 
Formally, the pacing function is given by: gFθ (i) = min ( starting percent · increaseb i step length c, 1 ) ·N 1Similar results can be obtained when using different confidence scores (e.g, the classifier’s margin), different classifiers (e.g, linear SVM), and different teacher networks (e.g, VGG-16 (Simonyan & Zisserman, 2014), Resnet (He et al., 2016)). For more details, see Appendix A. 2Theoretically we can use this method repeatedly, as discussed in Appendix B. Algorithm 1: Curriculum learning method Input : pacing function gFθ , scoring function f , labeled data X. Output: sequence of mini-batches [ B′1, ...,B ′ M ] . 1 sort X according to f , in ascending order; 2 result← []; 3 for i = 1, ...,M do 4 size← gFθ (i); 5 X′i ← X [1, ..., size]; 6 uniformly sample B′i from X ′ ; 7 append B′i to result; 8 end 9 return result; (ii) Varied exponential pacing, which allows step length to vary as well3: gFθ (i) = min ( starting percent · increase ∑#steps k=1 1[i>step lengthk], 1 ) ·N The total number of steps can be calculated from starting percent and increase: #step = d− logincrease(starting percent)e (iii) Single step pacing, which is a simplification of the staircase function into a step function: gFθ (i) = starting percent 1[i<step length] ·N This function has only 2 hyper-parameters, hence it is simpler to use than the previous two. 3 EMPIRICAL EVALUATION Methodology. All the code used in this work will be published upon acceptance. We define 4 empirical cases: Case 1 replicates the experimental design described in (Weinshall et al., 2018), by using the same dataset and network architecture. The dataset is the ”small mammals” superclass of CIFAR-100 (Krizhevsky & Hinton, 2009), containing a subset of 3000 images from CIFAR100, divided into 5 classes of small mammals (hamster, mouse, rabbit, shrew, squirrel). Each class contains 500 training images and 100 test images. The neural network is a moderate size handcrafted convolutional network, whose architecture details can be found in Appendix C. Cases 2 and 3 adopt the same architecture used above while being applied to the entire CIFAR-10 and CIFAR100 datasets, where the network’s output layer is adjusted to size 10 and 100 respectively. Case 4 uses a public-domain VGG-based architecture4, which achieves competitive results (Simonyan & Zisserman, 2014; Liu & Deng, 2015), to classify the CIFAR-100 dataset. Hyper-parameter tuning. As in all empirical studies involving deep learning, the results are quite sensitive to the values of the hyper-parameters, hence parameter tuning is required. Issues related to how a fair comparison between the different conditions is achieved are discussed in Appendix B. In practice, in order to reduce the computation time of parameter tuning, we varied only the first 2 step length instances in the varied exponential pacing condition. Accordingly, fixed exponential pacing, varied exponential pacing and single step pacing define 3, 5 and 2 new hyper-parameters respectively, referred to henceforth as the pacing hyper-parameters. In the CL framework, the use of a pacing function affects the optimal values of other hyperparameters, in particular, the learning rate. Specifically, since it significantly reduces the size of the data-set from which each mini-batch is sampled, this has the concomitant effect of increasing the effective learning rate. As a result, when using the fixed exponential or the single step pacing functions, the learning rate must be tuned separately for every test condition. 
As traditionally done (e.g Simonyan & Zisserman, 2014; Szegedy et al., 2016; He et al., 2016), we set an initial learning rate and decrease it exponentially every fixed number of iterations. This method gives rise to 3 learning rate hyper-parameters which require tuning: (i) the initial learning rate; (ii) the factor by which the learning rate is decreased; (iii) the length of each step with constant learning rate5. When varied exponential pacing is used, varying step length has the opposite concomitant effect on the learning rate, as it determines the number of mini-batch samples in each step. Effective tuning of this parameter can make the additional tuning of parameters affecting the learning rate redundant. In practice, in order to reach the improvement achieved by the fixed exponential pacing, we decrease the corresponding learning rate parameters used in the vanilla condition by some small factor6. 3.1 RESULTS: CL BENEFITS LEARNING Case 1: A moderate size network is trained to distinguish 5 classes from CIFAR-100, which are members of the same super-class as defined in the original dataset. Results are shown in Fig. 2. 3In practice, to avoid an unfeasible need to tune too many hyper-parameters, we vary only the first two step length instances and fix the rest. As shown later on, this is reasonable as most of the power of the curriculum lies in the first few steps. 4The code for the VGG network is available at https://github.com/geifmany/cifar-vgg. 5For more details, see Appendix B. 6In the results reported below we used a reduction of 10%, with similar behavior for other nearby choices. Curriculum learning is clearly and significantly beneficial - learning starts faster, and converges to a better solution. We observe that the performance of CL with a random scoring function is similar to vanilla, indicating that the main reason for the improvement achieved by CL is due to its beneficial transfer scoring function. In fact, although tuned separately, the learning rate hyper-parameters for both the random and the curriculum test conditions are very similar, confirming that the improved performance is due to the use of an effective transfer scoring function. To check the robustness of these results, we repeated the same empirical evaluation using different super-classes of CIFAR-100, with similar results (see Appendix A). Interestingly, we note that the observed advantage of CL is more significant when the task is more difficult (i.e. lower vanilla test accuracy). The reason may be that in easier problems there is a sufficient number of easy examples in each mini-batch even without CL. Although the results reported here are based on transfer from the Inception network, we are able to obtain the same results using scoring functions based on transfer learning from other large networks, including VGG-16 and Resnet, as shown in Appendix A. Cases 2 and 3: Similar empirical evaluation as in case 1, using the same moderate size network to classify two benchmark datasets. The results are shown in Fig. 3. Like before, the test accuracy in the curriculum test condition increases faster and achieves better final performance in both cases, as compared to the vanilla test condition. The beneficial effect of CL is larger when classifying the CIFAR-100 dataset, which is a harder dataset. Case 4: Similar empirical evaluation as in case 1, using a competitive public-domain architecture. 
Specifically, we use the Inception-based transfer scoring function to train a VGG-based network (Liu & Deng, 2015) to classify the CIFAR-100 dataset. Differently from the previous cases, here we use the varied exponential pacing function with a slightly reduced learning rate, as it has the fewest hyper-parameters to tune, an important factor when training such a big network. Results are shown in Fig. 4a (with no data augmentation), showing the same qualitative results as in the previous cases; CL gives a smaller benefit, but the benefit is still significant. Case 5: Similar empirical evaluation as in case 1, using the same moderate size network to distinguish 7 classes of cats from the ImageNet dataset7. The results are shown in Fig. 5. Again, the test accuracy in the curriculum test condition increases faster and achieves better final performance in the curriculum case, as compared to the vanilla test condition. 3.2 SELF-TAUGHT CURRICULUM LEARNING VS. SELF-PACED LEARNING Curriculum learning is closely related to the idea of Self-Paced Learning (SPL), an iterative procedure where higher weights are given to training examples that have lower cost with respect to the current hypothesis. In fact, SPL may appear similar, or closely related, to the idea of self-taught learning. The main difference between the methods is that self-paced learning determines the scoring function according to the loss with respect to the current hypothesis (or network), while the self-taught scoring function is based on the loss with respect to the final hypothesis of a trained network. In accordance, we define the self-paced scoring function, where each point is scored by its 7For more details, see Appendix C loss with respect to the current network. Note that when using CL to optimize the linear regression loss (see introduction), self-taught curriculum and self-paced learning are discordant. To compare the self-taught scoring function and the self-paced scoring function, we investigate their effect on CL in the context of empirical case 1. Results are shown in Fig. 4b. As expected, we see that CL using the self-taught scoring function improves the test accuracy throughout the entire learning session. On the other hand, CL training using the self-paced scoring function decreases the test accuracy throughout. This decrease is more prominent at the beginning of the learning, where most of the beneficial effects of the curriculum are observed, suggesting that the self-paced scoring function can significantly delay learning. 3.3 THE SCORING FUNCTION: ANALYSIS AND EMPIRICAL EVALUATION In order to analyze the effects of transfer based scoring functions, we turn to analyze the gradients of the network’s weights w.r.t the empirical loss. We evaluate the gradients using a pre-trained vanilla network in the context of case 1. First, for each method and each scoring function, we collect the subset of points used to sample the first mini-batch according to the pacing function gFθ (1) 8. For comparison, we also consider the set of all training points, which are used to compute the exact gradient of the empirical loss in batch learning using GD. We then compute the corresponding set of gradients for the training points in each of these subsets of training points, treating each layer’s parameters as a single vector, and subsequently estimate the gradients’ mean and total variance9, used to evaluate the coherence of the gradients in the first mini-batch of each scoring function. 
The Euclidean distance between the mean gradient in the different conditions is used to estimate the similarity between the different scoring functions, based on the average preferred gradient. We can now compare the set of gradients thus defined using three transfer scoring functions, which differ in the parent network used for scoring the points: ’VGG-16’, ’Resnet’, and ’Inception’. We include in the comparison the gradients of the random scoring function denoted ’Random’, and the gradients of the whole batch of training data denoted ’All’. Results are shown in Fig. 6. We see in Fig. 6a - blue bars - that the average gradient vectors, computed based on the 3 transfer scoring functions, are quite similar to each other. This suggests that they are pointing towards nearby local minima in parameters space. We also see - green bar - that the average gradient vector computed using a random subset of examples resembles the exact empirical gradient computed using all the training data. This suggests that a random subset provides a reasonable estimate of the true empirical gradient. The picture changes completely when we compute - red bars - the distance between the average gradient corresponding to one of the 3 transfer scoring functions, and the average random gradient or the empirical gradient. Now the distances are rather large, which suggests that CL by transfer indeed stirs the weights towards different local minima in parameter space as compared to vanilla training. 8In this experiment gFθ (1) was set such that it corresponds to 10% of the data or 250 examples. This number was set arbitrarily, with similar qualitative results obtained for a large range of other choices. 9As customary, total variance denotes the trace of the covariance matrix. We see in Fig. 6b that the total variance for the 3 transfer scoring functions is much smaller than the total variance of some random subset of the whole training set. This intuitive result demonstrates the difference between training with easier examples and training with random examples, and may – at least partially – explain the need for a different learning rate when training with easier examples. 3.4 ALTERNATIVE PACING FUNCTIONS Single step pacing. Curriculum learning can be costly, and it affects the entire learning protocol via the pacing function. At the same time, we note that the main effect of the procedure takes place at the beginning of training. This empirical observation may be due, in part, to the fact that the proposed scoring function f is based on transfer from another network trained on a different dataset, which only approximates the unknown ideal scoring function. Possibly, since the scoring function is based on one local minimum in a complex optimization landscape which contains many local minima, the score given by f is more reliable for low scoring (easy) examples than high scoring (difficult) examples, that may be in the vicinity of a different local minimum. Once again we evaluate case 1, using the transfer scoring function and the single step pacing function. We see improvement in the test accuracy in the curriculum test condition which resembles the improvement achieved using the exponential pacing. Results are shown in Fig. 7a. It is important to note that this pacing function ignores most of the prior knowledge provided by the scoring function, as it only uses a small percent of the easiest examples, and yet it achieves competitive results. 
Thus we see that in our empirical setup, most of the power of CL lies at the beginning of training. Varied exponential pacing. This pacing function allows us to run a CL procedure without the need for further tuning of learning rate. Once again we evaluate case 1, fixing the learning rate parameters to be the same as in the vanilla test condition, while tuning the remaining hyper-parameters as described in Section 2.3 using a grid search with cross-validation. We see improvement in the accuracy throughout the entire learning session, although smaller than the one observed with fixed exponential pacing. However, decreasing the learning rate of the vanilla by a small fraction and then tuning the curriculum parameters achieves results which are very similar to the fixed exponential pacing, suggesting that this method can almost completely nullify the indirect manipulation of the learning rate in the fixed exponential pacing function. These results are shown in Fig. 7b. 3.5 SUMMARY OF RESULTS Fig. 8 summarizes the main results presented in the paper, including: curriculum with an Inceptionbased scoring function for (i) fixed exponential pacing (denoted curriculum), (ii) varied exponential pacing, and (iii) single step pacing. It also shows curriculum with fixed exponential pacing for (iv) self-paced scoring, and (v) self-taught scoring. In addition, we plot the control conditions of vanilla, anti -curriculum, and random. In Fig. 8a we see the learning curves of the above conditions, with inset bars that depict the final accuracy of each condition, and error bars that represent the standard error after 50 repetitions. All the curriculum conditions seem to improve the learning accuracy throughout the entire learning session while converging to similar performance, excluding the selfpaced scoring function which impairs learning. The learning curves shown in Fig. 8a were obtained by searching for the parameters that maximize the final accuracy. This procedure only takes into account a few data points, which makes it less robust. In Fig. 8b we plot the bars of the final accuracy of the learning curves obtained by searching for the parameters that maximize the Area Under the Learning Curve. AUC is positively correlated with high final performance while being more robust. Comparing the different conditions using this maximization criterion gives similar qualitative results - the performance in all the curriculum conditions is still significantly higher than the control conditions. However, now the curriculum based on the Inception-based scoring function with fixed exponential pacing achieves performance that is significantly higher than the other curriculum methods, in evidence that it is more robust. 4 SUMMARY AND DISCUSSION Above we formally defined a curriculum learning algorithm, decomposing it into two separate problems: (i) How to determine the difficulty of the training data (via the scoring function)? (ii) At which pace should the learner be shown the more advanced data (via the pacing function)? We defined a scoring function based on transfer learning from a large network, showing that it can both speed up the rate of learning and improve the final accuracy. This was shown using a number of test cases, including particularly challenging subsets of CIFAR 100 and ImageNet datasets, and the entire CIFAR-10 and CIFAR-100 datasets. We used both a relatively small hand-crafted CNN, and a large public-domain completive VGG-based network. 
We observed that most of the beneficial effect of CL was achieved at the beginning of learning and that the benefits were more significant when using harder datasets. During our experiments, we saw that the quality of the teacher network also impacts curriculum learning by transfer. In order for the teacher network to differentiate between easier and harder examples, it should have reasonable generalization accuracy. A teacher with low performance will classify all points as hard, while a "too good" teacher will classify all points as easy, resulting in a less efficient curriculum. Based on these observations, we investigated two alternative pacing functions that achieved CL with less overhead as compared to training without a curriculum. In addition to the transfer scoring function, we introduced the self-taught scoring function. This function does not rely on transfer from a large network, and can therefore, presumably, better scale up to larger datasets. Self-taught scoring is closely related to Self-Paced Learning, yet it boils down to essentially the opposite scoring heuristic, since the self-taught scoring function relies on the final hypothesis of a pre-trained network while SPL relies on the current hypothesis. In agreement with the theory reviewed in the introduction, we showed that the self-paced scoring function impaired the learning, while the self-taught scoring function enhanced it. In other words, when choosing easier points to guide the learning, it is important to measure difficulty with respect to the final hypothesis, not the current hypothesis. A ADDITIONAL EMPIRICAL RESULTS CL with other CIFAR-100 super-classes. In Section 3 we present results when learning to discriminate the "small mammals" super-class of CIFAR-100. Similar results can be obtained for other super-classes of CIFAR-100, including the super-classes of "people", "insects" and "aquatic mammals". CL trained on these different super-classes shows the same qualitative results. We note once again that CL is more effective in the harder tasks, namely, the super-classes containing classes that are harder to discriminate (as seen by lower vanilla accuracy). As an example, Fig. 9 shows results using the "aquatic mammals" dataset, which greatly resemble the results we have seen when discriminating the "small mammals" dataset (cf. Fig. 8). Transfer based scoring function. In the experiments described in Section 3, when using the transfer scoring function defined in Section 2.3, we use the pre-trained Inception network available from https://github.com/Hvass-Labs/TensorFlow-Tutorials. We first normalized each training image to the range [−1, 1], resized it, and ran it through the Inception network. We then used the penultimate layer's activations as features for each training image, resulting in 2048 features per image. Using these features, we trained a Radial Basis Function (RBF) kernel SVM (Scholkopf et al., 1997) and used its confidence score to determine the difficulty of each image. The confidence score of the SVM was provided by sklearn.svm.libsvm.predict_proba from Python's Sklearn library and is based on cross-validation. Choosing Inception as the teacher and the RBF SVM as the classifier was a reasonable, if somewhat arbitrary, choice; the same qualitative results are obtained when using other large networks trained on ImageNet as teachers, and other classifiers to establish a confidence score.
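The following Python sketch illustrates the transfer scoring procedure just described. It is an approximation based on the text, not the authors' released code: Keras' InceptionV3 is used here as a stand-in teacher (the paper uses a TensorFlow-tutorials checkpoint), sklearn's public SVC(probability=True).predict_proba replaces the internal sklearn.svm.libsvm.predict_proba call, and preprocessing details such as the resize target are assumptions.

import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

def transfer_difficulty_scores(images, labels):
    # 1. Penultimate-layer features of a pre-trained ImageNet teacher
    #    (2048-dimensional with global average pooling); images are assumed
    #    already resized to the teacher's input size (e.g. 299 x 299).
    teacher = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
    feats = teacher.predict(preprocess_input(images.astype("float32")))
    # 2. RBF-kernel SVM trained on these features; its cross-validated class
    #    probabilities serve as a per-example confidence score.
    #    labels are assumed to be integer class indices 0..K-1.
    svm = SVC(kernel="rbf", probability=True).fit(feats, labels)
    confidence = svm.predict_proba(feats)[np.arange(len(labels)), labels]
    # 3. Higher confidence means an easier example, so difficulty is the negative confidence.
    return -confidence

Sorting the training set by these scores in ascending order then yields the easy-to-hard order used by the curriculum.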
Specifically, we repeated the experiments with a transfer scoring function based on the pre-trained VGG-16 and Resnet networks, which are also trained on ImageNet. The curriculum method using the transfer scoring function and the fixed exponential pacing function is shown in Fig. 10a, demonstrating the same qualitative results. Similarly, we used a linear SVM instead of the RBF kernel SVM, with similar results, as shown in Fig. 10b. We note that the standard error (STE) bars are relatively large for the control conditions described above because we only repeated these conditions 5 times each, instead of 50 as in the main experiments. B EXTENDED DISCUSSION Self-taught bootstrapping. In principle, the self-taught scoring function can be used repeatedly to boost the performance of the network indefinitely: after training the network using a curriculum, we can use its confidence score to define a new scoring function and retrain the network from scratch. However, scoring functions created by repeating this procedure tend to accumulate errors: once an example is misclassified as being easy, this example will be shown more often in subsequent iterations, making it more likely to be considered easy. In practice, we did not observe any benefit to repeated bootstrapping, and even observed impairment after a large number of repetitions. FAIR COMPARISON IN PARAMETER TUNING When using the moderate-size hand-crafted network (cases 1, 2 and 3), learning rate tuning is done for the vanilla case as well. In these cases, for the curriculum, anti-curriculum and random test conditions, we perform a coarse grid search for the pacing hyper-parameters as well as the learning rate hyper-parameters, with an identical range of values for all conditions. For the vanilla condition, there are no pacing hyper-parameters. Therefore, we expand and refine the range of learning rate hyper-parameters in the grid search, such that the total number of parameter combinations for each condition is approximately the same. When using a public-domain competitive network (case 4), the published learning rate scheduling is used. Therefore we employ the varied exponential pacing function without additional learning rate tuning and perform a coarse grid search on the pacing hyper-parameters. To ensure a fair comparison, we repeat the experiment with the vanilla condition the same number of times as the total number of experiments done during the grid search, choosing the best results. The exact ranges of values used for each parameter are given below in Appendix C. All prototypical results were confirmed with cross-validation, showing similar qualitative behavior as when using the coarse grid search. LEARNING RATE TUNING To control for the possibility that the results we report are an artifact of the way the learning rate is scheduled (an exponential decrease, which is indeed the method in common use), we test other learning rate scheduling methods, and specifically the method proposed by Smith (2017), which dynamically changes the learning rate, increasing and decreasing it periodically in a cyclic manner. We have implemented and tested this method using cases 2 and 3. The final results of both the vanilla and curriculum conditions have improved, suggesting that this method is superior to the naïve exponential decrease with grid search. Still, the main qualitative advantage of the CL algorithm holds now as well - CL improves the training accuracy during all stages of learning. As before, the improvement is more significant when the training dataset is harder.
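For reference, one common variant of the cyclical schedule of Smith (2017) mentioned above, the triangular policy, can be sketched as follows; the parameter values (base_lr, max_lr, step_size) are illustrative placeholders rather than the values used in these experiments, which are not reported.

def triangular_cyclic_lr(iteration, base_lr=0.01, max_lr=0.1, step_size=500):
    # Triangular cyclical learning rate (Smith, 2017): the rate ramps linearly from
    # base_lr up to max_lr over step_size iterations and back down, then repeats.
    cycle = (iteration // (2 * step_size)) + 1
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)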
Results for case 3 (CIFAR-100) are shown in Fig. 11. C METHODOLOGY, ADDITIONAL DETAILS Exponential Pacing. Throughout this work, we use pacing functions that increase the data size exponentially in each step. This is done in line with the customary practice of changing the learning rate in an exponential manner. Architecture Details. The moderate-size neural network we used for cases 1, 2 and 3 is a convolutional neural network containing 8 convolutional layers with 32, 32, 64, 64, 128, 128, 256, 256 filters respectively. The first 6 layers have filters of size 3 × 3, and the last 2 layers have filters of size 2 × 2. After every second layer there is a 2 × 2 max-pooling layer and a 0.25 dropout layer. After the convolutional layers, the units are flattened, and there is a fully-connected layer with 512 units followed by a 0.5 dropout layer. Batch size was 100. The output layer is a fully connected layer with 5 output units, followed by a softmax layer. We trained the network using the SGD optimizer, with cross-entropy loss. All the code will be published upon acceptance. Grid-search hyper-parameters. When using grid search, identical ranges of values are used for the curriculum, anti-curriculum and random test conditions. Since vanilla contains fewer parameters to tune (it has no pacing parameters), we used a finer and broader search range. The range of parameters was similar between different scoring functions and pacing functions, and was determined by the architecture and dataset. The range of parameters for case 1: (i) initial learning rate: 0.1 ∼ 0.01; (ii) learning rate exponential decrease: 2 ∼ 1.1; (iii) learning rate step size: 200 ∼ 800; (iv) step size: 20 ∼ 400, for both varied and fixed; (v) increase: 1.1 ∼ 3; (vi) starting percent: 4% ∼ 15% (note that 4% of the data corresponds to the size of a single mini-batch). For cases 2 and 3 the ranges are wider since the datasets are larger: (i) initial learning rate: 0.2 ∼ 0.05; (ii) learning rate exponential decrease: 2 ∼ 1.1; (iii) learning rate step size: 200 ∼ 800; (iv) step size: 100 ∼ 2000, for both varied and fixed; (v) increase: 1.1 ∼ 3; (vi) starting percent: 0.4% ∼ 15%. For case 4, the learning rate parameters are left as publicly determined, while the initial learning rate has been decreased by 10%, from 0.1 to 0.09. The pacing parameter ranges are: (i) step size: 50 ∼ 2500, for both varied and fixed; (ii) increase: 1.1 ∼ 2; (iii) starting percent: 2% ∼ 20%. ImageNet Dataset Details. In case 5, we used a subset of the ImageNet dataset ILSVRC 2012. We used 7 classes of cats, which were obtained by picking all the hyponyms of the cat synset that appear in the dataset. The 7 cat classes were: 'Egyptian cat', 'Persian cat', 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor', 'tiger cat', 'Siamese cat, Siamese', 'tabby, tabby cat', 'lynx, catamount'. All images were resized to 56 × 56 for faster performance. All classes contained 1300 train images and 50 test images.
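Since the authors' code is not yet public, the following Keras sketch is our reconstruction of the moderate-size architecture described above; details the text does not specify, such as the ReLU activations, the "same" padding, and the SGD learning rate, are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

def build_moderate_cnn(input_shape=(32, 32, 3), num_classes=5):
    # 8 conv layers (32, 32, 64, 64, 128, 128, 256, 256 filters; 3x3 kernels for the
    # first 6, 2x2 for the last 2), with 2x2 max-pooling and 0.25 dropout after every
    # second conv layer, then flatten, a 512-unit dense layer, 0.5 dropout, and softmax.
    x = inputs = keras.Input(shape=input_shape)
    filters = [32, 32, 64, 64, 128, 128, 256, 256]
    for i, f in enumerate(filters):
        k = 3 if i < 6 else 2
        x = layers.Conv2D(f, k, padding="same", activation="relu")(x)  # activation/padding assumed
        if i % 2 == 1:                       # after every second conv layer
            x = layers.MaxPooling2D(2)(x)
            x = layers.Dropout(0.25)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.SGD(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model  # trained with mini-batches of size 100, per the text

For cases 2 and 3 the output layer size changes to 10 and 100, respectively, as stated in Section 3.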
1. What are the strengths and weaknesses of the paper regarding its contribution to understanding curriculum learning in deep neural networks? 2. Are there any important related works missing from the paper's references? If so, what are they, and how do they contribute to the topic of curriculum learning? 3. How convincing are the results of the paper's comprehensive study on different curriculum strategies, particularly in comparison to previous works? 4. What specific improvements or additions could be made to the paper's methodology or presentation to enhance its clarity and impact?
Review
Review This paper studies an interesting and meaningful topic: what is the potential of curriculum learning (CL) in training DNNs? The authors decompose CL into two main parts: a scoring function and a pacing function. For both parts, several candidate functions are proposed and verified. The paper is presented quite clearly and contributes to a better understanding of CL in the DNN literature. However, I have several concerns about the status of this paper. First, quite a few important related works are missing. To name a few: [1] studies designing a data curriculum by predictive uncertainty, and [2, 3] study how to derive a data-driven curriculum during NN training. In particular, the objective of [2] is exactly "learning the right examples at the right time". All three of these papers focus on, or at least talk about, neural network training. Unfortunately, none of them are compared with, or even referenced. Second, although a comprehensive study of different curriculum strategies is given, I found it largely unconvincing. I tried hard to discover a *detailed accuracy number on a benchmark dataset with an unchanged setting* but found only case 4. By 'unchanged' I mean a setting that is not a subpart of the whole dataset and does not use a rarely seen NN architecture. In such 'changed' settings, the results are largely unconvincing, since we do not know what the exact baseline is. For the only 'unchanged' setting, case 4 with VGG on CIFAR-100, unfortunately the results do not seem good (Fig. 4a). I understand that some previous work, such as the cited Weinshall et al. (2018), also used the same setting; however, this does not mean such settings give *clear and convincing* results on whether CL plays a significant role in training DNNs. Furthermore, I would also expect comparisons in terms of wall-clock time (including all the bootstrapping training time), not merely the number of batches. [1] Chang, Haw-Shiuan, Erik Learned-Miller, and Andrew McCallum. "Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples." NIPS 2017. [2] Fan, Y., Tian, F., Qin, T., Li, X. Y., and Liu, T. Y. "Learning to Teach." ICLR 2018. [3] Jiang, Lu, et al. "MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels." ICML 2018.
ICLR
Title In Your Pace: Learning the Right Example at the Right Time Abstract Training neural networks is traditionally done by sequentially providing random mini-batches sampled uniformly from the entire dataset. In our work, we show that sampling mini-batches non-uniformly can both enhance the speed of learning and improve the final accuracy of the trained network. Specifically, we decompose the problem using the principles of curriculum learning: first, we sort the data by some difficulty measure; second, we sample mini-batches with a gradually increasing level of difficulty. We focus on CNNs trained on image recognition. Initially, we define the difficulty of a training image using transfer learning from some competitive ”teacher” network trained on the Imagenet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and the CIFAR-100 datasets. We then suggest a bootstrap alternative to evaluate the difficulty of points using the same network without relying on a ”teacher” network, thus increasing the applicability of our suggested method. We compare this approach to a related version of Self-Paced Learning, showing that our method benefits learning while SPL impairs it. 1 INTRODUCTION Teaching complex tasks to humans and animals can be difficult. Often such tasks cannot be grasped by the learners, or ”students”, immediately, and need to be broken down into simpler problems. Therefore, in order to teach complex tasks, teachers are often required to create a curriculum. The curriculum imposes some order on the learning task; it introduces different concepts at different times, hence exploiting previously learned concepts in order to ease the abstraction of new ones. Imposing a curriculum in order to speed up learning is widely used in the context of human learning, and also routinely used in animal training (Skinner, 1958; Pavlov, 2010; Krueger & Dayan, 2009). In many traditional machine learning approaches, known as supervised learning, a target function is estimated by using a set of labeled examples. The examples can be thought of as given by a teacher while the learning algorithm can be thought of as a student. The field of curriculum learning (CL), which is motivated by the idea of a curriculum in human learning, attempts at imposing some structure on the labeled data. Such structure essentially relies on a notion of ”easy” and ”hard” examples and utilizes this distinction in order to teach the student how to generalize easier concepts before harder ones. Empirically, the use of CL has been shown to accelerate and improve the learning process (e.g. Selfridge et al., 1985; Bengio et al., 2009). In this work, we aim at extending the understanding of CL in the context of deep neural learning. More specifically, we wish to understand to what extent curriculum can improve the accuracy and convergence rate of deep neural networks. The main challenge in making CL practical is, arguably, finding a way to construct a good curriculum for a newly unseen database. In order to do so, we investigate two ideas motivated by transfer learning and bootstrapping respectively. When establishing a curriculum for human students, teachers need to arrange the material in a way that will present simple concepts before harder ones, so that the abstraction of simple ideas can help the student grasp more complex ones (Hunkins & Ornstein, 2016). However, sorting the concepts by difficulty is not sufficient. 
The teacher also needs to attend to the pace by which the material is presented: going over the simple ideas too fast may lead to more confusion than benefit, while moving along too slowly may lead to boredom and unproductive learning (Hunkins & Ornstein, 2016). These principles can also be beneficial when the learner is a neural network. Specifically, formalizing and generalizing what was implicitly done in Weinshall et al. (2018), we decompose the problem of CL and define two separate - but closely related - functions. The first function, termed the scoring function, determines the "hardness" or "complexity" of each example in the data. The scoring function enables us to sort the data by concept difficulty, allowing us to present to the network the easier (and presumably simpler) examples first. The underlying assumption is that generalization from the easier examples can simplify the learning of harder examples in the data. The second function, termed the pacing function, determines the pace by which data is presented to the network. The pace depends on both the data itself and the learner. In our work, we analyze several scoring and pacing functions, investigating their inter-dependency and presenting ways to combine them in order to achieve faster learning and better generalization. The main challenge is, arguably, how to obtain an effective scoring function without additional human supervision. To this end we investigate two approaches, each providing a different estimator for the ideal scoring function: (i) Knowledge transfer. The first scoring function is based on transfer learning from networks trained on the large and versatile ImageNet dataset (Deng et al., 2009; Weinshall et al., 2018). (ii) Bootstrapping. The second scoring function is based on self-tutoring - we train the network once without a curriculum, then use the resulting classifier to rank the training data in order to train the same network again from scratch. Both scoring functions are shown in Section 3 to speed up learning and improve the generalization of neural networks. In many approaches, including Self-Paced Learning (SPL), Active Learning and hard example mining (Kumar et al., 2010; Schein & Ungar, 2007; Shrivastava et al., 2016), the mini-batches presented to the learner are sampled dynamically, based at each time point on the current hypothesis of the model. While in some contexts these approaches are beneficial (Chang et al., 2017; Zhang et al., 2017), they are based on the knowledge of the student at a specific time point. While a student can report what is easy or hard for it right now, it might be oblivious to some aspects of the bigger problem at hand, ignoring concepts which, if learned early, could prove helpful at a later time. In the context of the linear regression loss, Weinshall et al. (2018) showed that such a distinction indeed holds: while it is beneficial to prefer points with lower loss with respect to the target hypothesis, as suggested by CL, it is on the other hand beneficial to prefer points with higher loss with respect to the current hypothesis, in agreement with hard example mining (Shrivastava et al., 2016) and boosting, and contrary to SPL. To examine this somewhat confusing point, we have implemented a simplified version of the procedure described above, where the scoring function is based on the loss of the training points with respect to the current hypothesis, used in both ascending and descending orders.
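A minimal sketch of this current-hypothesis scoring is given below; it assumes a Keras-style classifier exposing predict and integer class labels, and the function and parameter names are our own illustration.

import numpy as np

def current_hypothesis_order(model, images, labels, hardest_first=False):
    # Rank training examples by their per-example cross-entropy loss under the
    # model's *current* hypothesis. Ascending order (easy first) corresponds to the
    # self-paced variant; descending order corresponds to hard example mining.
    probs = model.predict(images)                                      # (N, num_classes) softmax outputs
    losses = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)    # labels are class indices 0..K-1
    order = np.argsort(losses)                                         # lowest loss (easiest) first
    return order[::-1] if hardest_first else order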
These variants of SPL and hard example mining, respectively, learn more slowly and reach lower final accuracy when compared to the self-taught condition, throughout all of our experiments. We have also investigated three pacing functions. (i) Fixed exponential pacing presents the learner initially with a small percentage of the data, increasing the amount exponentially every fixed number of learning iterations. (ii) Varied exponential pacing allows the number of iterations in each step to vary as well. (iii) Single-step pacing is a simplified version of the first protocol, where mini-batches are initially sampled from a fixed fraction of the data that includes the easiest examples, after which mini-batches are sampled from the whole data as usual. We show that the three functions have comparable performance, and analyze the complexity of their use. Previous work. While remaining at the fringes of machine learning, there has been some recent work on CL and its applications. Bengio et al. (2009) introduced the idea of CL for machine learning algorithms, showing simple examples where CL benefits learning. Weinshall et al. (2018) proved that CL boosts the speed of convergence in the convex case of linear regression. Otherwise, most prior art is empirical, and the ranking by difficulty (i.e., the scoring function defined above) is almost always provided by the user based on prior knowledge (in other words, supervision), as in Jesson et al. (2017). In a closely related line of work, a pair of teacher and student networks are trained simultaneously, where mini-batches for the student network are sampled dynamically by the teacher, based on the student's output at each time point (Jiang et al., 2018; Fan et al., 2018). As opposed to our method, these works base the curriculum on the current hypothesis of the student, and achieve better performance for corrupted (Jiang et al., 2018) or smaller (Fan et al., 2018) datasets, instead of improved generalization on the original dataset. Our contribution, with respect to this previous work, is to provide a formal definition of CL algorithms by way of two functions for scoring and pacing, to analyze and comparatively evaluate these functions, and to show how CL can benefit learning in CNNs even without human supervision of the ranking of examples by difficulty, in a problem-free manner. 2 CURRICULUM LEARNING Curriculum learning deals with the question of how to use prior knowledge about the difficulty of the training examples, in order to sample each mini-batch non-uniformly and thus boost the rate of learning and the accuracy of the final classifier. The paradigm of CL is based on the intuition that it helps the learning process when the learner is presented with simple concepts first. 2.1 NOTATIONS AND DEFINITIONS Let X = {(x_i, y_i)}_{i=1}^N denote the data, where x_i ∈ R^d denotes a single data point and y_i ∈ [K] its corresponding label. Let F_θ : R^d → [K] denote the target classifier (or learner), and let a mini-batch B ⊆ X denote a subset of X. In the most common training procedure, which is a robust variant of Stochastic Gradient Descent (SGD), F_θ is trained sequentially, given as input a sequence of mini-batches [B_1, ..., B_M] (Shalev-Shwartz & Ben-David, 2014). The common approach, denoted vanilla in the following sections, samples each mini-batch B_i uniformly from X. Both in the common approach and in our work, the size of each mini-batch remains constant, and is considered a hyper-parameter defining the learner.
We measure the difficulty of a point x_i by its minimal loss with respect to the set of optimal hypotheses under consideration. We define a scoring function (or a "difficulty" function) to be any function f : X → R, and say that example (x_i, y_i) is more "difficult" than example (x_j, y_j) if f(x_i, y_i) > f(x_j, y_j). Choosing f is the main challenge of CL, as it encodes the prior knowledge of the teacher. We define a pacing function to be a function g_{F_θ} : [M] → [N], which may depend on the learner F_θ. The pacing function is used to determine a sequence of subsets X′_1, ..., X′_M ⊆ X, of size |X′_i| = g_{F_θ}(i), from which the mini-batches {B_i}_{i=1}^M are sampled uniformly. In CL the i-th subset X′_i includes the first g_{F_θ}(i) elements of the training data when sorted by the scoring function f in ascending order. Although the choice of the subset can be encoded in the distribution from which each B_i is sampled, adding a pacing function simplifies the exposition and analysis. 2.2 CURRICULUM LEARNING METHOD Together, each scoring function f and pacing function g_{F_θ} define a curriculum. Any learning algorithm which uses the ensuing sequence [B_i]_{i=1}^M is a curriculum learning algorithm. We note that in order to avoid bias when picking a subset of a given size from the training data, it is important to keep the sample balanced, with the same number of examples from each class, as in the full training set. Pseudo-code for the CL algorithm is given in Alg. 1. In order to narrow down the specific effects of using a scoring function based on ascending difficulty level, we examine two control conditions. Specifically, we define 2 additional scoring functions and corresponding algorithms: (i) The anti-curriculum algorithm uses the scoring function f′ = −f, where the training examples are sorted in descending order; this results in presenting the harder examples before the easier ones. (ii) The random-curriculum algorithm (henceforth denoted random) uses a scoring function where the training examples are randomly sorted. 2.3 SCORING AND PACING FUNCTIONS We evaluate two scoring functions: (i) Transfer scoring function, computed as follows: First, take the pre-trained Inception network (Szegedy et al., 2016) and run each training image through it, using the activation levels of its penultimate layer as a feature vector (Caruana, 1995). Second, use these features to train a classifier and use its confidence score as the scoring function for each image (footnote 1). (ii) Self-taught scoring function, computed as follows: First, train the network using uniformly sampled mini-batches (the vanilla method). Second, compute this network's confidence score for each image to define a scoring function (footnote 2). Although the pacing function can be any function g_{F_θ} : [M] → [N], we limit ourselves to monotonically increasing functions so that the likelihood of the easier examples can only decrease. For simplicity, g_{F_θ} is limited to staircase functions. Thus each pacing function is defined by the following hyper-parameters, where a step denotes all the learning iterations during which g_{F_θ} remains constant: step length, the number of iterations in each step; increase, an exponential factor used to increase the size of the data used for sampling mini-batches in each step; and starting percent, the fraction of the data used in the initial step. An illustration of these parameters can be seen in Fig. 1. We evaluate three pacing functions: (i) Fixed exponential pacing has a fixed step length, and exponentially increasing data size in each step.
Formally, the pacing function is given by:

g_{F_θ}(i) = min( starting_percent · increase^⌊i / step_length⌋, 1 ) · N

Footnote 1: Similar results can be obtained when using different confidence scores (e.g., the classifier's margin), different classifiers (e.g., a linear SVM), and different teacher networks (e.g., VGG-16 (Simonyan & Zisserman, 2014), Resnet (He et al., 2016)). For more details, see Appendix A. Footnote 2: Theoretically, we can use this method repeatedly, as discussed in Appendix B.

Algorithm 1: Curriculum learning method
Input: pacing function g_{F_θ}, scoring function f, labeled data X.
Output: sequence of mini-batches [B′_1, ..., B′_M].
  sort X according to f, in ascending order
  result ← []
  for i = 1, ..., M do
    size ← g_{F_θ}(i)
    X′_i ← X[1, ..., size]
    uniformly sample B′_i from X′_i
    append B′_i to result
  end
  return result

(ii) Varied exponential pacing, which allows the step length to vary as well (footnote 3):

g_{F_θ}(i) = min( starting_percent · increase^( Σ_{k=1}^{#steps} 1[i > step_length_k] ), 1 ) · N

The total number of steps can be calculated from starting_percent and increase: #steps = ⌈ −log_increase(starting_percent) ⌉.

(iii) Single step pacing, which is a simplification of the staircase function into a step function:

g_{F_θ}(i) = starting_percent^( 1[i < step_length] ) · N

This function has only 2 hyper-parameters, hence it is simpler to use than the previous two. 3 EMPIRICAL EVALUATION Methodology. All the code used in this work will be published upon acceptance. We define 4 empirical cases: Case 1 replicates the experimental design described in Weinshall et al. (2018), using the same dataset and network architecture. The dataset is the "small mammals" super-class of CIFAR-100 (Krizhevsky & Hinton, 2009), containing a subset of 3000 images from CIFAR-100, divided into 5 classes of small mammals (hamster, mouse, rabbit, shrew, squirrel). Each class contains 500 training images and 100 test images. The neural network is a moderate-size hand-crafted convolutional network, whose architecture details can be found in Appendix C. Cases 2 and 3 adopt the same architecture used above while being applied to the entire CIFAR-10 and CIFAR-100 datasets, where the network's output layer is adjusted to size 10 and 100, respectively. Case 4 uses a public-domain VGG-based architecture (footnote 4), which achieves competitive results (Simonyan & Zisserman, 2014; Liu & Deng, 2015), to classify the CIFAR-100 dataset. Hyper-parameter tuning. As in all empirical studies involving deep learning, the results are quite sensitive to the values of the hyper-parameters, hence parameter tuning is required. Issues related to how a fair comparison between the different conditions is achieved are discussed in Appendix B. In practice, in order to reduce the computation time of parameter tuning, we varied only the first 2 step length instances in the varied exponential pacing condition. Accordingly, fixed exponential pacing, varied exponential pacing and single step pacing define 3, 5 and 2 new hyper-parameters respectively, referred to henceforth as the pacing hyper-parameters. In the CL framework, the use of a pacing function affects the optimal values of other hyper-parameters, in particular the learning rate. Specifically, since it significantly reduces the size of the dataset from which each mini-batch is sampled, it has the concomitant effect of increasing the effective learning rate. As a result, when using the fixed exponential or the single step pacing functions, the learning rate must be tuned separately for every test condition.
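To make the pacing hyper-parameters just enumerated concrete, here is a minimal Python sketch of the two exponential pacing functions and the mini-batch selection of Algorithm 1. It follows the formulas above, but the names and data-handling details (numpy arrays, a pre-computed score per example) are our own assumptions rather than the authors' implementation; the class balancing mentioned in Section 2.2 is omitted for brevity.

import numpy as np

def fixed_exponential_pacing(i, N, starting_percent, increase, step_length):
    # g(i) = min(starting_percent * increase**floor(i / step_length), 1) * N
    return int(min(starting_percent * increase ** (i // step_length), 1.0) * N)

def varied_exponential_pacing(i, N, starting_percent, increase, step_lengths):
    # step_lengths[k] is the iteration at which the (k+1)-th data increase takes effect
    jumps = sum(1 for s in step_lengths if i > s)
    return int(min(starting_percent * increase ** jumps, 1.0) * N)

def curriculum_minibatches(data, labels, scores, pacing, batch_size, num_batches):
    # Algorithm 1: sort the data from easiest to hardest according to the scoring
    # function, then sample each mini-batch uniformly from the prefix of the sorted
    # data whose size is given by the pacing function.
    order = np.argsort(scores)                         # ascending difficulty
    for i in range(num_batches):
        size = max(pacing(i), batch_size)              # prefix must hold at least one batch
        idx = np.random.choice(order[:size], batch_size, replace=False)
        yield data[idx], labels[idx]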
As traditionally done (e.g Simonyan & Zisserman, 2014; Szegedy et al., 2016; He et al., 2016), we set an initial learning rate and decrease it exponentially every fixed number of iterations. This method gives rise to 3 learning rate hyper-parameters which require tuning: (i) the initial learning rate; (ii) the factor by which the learning rate is decreased; (iii) the length of each step with constant learning rate5. When varied exponential pacing is used, varying step length has the opposite concomitant effect on the learning rate, as it determines the number of mini-batch samples in each step. Effective tuning of this parameter can make the additional tuning of parameters affecting the learning rate redundant. In practice, in order to reach the improvement achieved by the fixed exponential pacing, we decrease the corresponding learning rate parameters used in the vanilla condition by some small factor6. 3.1 RESULTS: CL BENEFITS LEARNING Case 1: A moderate size network is trained to distinguish 5 classes from CIFAR-100, which are members of the same super-class as defined in the original dataset. Results are shown in Fig. 2. 3In practice, to avoid an unfeasible need to tune too many hyper-parameters, we vary only the first two step length instances and fix the rest. As shown later on, this is reasonable as most of the power of the curriculum lies in the first few steps. 4The code for the VGG network is available at https://github.com/geifmany/cifar-vgg. 5For more details, see Appendix B. 6In the results reported below we used a reduction of 10%, with similar behavior for other nearby choices. Curriculum learning is clearly and significantly beneficial - learning starts faster, and converges to a better solution. We observe that the performance of CL with a random scoring function is similar to vanilla, indicating that the main reason for the improvement achieved by CL is due to its beneficial transfer scoring function. In fact, although tuned separately, the learning rate hyper-parameters for both the random and the curriculum test conditions are very similar, confirming that the improved performance is due to the use of an effective transfer scoring function. To check the robustness of these results, we repeated the same empirical evaluation using different super-classes of CIFAR-100, with similar results (see Appendix A). Interestingly, we note that the observed advantage of CL is more significant when the task is more difficult (i.e. lower vanilla test accuracy). The reason may be that in easier problems there is a sufficient number of easy examples in each mini-batch even without CL. Although the results reported here are based on transfer from the Inception network, we are able to obtain the same results using scoring functions based on transfer learning from other large networks, including VGG-16 and Resnet, as shown in Appendix A. Cases 2 and 3: Similar empirical evaluation as in case 1, using the same moderate size network to classify two benchmark datasets. The results are shown in Fig. 3. Like before, the test accuracy in the curriculum test condition increases faster and achieves better final performance in both cases, as compared to the vanilla test condition. The beneficial effect of CL is larger when classifying the CIFAR-100 dataset, which is a harder dataset. Case 4: Similar empirical evaluation as in case 1, using a competitive public-domain architecture. 
Specifically, we use the Inception-based transfer scoring function to train a VGG-based network (Liu & Deng, 2015) to classify the CIFAR-100 dataset. Differently from the previous cases, here we use the varied exponential pacing function with a slightly reduced learning rate, as it has the fewest hyper-parameters to tune, an important factor when training such a big network. Results are shown in Fig. 4a (with no data augmentation), showing the same qualitative results as in the previous cases; CL gives a smaller benefit, but the benefit is still significant. Case 5: Similar empirical evaluation as in case 1, using the same moderate size network to distinguish 7 classes of cats from the ImageNet dataset7. The results are shown in Fig. 5. Again, the test accuracy in the curriculum test condition increases faster and achieves better final performance in the curriculum case, as compared to the vanilla test condition. 3.2 SELF-TAUGHT CURRICULUM LEARNING VS. SELF-PACED LEARNING Curriculum learning is closely related to the idea of Self-Paced Learning (SPL), an iterative procedure where higher weights are given to training examples that have lower cost with respect to the current hypothesis. In fact, SPL may appear similar, or closely related, to the idea of self-taught learning. The main difference between the methods is that self-paced learning determines the scoring function according to the loss with respect to the current hypothesis (or network), while the self-taught scoring function is based on the loss with respect to the final hypothesis of a trained network. In accordance, we define the self-paced scoring function, where each point is scored by its 7For more details, see Appendix C loss with respect to the current network. Note that when using CL to optimize the linear regression loss (see introduction), self-taught curriculum and self-paced learning are discordant. To compare the self-taught scoring function and the self-paced scoring function, we investigate their effect on CL in the context of empirical case 1. Results are shown in Fig. 4b. As expected, we see that CL using the self-taught scoring function improves the test accuracy throughout the entire learning session. On the other hand, CL training using the self-paced scoring function decreases the test accuracy throughout. This decrease is more prominent at the beginning of the learning, where most of the beneficial effects of the curriculum are observed, suggesting that the self-paced scoring function can significantly delay learning. 3.3 THE SCORING FUNCTION: ANALYSIS AND EMPIRICAL EVALUATION In order to analyze the effects of transfer based scoring functions, we turn to analyze the gradients of the network’s weights w.r.t the empirical loss. We evaluate the gradients using a pre-trained vanilla network in the context of case 1. First, for each method and each scoring function, we collect the subset of points used to sample the first mini-batch according to the pacing function gFθ (1) 8. For comparison, we also consider the set of all training points, which are used to compute the exact gradient of the empirical loss in batch learning using GD. We then compute the corresponding set of gradients for the training points in each of these subsets of training points, treating each layer’s parameters as a single vector, and subsequently estimate the gradients’ mean and total variance9, used to evaluate the coherence of the gradients in the first mini-batch of each scoring function. 
The Euclidean distance between the mean gradient in the different conditions is used to estimate the similarity between the different scoring functions, based on the average preferred gradient. We can now compare the set of gradients thus defined using three transfer scoring functions, which differ in the parent network used for scoring the points: ’VGG-16’, ’Resnet’, and ’Inception’. We include in the comparison the gradients of the random scoring function denoted ’Random’, and the gradients of the whole batch of training data denoted ’All’. Results are shown in Fig. 6. We see in Fig. 6a - blue bars - that the average gradient vectors, computed based on the 3 transfer scoring functions, are quite similar to each other. This suggests that they are pointing towards nearby local minima in parameters space. We also see - green bar - that the average gradient vector computed using a random subset of examples resembles the exact empirical gradient computed using all the training data. This suggests that a random subset provides a reasonable estimate of the true empirical gradient. The picture changes completely when we compute - red bars - the distance between the average gradient corresponding to one of the 3 transfer scoring functions, and the average random gradient or the empirical gradient. Now the distances are rather large, which suggests that CL by transfer indeed stirs the weights towards different local minima in parameter space as compared to vanilla training. 8In this experiment gFθ (1) was set such that it corresponds to 10% of the data or 250 examples. This number was set arbitrarily, with similar qualitative results obtained for a large range of other choices. 9As customary, total variance denotes the trace of the covariance matrix. We see in Fig. 6b that the total variance for the 3 transfer scoring functions is much smaller than the total variance of some random subset of the whole training set. This intuitive result demonstrates the difference between training with easier examples and training with random examples, and may – at least partially – explain the need for a different learning rate when training with easier examples. 3.4 ALTERNATIVE PACING FUNCTIONS Single step pacing. Curriculum learning can be costly, and it affects the entire learning protocol via the pacing function. At the same time, we note that the main effect of the procedure takes place at the beginning of training. This empirical observation may be due, in part, to the fact that the proposed scoring function f is based on transfer from another network trained on a different dataset, which only approximates the unknown ideal scoring function. Possibly, since the scoring function is based on one local minimum in a complex optimization landscape which contains many local minima, the score given by f is more reliable for low scoring (easy) examples than high scoring (difficult) examples, that may be in the vicinity of a different local minimum. Once again we evaluate case 1, using the transfer scoring function and the single step pacing function. We see improvement in the test accuracy in the curriculum test condition which resembles the improvement achieved using the exponential pacing. Results are shown in Fig. 7a. It is important to note that this pacing function ignores most of the prior knowledge provided by the scoring function, as it only uses a small percent of the easiest examples, and yet it achieves competitive results. 
Thus we see that in our empirical setup, most of the power of CL lies at the beginning of training. Varied exponential pacing. This pacing function allows us to run a CL procedure without the need for further tuning of learning rate. Once again we evaluate case 1, fixing the learning rate parameters to be the same as in the vanilla test condition, while tuning the remaining hyper-parameters as described in Section 2.3 using a grid search with cross-validation. We see improvement in the accuracy throughout the entire learning session, although smaller than the one observed with fixed exponential pacing. However, decreasing the learning rate of the vanilla by a small fraction and then tuning the curriculum parameters achieves results which are very similar to the fixed exponential pacing, suggesting that this method can almost completely nullify the indirect manipulation of the learning rate in the fixed exponential pacing function. These results are shown in Fig. 7b. 3.5 SUMMARY OF RESULTS Fig. 8 summarizes the main results presented in the paper, including: curriculum with an Inceptionbased scoring function for (i) fixed exponential pacing (denoted curriculum), (ii) varied exponential pacing, and (iii) single step pacing. It also shows curriculum with fixed exponential pacing for (iv) self-paced scoring, and (v) self-taught scoring. In addition, we plot the control conditions of vanilla, anti -curriculum, and random. In Fig. 8a we see the learning curves of the above conditions, with inset bars that depict the final accuracy of each condition, and error bars that represent the standard error after 50 repetitions. All the curriculum conditions seem to improve the learning accuracy throughout the entire learning session while converging to similar performance, excluding the selfpaced scoring function which impairs learning. The learning curves shown in Fig. 8a were obtained by searching for the parameters that maximize the final accuracy. This procedure only takes into account a few data points, which makes it less robust. In Fig. 8b we plot the bars of the final accuracy of the learning curves obtained by searching for the parameters that maximize the Area Under the Learning Curve. AUC is positively correlated with high final performance while being more robust. Comparing the different conditions using this maximization criterion gives similar qualitative results - the performance in all the curriculum conditions is still significantly higher than the control conditions. However, now the curriculum based on the Inception-based scoring function with fixed exponential pacing achieves performance that is significantly higher than the other curriculum methods, in evidence that it is more robust. 4 SUMMARY AND DISCUSSION Above we formally defined a curriculum learning algorithm, decomposing it into two separate problems: (i) How to determine the difficulty of the training data (via the scoring function)? (ii) At which pace should the learner be shown the more advanced data (via the pacing function)? We defined a scoring function based on transfer learning from a large network, showing that it can both speed up the rate of learning and improve the final accuracy. This was shown using a number of test cases, including particularly challenging subsets of CIFAR 100 and ImageNet datasets, and the entire CIFAR-10 and CIFAR-100 datasets. We used both a relatively small hand-crafted CNN, and a large public-domain completive VGG-based network. 
We observed that most of the beneficial effect of CL was achieved at the beginning of the learning and that the benefits were more significant when using harder datasets. During our experiments, we saw that the quality of the teacher network also impacts curriculum learning by transfer. In order for the teacher network to differentiate between easier and harder examples, it should have reasonable generalization accuracy. A teacher with low performance will classify all points as hard, while a ”too good” teacher will classify all points as easy, results in a less efficient curriculum. Based on these observations, we investigated two alternative pacing functions that achieved CL with less overhead as compared to training without a curriculum. In addition to the transfer scoring function, we introduced the self-taught scoring function. This function does not rely on transfer from a large network, and can therefore, presumably, better scale up to larger datasets. Self-Taught scoring is closely related to Self Paced Learning, yet it boils down to essentially the opposite scoring heuristics, since the self-taught scoring function relies on the final hypothesis of a pre-trained network while SPL relies on the current hypothesis. In agreement with the theory reviewed in the introduction, we showed that the self-paced scoring function impaired the learning, while the self-taught scoring function enhanced it. In other words, when choosing easier points to guide the learning, it is important to measure difficulty with respect to the final hypothesis, not the current hypothesis. A ADDITIONAL EMPIRICAL RESULTS CL with other CIFAR-100 super-classes. In Section 3 we present results when learning to discriminate the ”small mammals” superclass of CIFAR-100. Similar results can be obtained for other super-classes of CIFAR-100, including the super-classes of ”people”, ”insects” and ”aquatic mammals”. CL trained on these different super-classes shows the same qualitative results. We note once again that CL is more effective in the harder tasks, namely, the super-classes containing classes that are harder to discriminate (as seen by lower vanilla accuracy). As an example, Fig. 9 shows results using the ”aquatic mammals” dataset, which greatly resembles the results we’ve seen when discriminating the ”small mammals” dataset (cf. Fig.8). Transfer based scoring function. In the experiments described in Section 3, when using the transfer scoring function defined in Section 2.3, we use the pre-trained Inception network available from https://github.com/Hvass-Labs/TensorFlow-Tutorials. We first normalized each training image to the range [−1, 1], resized it, and ran it through the Inception network. We then used the penultimate layer’s activations as features for each training image, resulting in 2048 features per image. Using these features, we trained a Radial Basis Kernel (RBF) SVM (Scholkopf et al., 1997) and used its confidence score to determine the difficulty of each image. The confidence score of the SVM was provided by sklearn.svm.libsvm.predict proba from Python’s Sklearn library and is based on cross-validation. Choosing Inception as the teacher and RBF SVM as the classifier was a reasonable arbitrary choice – the same qualitative results are obtained when using other large networks trained on ImageNet as teachers, and other classifiers to establish a confidence score. 
Specifically, we repeated the experiments with a transfer scoring function based on the pre-trained VGG-16 and Resnet networks, which are also trained on Imagenet. The curriculum method using the transfer scoring function and fixed exponential pacing function are shown in Fig. 10a, demonstrating the same qualitative results. Similarly, we used a linear SVM instead of the RBF kernel SVM with similar results, as shown in Fig. 10b. We note that the STE error bars are relatively large for the control conditions described above because we only repeated these conditions 5 times each, instead of 50 in the main experiments. B EXTENDED DISCUSSION Self-taught bootstrapping In principle, the self-taught scoring function can be used repeatedly to boost the performance of the network indefinitely: after training the network using a curriculum, we can use its confidence score to define a new scoring function and retrain the network from scratch. However, scoring functions created by repeating this procedure tend to accumulate errors: once an example is misclassified as being easy, this example will be shown more often in subsequent iterations, making it more likely to be considered easy. In practice, we did not observe any benefit to repeated bootstrapping, and even observed the impairment after a large number of repetitions. FAIR COMPARISON IN PARAMETER TUNING When using the moderate size hand-crafted network (cases 1, 2 and 3), learning rate tuning is done for the vanilla case as well. In these cases, for the curriculum, anti-curriculum and random test conditions, we perform a coarse grid search for the pacing hyper-parameters as well as the learning rate hyper-parameters, with an identical range of values for all conditions. For the vanilla condition, there are no pacing hyper-parameters. Therefore, we expand and refine the range of learning rate hyper-parameters in the grid search, such that the total number of parameter combinations for each condition is approximately the same. When using a public domain competitive network (case 4), the published learning rate scheduling is used. Therefore we employ the varied exponential pacing function without additional learning rate tuning and perform a coarse grid search on the pacing hyper-parameters. To ensure a fair comparison, we repeat the experiment with the vanilla condition the same number of times as in the total number of experiments done during grid search, choosing the best results. The exact range of values that are used for each parameter is given below in Appendix C. All prototypical results were confirmed with cross-validation, showing similar qualitative behavior as when using the coarse grid search. LEARNING RATE TUNING To control for the possibility that the results we report are an artifact of the way the learning rate is being scheduled, which is indeed the method in common use, we test other learning rate scheduling methods, and specifically the method proposed by Smith (2017) which dynamically changes the learning rate, increasing and decreasing it periodically in a cyclic manner. We have implemented and tested this method using cases 2 and 3. The final results of both the vanilla and curriculum conditions have improved, suggesting that this method is superior to the naı̈ve exponential decrease with grid search. Still, the main qualitative advantage of the CL algorithm holds now as well - CL improves the training accuracy during all stages of learning. As before, the improvement is more significant when the training dataset is harder. 
Results for case 3 (CIFAR-100) are shown in Fig. 11. C METHODOLOGY, ADDITIONAL DETAILS Exponential Pacing Throughout this work, we use pacing functions that increase the data size each step exponentially. This is done in line with the customary change of learning rate in an exponential manner. Architecture Details The moderate-size neural network we used for cases 1,2,3, is a convolutional neural network, containing 8 convolutional layers with 32, 32, 64, 64, 128, 128, 256, 256 filters respectively. The first 6 layers have filters of size 3 × 3, and the last 2 layers have filters of size 2 × 2. Every second layer there is a 2 × 2 max-pooling layer and a 0.25 dropout layer. After the convolutional layers, the units are flattened, and there is a fully-connected layer with 512 units followed by 0.5 dropout layer. Batch size was 100. The output layer is a fully connected layer with 5 output units, followed by a softmax layer. We trained the network using the SGD optimizer, with cross-entropy loss. All the code will be published upon acceptance. Grid-search hyper parameters When using grid search, identical ranges of values are used for the curriculum, anti-curriculum and random test conditions. Since vanilla contains fewer parameters to tune – as it has no pacing parameters – we used a finer and broader search range. The range of parameters was similar between different scoring functions and pacing functions and was determined by the architecture and dataset. The range of parameters for case 1: (i) initial learning rate: 0.1 ∼ 0.01; (ii) learning rate exponential decrease 2 ∼ 1.1; (iii) learning rate step size 200 ∼ 800; (iv) step size 20 ∼ 400, for both varied and fixed; (v) increase 1.1 ∼ 3; (vi) starting percent 4% ∼ 15% (note that 4% is in the size of a single mini-batch). For cases 2, 3 the ranges is wider since the dataset is larger: (i) initial learning rate: 0.2 ∼ 0.05; (ii) learning rate exponential decrease 2 ∼ 1.1; (iii) learning rate step size 200 ∼ 800; (iv) step size 100 ∼ 2000, for both varied and fixed; (v) increase 1.1 ∼ 3; (vi) starting percent 0.4% ∼ 15%. For cases 4, the learning rate parameters are left as publicly determined, while the initial learning rate has been decreased by 10% from 0.1 to 0.09. The pacing parameter ranges are: (i) step size 50 ∼ 2500, for both varied and fixed; (ii) increase 1.1 ∼ 2; (iii) starting percent 2% ∼ 20%. ImageNet Dataset Details In case 5, we used a subset of the ImageNet dataset ILSVRC 2012. We used 7 classes of cats, which obtained by picking all the hyponyms of the cat synset that appeared in the dataset. The 7 cat classes were: ’Egyptian cat’, ’Persian cat’, ’cougar, puma, catamount, mountain lion, painter, panther, Felis concolor’, ’tiger cat’, ’Siamese cat, Siamese’, ’tabby, tabby cat’, ’lynx, catamount’. All images were resized to size 56× 56 for faster performance. All classes contained 1300 train images and 50 test images.
1. What is the main contribution of the paper regarding Curriculum Learning (CL)? 2. How does the proposed approach differ from previous works, specifically Weinshall et al? 3. Can you elaborate on the bootstrapping approach for estimating the scoring function? 4. What are the advantages and disadvantages of using easy/hard examples judged by the current/final hypothesis? 5. Why did the authors choose to use a single-step pacing function, and how does it compare to other pacing functions? 6. How do the results of the experiments compare to previous knowledge, and what insights were gained from them? 7. What are the limitations of the paper's contributions and experimental results?
Review
Review This problem of interest in this paper is Curriculum Learning (CL), in the context of deep learning in particular. CL refers to learning a non-random order of presenting the training examples to the learner, typically with easier examples presented before difficult ones, to guide learning more effectively. This has been shown to both speed up learning and lead to better generalization, especially for more challenging problems. In this paper, they claim that their contribution is to decompose the problem of CL into learning two functions: the scoring function and the pacing function, with the role of the former being to estimate the difficulty of each training example and the latter to moderate the schedule of presenting increasingly more challenging examples throughout training. Overall, I found it hard to understand from reading the paper what exactly is new versus what is borrowed from previous work. In particular, after reading Weinshall et al, I realized that they have already proposed a number of things that are experimented with here: 1) they proposed the approach of transfer learning from a previously-trained network as a means of estimating the ‘scoring function’. 2) they also distinguish between learning to estimate the difficulty of examples, and learning the schedule of decreasing difficulty throughout learning, which is actually stated here as the contribution of this paper. In particular, in Section 3 of Weinshall et al, there is a sub-section named “scheduling the appearance of training examples” where they describe what in the terminology of this paper would be called their pacing function. They experiment with two variants: fixed, and adaptive, which are very similar to two of the pacing functions proposed here. Bootstrapping: A component of this work that didn’t appear in Weinshall et al, is the bootstrapping approach to estimating the scoring function. In general, this involves using the same network that is being trained on the task to estimate the difficulty of the training examples. The authors explain that there are two ways to do this: estimate how easy each training example is with respect to the ‘current hypothesis’ (the weights of the network at the current step), and with respect to the ‘final hypothesis’, which they estimate if I understand correctly as the network at the end of training. The latter would necessitate first training the network in the standard way, and then using it to estimate how easy or hard each example is, and using those estimates to re-train the network from scratch using that curriculum. They refer to the former as self-paced learning and to the latter as self-taught learning. I find these names confusing in that they don’t really convey what the difference is between the two. Further, while self-paced learning has been studied before (e.g. Kuman et al), I’m not sure about self-taught learning. Is this a term that the authors here coined? If not, it would be useful to add a reference. Using easy / hard examples as judged by the current / final hypothesis: When using the current hypothesis, under some conditions, Weinshall et al showed that choosing harder examples is actually more beneficial than easy examples, similar in spirit to hard negative mining. On the other hand, when using the final hypothesis to estimate examples’ difficulty, using a schedule of increasing difficulty is beneficial. 
Based on this, I have two comments: 1) It would therefore be useful to implement a version that uses the current hypothesis to estimate how easy each example is (like the self-paced scoring function) but then inverts these estimates, in effect choosing the most challenging instead of the easiest ones, as is done for anti-curriculum learning. This would be a hybrid between the current self-paced scoring function and anti-curriculum scoring function that would essentially implement the hard negative mining technique in this context. 2) It would be useful to comment on the differences between the self-paced scoring function used here and that in Kumar et al. In particular, in this case using a curriculum based on this scoring function seems to harm training, but in Kumar et al they showed it actually increased performance in a number of different cases. Why does one work but the other doesn't? Experiments: The experiments are presented in a subset of 5 classes from CIFAR-10 (also used by Weinshall et al.), but also in the full CIFAR-10 and CIFAR-100 datasets. They used both a small CNN (same as in Weinshall et al) as well as a VGG architecture. Overall, their results are comparable to what was previously known: using a curriculum computed by transfer leads to improved learning speed and final performance (though sometimes very slightly) compared to standard training and to training with a random curriculum. Further, the benefit is larger when the task is harder (as measured by the final vanilla-trained performance). Computing the distances between the gradients obtained from using a curriculum (via the transfer scoring function) and no curriculum confirms that these two training setups indeed drive the learning in different directions; an analysis similar to Weinshall et al. Also, since, as was previously known and they also observe, the benefit of CL is larger at the beginning of training, they propose a single-step pacing function that performs similarly to other pacing functions while being simpler and more computationally efficient. The idea is to decrease the proportion of easy examples used in mini-batches only once, via a step function. Therefore at the start many easy examples are used, and after this threshold is surpassed, few easy examples are used. Overall, I don't feel the contribution of this paper is large enough to recommend acceptance. The main points that guided this decision are: 1) The relationship with previous work is not clear. In particular, Weinshall et al seems to have already proposed a few components that are claimed to be the contribution of this paper, as elaborated on above. The authors should mention that the transfer scoring function was borrowed from Weinshall et al, clarify the differences between their pacing functions and those in Weinshall et al., etc. 2) The usefulness of using easy or hard examples when consulting the current or final hypothesis is discussed but not explored sufficiently. An additional experiment is proposed above to add another 'data point' to this discussion. 3) Self-paced learning is presented as something that doesn't work and wasn't expected to work. However, in the past successes were shown with this method, so it would be useful to clarify the difference in setup, and justify this difference. 4) It seems that the experiments resulted in similar conclusions to what was already known. While it's useful to confirm these findings on additional datasets, I didn't feel that there was a significant insight gained from them.
ICLR
Title For self-supervised learning, Rationality implies generalization, provably Abstract We prove a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a representation r of the training data, and then fitting a simple (e.g., linear) classifier g to the labels. Specifically, we show that (under the assumptions described below) the generalization gap of such classifiers tends to zero if C(g) ≪ n, where C(g) is an appropriately-defined measure of the simple classifier g's complexity, and n is the number of training samples. We stress that our bound is independent of the complexity of the representation r. We do not make any structural or conditional-independence assumptions on the representation-learning task, which can use the same training dataset that is later used for classification. Rather, we assume that the training procedure satisfies certain natural noise-robustness (adding a small amount of label noise causes small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) conditions that widely hold across many standard architectures. We also conduct an extensive empirical study of the generalization gap and the quantities used in our assumptions for a variety of self-supervision-based algorithms, including SimCLR, AMDIM and BigBiGAN, on the CIFAR-10 and ImageNet datasets. We show that, unlike standard supervised classifiers, these algorithms display a small generalization gap, and the bounds we prove on this gap are often non-vacuous. 1 INTRODUCTION The current standard approach for classification is "end-to-end supervised learning" where one fits a complex (e.g., a deep neural network) classifier to the given training set (Tan & Le, 2019; He et al., 2016). However, modern classifiers are heavily over-parameterized, and as demonstrated by Zhang et al. (2017), can fit 100% of their training set even when given random labels as inputs (in which case test performance is no better than chance). Hence, the training performance of such methods is by itself no indication of their performance on new unseen test points. In this work, we study a different class of supervised learning procedures that have recently attracted significant interest. These classifiers are obtained by: (i) performing pre-training with a self-supervised task (i.e., without labels) to obtain a complex representation of the data points, and then (ii) fitting a simple (e.g., linear) classifier on the representation and the labels. Such "Self-Supervised + Simple" (SSS for short) algorithms are commonly used in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020), and have recently found uses in other domains as well (Ravanelli et al., 2020; Liu et al., 2019). Compared to standard "end-to-end supervised learning", SSS algorithms have several practical advantages. In particular, SSS algorithms can incorporate additional unlabeled data, the representation obtained can be useful for multiple downstream tasks, and they can have improved out-of-distribution performance (Hendrycks et al., 2019). Moreover, recent works show that even without additional unlabeled data, SSS algorithms can get close to state-of-the-art accuracy in several classification tasks (Chen et al., 2020b; He et al., 2020; Misra & Maaten, 2020; Tian et al., 2019).
For instance, SimCLRv2 (Chen et al., 2020b) achieves 79.8% top-1 performance on ImageNet with a variant of ResNet-152, on par with the end-to-end supervised accuracy of this architecture at 80.5%. We show that SSS algorithms have another advantage over standard supervised learning—they often have a small generalization gap between their train and test accuracy, and we prove non-vacuous bounds on this gap. We stress that SSS algorithms use over-parameterized models to extract the representation, and reuse the same training data to learn a simple classifier on this representation. Thus, the final classifier they produce has high complexity by most standard measures, and it is by no means a priori evident that their generalization gap will be small. Our bound is obtained by first noting that the generalization gap of every training algorithm is bounded by the sum of three quantities, which we name the Robustness gap, Rationality gap, and Memorization gap (we call this the RRM bound, see Fact I). We now describe these gaps at a high level, deferring the formal definitions to Section 2. All three gaps involve comparison with a setting where we inject label noise by replacing a small fraction η of the labels with random values. The robustness gap corresponds to the amount by which training performance degrades due to noise injection. That is, it equals the difference between the standard expected training accuracy (with no label noise) and the expected training accuracy in the noisy setting; in both cases, we measure accuracy with respect to the original (uncorrupted) labels. The robustness gap is nearly always small, and sometimes provably so (see Section 3). The rationality gap corresponds to the difference between performance on the noisy training samples (on which the training algorithm gets the wrong label) and test samples (on which it doesn't get any label at all), again with respect to uncorrupted labels. An optimal Bayesian procedure would have zero rationality gap, and indeed this gap is typically zero or small in practice. Since it is a nonstandard quantity, we discuss the rationality gap in Section 3.1, and explain why assuming it is small is both well-founded and does not trivialize the question of generalization. The memorization gap, which often accounts for the lion's share of the generalization gap, corresponds to the difference in the noisy experiment between the training accuracy on the entire train set and the training accuracy on the samples that received the wrong label (both measured with respect to uncorrupted labels). The memorization gap can be thought of as quantifying the extent to which the classifier can "memorize" noisy labels, or act differently on the noisy points compared to the overall train set. The memorization gap is large in standard "end-to-end supervised training". In contrast, our main theoretical result is that for SSS algorithms, the memorization gap is small if the simple classifier has small complexity, independently of the complexity of the representation. As long as the simple classifier is under-parameterized (i.e., its complexity is asymptotically smaller than the sample size), our bound on the memorization gap tends to zero. When combined with small rationality and robustness, we get concrete non-vacuous generalization bounds for various SSS algorithms on the CIFAR-10 and ImageNet datasets (see Figures 1 and 4). In a nutshell, our results are the following: Theoretical contributions. 1.
Our main theoretical result (Theorem II) is that the memorization gap of an SSS algorithm is bounded by O(√(C/n)), where C is the complexity of the simple classifier produced in the "simple fit" stage. This bound is oblivious to the complexity of the representation produced in the pre-training and does not make any assumptions on the relationship between the representation learning method and the supervised learning task. One way to interpret this result is that we give a rigorous bound on the generalization gap of SSS algorithms, under the assumptions that the robustness and rationality gaps are bounded by some small constant (e.g., 5%). As mentioned below, these assumptions hold widely in practice across many different classifiers. Moreover, these assumptions are nontrivial and do not "assume away the difficulty". Indeed, there are many natural examples of training algorithms for which these assumptions hold but the generalization gap is large. Last, making some assumptions is necessary for a generalization bound to hold for SSS algorithms; see Remark 3.1 and Appendix E. 2. We also give a theoretical justification for the assumption of a small rationality gap, by proving that a positive rationality gap corresponds to "leaving performance on the table", in the sense that we can transform a learning procedure with a large rationality gap into a procedure with better test performance (Theorem 3.2). Empirical contributions. We complement the theoretical results above with an extensive empirical study of several SSS and end-to-end algorithms on both the CIFAR-10 and ImageNet datasets. 1. We study several top-performing SSS architectures, and show that they all exhibit relatively small generalization gaps on both CIFAR-10 and ImageNet. We stress that we consider the case where the same data is used for both representation learning and classification, and hence it is by no means a priori obvious that these algorithms should have small generalization gaps. See Figures 1 and 4 for sample results and Section 4 for more details. 2. We also show that the results of Zhang et al. (2017) do not replicate for SSS algorithms, in the sense that such algorithms, despite using an over-parameterized representation, are not able to fit random label noise. 3. We undertake an empirical study of the robustness, rationality, and memorization gaps for both SSS and end-to-end supervised learning algorithms. We show that the robustness and rationality gaps are small for all these algorithms, while the memorization gap is small for SSS algorithms but can be large for end-to-end supervised learning. We show that the RRM bound is typically non-vacuous, and in fact often close to tight, for a variety of SSS algorithms on the CIFAR-10 and ImageNet datasets, including SimCLR (which achieves test errors close to its supervised counterparts). 4. We demonstrate that replacing the memorization gap with the upper bound of Theorem II yields a non-vacuous generalization bound for a variety of SSS algorithms on CIFAR-10 and ImageNet. Moreover, this bound gets tighter with more data augmentation. Related Work. There are many works on generalization bounds for supervised learning (e.g., Golowich et al. (2018); Neyshabur et al. (2017); Bartlett et al. (2017); Dziugaite & Roy (2017); Neyshabur et al. (2018); Cao & Gu (2019), and references therein). The related work section of Arora et al. (2019) contains an extensive discussion of such bounds, and why more often than not the assumptions used do not hold in practice.
Indeed, many such bounds give vacuous guarantees for modern architectures (such as the ones considered in this paper) that have the capacity to memorize their entire training set (Zhang et al., 2017). Some non-vacuous bounds are known; e.g., Zhou et al. (2019) gave a 96.5% bound on the error of MobileNet on ImageNet. Belkin et al. (2019); Nagarajan & Kolter (2019) showed some barriers for generalization gaps for standard end-to-end supervised learning. Similarly, standard approaches such as Rademacher complexity cannot directly bound SSS algorithms’ generalization gap(see Remark 3.1). Recently, Saunshi et al. (2019) and Lee et al. (2020) gave generalization bounds for self-supervised based classifiers. The two works considered special cases of SSS algorithms, such as contrastive learning and pre-text tasks. Both works make strong statistical assumptions of (exact or approximate) conditional independence relating the pre-training and classification tasks. For example, if the pre-training task is obtained by splitting a given image x into two pieces (x1, x2) and predicting x2 from x1, then Lee et al. (2020)’s results require x1 and x2 to be approximately independent conditioned on their class y. However, in many realistic cases, the two parts of the same image will share a significant amount of information not explained by the label. Our work applies to general SSS algorithms without such statistical assumptions, at the expense of assuming bounds on the robustness and rationality gaps. There have been works providing rigorous bounds on the robustness gap or related quantities (See Section 3.). However, as far as we know, the rationality gap has not been explicitly defined or studied before. We provide a brief exposition of the various types of SSS methods in Section 4, and a more detailed discussion in Appendix D.1. Paper Organization. Section 2 contains formal definitions and statements of our results. Section 3 provides an overview of prior work and our new results on the three gaps of the RRM bound. In Section 4, we describe our experimental setup and detail our empirical results. Section 5 concludes the paper and discusses important open questions. We defer proofs and additional experimental results to the appendix. Appendix B contains the proof of Theorem II, while Appendix C contains the proof of Theorem 3.2. Appendix D fully details our experimental setup.1 Notation. We use capital letters (e.g., X) for random variables, lower case letters (e.g., x) for a single value, and bold font (e.g., x) for tuples (which will typically have dimension corresponding to the number of samples, denoted by n). We use xi for the i-th element of the tuple x. We use calligraphic letters (e.g., X ,D) for both sets and distributions. 2 FORMAL STATEMENT OF RESULTS A training procedure is a (possibly randomized) algorithm T that takes as input a train set (x,y) = (xi, yi)i∈[n] ∈ (X×Y)n and outputs a classifier f : X → Y . For our current discussion, we make no assumptions on the type of classifier output or the way that it is computed. We denote the distribution over training sets in (X ×Y)n byDtrain and the distribution over test samples in X ×Y byDtest.2 The generalization gap of a training algorithm T with respect to a distribution pair D = (Dtrain,Dtest) is the expected difference between its train accuracy (which we denote by TrainD,T ) and its test performance (which we denote by TestD,T ). We will often drop subscripts such as D, T when they can be inferred from the context. 
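For concreteness, the train and test accuracies described above can be written out as below. This is a minimal formalization consistent with the surrounding text (Table 1 itself is not reproduced here), not a verbatim restatement of the paper's own table.

```latex
\mathrm{Train}_{\mathcal{D},T}
  \;=\; \mathop{\mathbb{E}}_{\substack{(\mathbf{x},\mathbf{y})\sim\mathcal{D}_{\mathrm{train}}\\ f=T(\mathbf{x},\mathbf{y})}}
  \Big[\tfrac{1}{n}\textstyle\sum_{i=1}^{n}\mathbf{1}\{f(x_i)=y_i\}\Big],
\qquad
\mathrm{Test}_{\mathcal{D},T}
  \;=\; \mathop{\mathbb{E}}_{\substack{(\mathbf{x},\mathbf{y})\sim\mathcal{D}_{\mathrm{train}},\, f=T(\mathbf{x},\mathbf{y})\\ (x,y)\sim\mathcal{D}_{\mathrm{test}}}}
  \big[\mathbf{1}\{f(x)=y\}\big],
```

with the generalization gap being Train minus Test.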
We will also consider the η-noisy experiment, which involves computing the classifier f̃ = T(x, ỹ) where ỹi = yi with probability 1 − η and is uniform over Y otherwise. Our starting point is the following observation which we call the RRM bound (for Robustness, Rationality, and Memorization). The quantities appearing in it are defined in Table 1 and discussed more in depth in Section 3. Fact I (RRM bound). For every noise parameter η > 0, training procedure T and distribution D = (Dtrain, Dtest) over training sets and test samples, the RRM bound with respect to T and D is

Train − Test (Generalization gap) ≤ [Train − Train(η)]+ (Robustness gap) + [NTrain(η) − Test]+ (Rationality gap) + [Train(η) − NTrain(η)]+ (Memorization gap),

where we denote x+ = max(x, 0). (Footnote 1: We provide our code and data in an anonymous repository on: http://github.com/ICLR2021-rep-gen/. Footnote 2: The train and test data often stem from the same distribution (i.e., Dtrain = Dntest), but not always (e.g., it does not hold if we use data augmentation). Dtest enters the RRM bound only via the rationality gap, so the assumption of small rationality may be affected if Dtrain ≠ Dntest, but the RRM bound still holds.) The RRM bound is but an observation, as it directly follows from the fact that x+ ≥ x for every x. However, it is a very useful one. As mentioned above, for natural algorithms, we expect both the robustness and rationality components of this gap to be small, and hence the most significant component is the memorization gap. Our main theoretical result is a bound on this gap: Theorem II (Memorization gap bound). Let T = (Tpre, Tfit) be an SSS training procedure obtained by first training Tpre on x ∈ Xn to get a representation r : X → R and then training Tfit on (r(x), y) for y ∈ Yn to obtain a classifier g : R → Y, with the final classifier f : X → Y defined as f(x) = g(r(x)). Then, for every noise parameter η > 0 and distribution D over Xn × Yn: Memorization gap(T) = TrainT,D(η) − NTrainT,D(η) ≤ O( (1/η) · √(Cη(Tfit)/n) ), where Cη(Tfit) is a complexity measure of the second-phase training procedure, which in particular is upper bounded by the number of bits required to describe the classifier g (See Definition 2.3.). 2.1 COMPLEXITY MEASURES We now define three complexity measures, all of which can be plugged in as the measure in Theorem II. The first one, Cmdl, is the minimum description length of a classifier in bits. At a first reading, the reader can feel free to skip the description of the other two measures Cpc and Cdc. These are superficially similar to Rademacher Complexity (cf. Bartlett & Mendelson (2002)) in the sense that they capture the ability of the hypothesis to correlate with random noise but crucially depend on the algorithm used rather than the class of concepts (see Remark 3.1). Definition 2.3 (Complexity of training procedures). Let T be a training procedure taking as input a set (r,y) = {(ri, yi)}ni=1 ∈ (R × Y)n and outputting a classifier g : r → Y and let η > 0. For every training set (r,y), we define the following three complexity measures with respect to r, y, η: • The minimum description length of T is defined as Cmdlr,y,η(T) := H(g) where we consider the model g as a random variable arising in the η-noisy experiment (see Footnote 3). • The prediction complexity of T is defined as Cpcr,y,η(T) := ∑_{i=1}^n I(g(ri); ỹi) where the ỹi's are the labels obtained in the η-noisy experiment.
• The (unconditional) deviation complexity of T is defined as Cdcr,y,η(T) := n · I(g(ri) − yi ; ỹi − yi), where the random variables above are taken over i ∼ [n] and subtraction is done modulo |Y|, identifying Y with the set {0, . . . , |Y| − 1}. (Footnote 3: The name "minimum description length" is justified by the operational definition of entropy relating it to the minimum amortized length of a prefix-free encoding of a random variable.) Conditioned on y and the choice of the index i, the deviations g(ri) − yi and ỹi − yi determine the predictions g(ri) and noisy labels ỹi, and vice versa. Hence we can think of Cdc as an "averaged" variant of Cpc, where we make the choice of the index i part of the sample space for the random variables. While we expect the two measures to be approximately close, the fact that Cdc takes i into the sample space makes it easier to estimate this quantity in practice without using a large number of executions (see Figure D.2 for convergence rates). The measure Cmdl is harder to evaluate in practice, as it requires finding the optimal compression scheme for the classifier. Appendix B contains the full proof of Theorem II. It is obtained by showing that: (i) for every r, y, η, and T it holds that Cdcr,y,η(T) ≤ Cpcr,y,η(T) ≤ Cmdlr,y,η(T), and (ii) for every SSS algorithm T = (Tpre, Tfit) and distribution D = (Dtrain, Dtest), the memorization gap of T is at most √( CdcTpre(x),y,η(Tfit) / (2n) ) · (1/η). (1) It is the quantity (1) that we compute in our experiments. 3 THE THREE GAPS We now briefly describe what is known and what we prove about the three components of the RRM bound. We provide some additional discussions in Appendix E, including "counter-examples" of algorithms that exhibit large values for each one of these gaps. The robustness gap. The robustness gap measures the decrease in training accuracy from adding η noisy labels, measured with respect to the clean labels. The robustness gap and related notions such as noise stability or tolerance have been studied in various works (cf. Frénay & Verleysen (2013); Manwani & Sastry (2013)). Interpolating classifiers (with zero train error) satisfy Train(η) ≥ 1 − η and hence their robustness gap is at most η (see left panel of Figure 2). In SSS algorithms, since the representation is learned without using labels, the injection of label noise only affects the simple classifier, which is often linear. Robustness guarantees for linear classifiers have been given previously by Rudin (2005). While proving robustness bounds is not the focus of this paper, we note in the appendix some simple bounds for least-squares minimization of linear classifiers and the (potentially inefficient) Empirical Risk Minimization algorithm (see Appendices F and G). Empirically, we observe that the robustness gap of SSS algorithms is often significantly smaller than η (see left panels of Figure 2 and Figure 3). The memorization gap. The memorization gap corresponds to the algorithm's ability to fit the noise (i.e., the gap increases with the number of fit noisy labels). If, for example, the classifier output is interpolating, i.e., it satisfies f(xi) = ỹi for every i, then accuracy over the noisy samples will be 0 (since for them yi ≠ ỹi). In contrast, the overall accuracy will be in expectation at least 1 − η, which means that the memorization gap will be ≈ 1 for small η. However, we show empirically (see right panels of Figures 2 and 3) that the memorization gap is small for many SSS algorithms and prove a bound on it in Theorem II.
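To make the quantities above concrete, the following is a minimal sketch (not the authors' released code) of how one run of the η-noisy experiment yields the three gaps and an empirical estimate of the deviation complexity Cdc; the array names (`preds_noisy`, `y`, `y_noisy`, and so on) and the choice of natural-log units for the mutual information are our own assumptions.

```python
import numpy as np

def accuracy(preds, labels):
    return float(np.mean(preds == labels))

def rrm_gaps(train_acc, test_acc, preds_noisy, y, y_noisy):
    """One run of the eta-noisy experiment.
    train_acc, test_acc : Train and Test accuracies of the clean experiment
    y       : clean train labels;  y_noisy : labels with an eta fraction randomized
    preds_noisy : train-set predictions of the classifier fitted on (x, y_noisy)
    All accuracies are measured against the clean labels y, as in the text."""
    noisy = (y != y_noisy)                               # points that received a wrong label
    train_eta = accuracy(preds_noisy, y)                 # Train(eta)
    ntrain_eta = accuracy(preds_noisy[noisy], y[noisy])  # NTrain(eta)
    robustness = max(train_acc - train_eta, 0.0)
    rationality = max(ntrain_eta - test_acc, 0.0)
    memorization = max(train_eta - ntrain_eta, 0.0)
    return robustness, rationality, memorization

def deviation_complexity(preds_noisy, y, y_noisy, num_classes):
    """Empirical C^dc = n * I(Delta; N), with Delta = g(r_i) - y_i and
    N = ytilde_i - y_i (mod num_classes), from the empirical joint distribution."""
    n = len(y)
    delta = (preds_noisy - y) % num_classes
    noise = (y_noisy - y) % num_classes
    joint = np.zeros((num_classes, num_classes))
    for d, m in zip(delta, noise):
        joint[d, m] += 1.0 / n
    p_delta, p_noise = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    mi = float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(p_delta, p_noise)[mask])))
    return n * mi

def memorization_bound(cdc, n, eta):
    """The quantity of Eq. (1): sqrt(C^dc / (2n)) / eta."""
    return float(np.sqrt(cdc / (2.0 * n)) / eta)
```

In practice one would average the estimate of Cdc over repeated noisy trials (the text reports about 20 trials for CIFAR-10 and 50 for ImageNet) before plugging it into the bound of Eq. (1).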
When combined with small rationality and robustness, this bound results in non-vacuous generalization bounds for various real settings (e.g., 48% for ResNet101 with SimCLRv2 on ImageNet, and as low as 4% for MoCo V2 with ResNet-18 on CIFAR-10). Moreover, unlike other generalization bounds, our bound decreases with data augmentation (See Figure 5.). Remark 3.1 (Memorization vs. Rademacher complexity). The memorization gap, as well as the complexity measures defined in Section 2.1, have a superficial similarity to Rademacher complexity (Bartlett & Mendelson, 2002), in the sense that they quantify the ability of the output classifier to fit noise. One difference is that Rademacher complexity is defined with respect to 100% noise, while we consider the η-noisy experiment for small η. A more fundamental difference is that Rademacher complexity is defined via a supremum over all classifiers in some class. The final classifiers of SSS algorithms are obtained by a composition of the complex representation and simple classifier. This composed classifier will in general have high Rademacher complexity, and in particular we would not be able to prove non-vacuous bounds on it using Rademacher complexity. We cannot ignore the complexity of the representation in Rademacher-complexity based analysis of SSS algorithms since the representation is learned using the same data that is later used for classification. In fact, there are examples of SSS algorithms with simple classifiers that have large generalization gaps (see Section 3.1). This shows that Rademacher complexity bounds for the class of simple classifiers cannot, on their own, be used to derive generalization bounds. Zhang et al. (2017) demonstrated a lower bound on the Rademacher complexity of modern deep networks, by showing that modern end-to-end supervised learning algorithms can fit 100% of their label noise. Our experiments show that this is not the case for SSS algorithms, which can only fit 15%-25% of the CIFAR-10 training set when the labels are completely random (See Table D.1 in the appendix.). However, absence of evidence is not evidence of absence, and the fact that empirically SSS algorithms do not fit the noise does not imply that the Rademacher complexity of the resulting class is small, nor does it, on its own, automatically imply a small generalization gap. 3.1 THE RATIONALITY GAP Unlike the other quantities defined above, the rationality gap is novel and less intuitive, and so we discuss it more in depth. The rationality gap, like all other quantities in the RRM bound, applies to any learning procedure and not only to SSS algorithms. Indeed, our empirical results show that rationality is typically small for both SSS and end-to-end algorithms, and so it is not this gap but rather the memorization gap that accounts for the difference in their generalization behavior. To build intuition for the rationality gap, consider an example of a training procedure T that, on input a train set S, has 70% test accuracy and a 10% rationality gap with noise parameter η = 5%. In the η-noisy experiment, the classifier f̃ output by T recovers the original uncorrupted label for 80% of the ≈ η·n datapoints for which it received the wrong labels. In contrast, a 10% rationality gap means the same classifier will only succeed in recovering the label of 70% of unseen test samples.
Intuitively, such a classifier is being "irrational" or "inconsistent" in the sense that it succeeds better on datapoints on which it was given the wrong label than on datapoints on which it was given no label at all. (In error-correcting code parlance, it handles corruption errors better than erasure errors.) We can turn this intuition into a formal argument, by giving a transformation from such a training algorithm T to an algorithm T′ that achieves roughly 80% test accuracy. On input a fresh unseen datapoint x, the algorithm T′ chooses a random label ỹ ∼ Y, runs T on the train set S ∪ {(x, ỹ)} to obtain some classifier f̃, and outputs f̃(x). Up to low-order terms, T′ will achieve test accuracy at least as good as the performance of T on noisy datapoints, which is 80%. The above reasoning leads to the proof of the following theorem (see also Appendix C): Theorem 3.2 (Performance on the table theorem, informal). For every training procedure T and distribution Dtest, Dtrain = Dntest, there exists a training procedure T′ satisfying TestT′ ≥ TestT + rationality gap(T) − robustness gap(T) − o(1). Why do natural algorithms have a small rationality gap? Empirically, the rationality gap is often small or zero for both SSS and end-to-end supervised learning algorithms, particularly for better-performing ones. (See middle panels of Figure 2 and Figure 3.) Theorem 3.2 provides an "economic explanation" for this phenomenon: a rational agent would not use a classifier with a positive rationality gap since this amounts to "leaving performance on the table". However, this transformation comes at a high computational cost; inference for the classifier produced by T′ is as expensive as retraining from scratch. Hence Theorem 3.2 does not fully explain why natural algorithms tend to have a small rationality gap. In this paper we take low rationality gap as an empirically-justifiable assumption. We believe that both proving that natural algorithms have small rationality gaps, as well as coming up with computationally efficient transformations to extract performance from rationality gaps, are important open questions. Does assuming a small rationality gap trivialize generalization? Since the definition of the rationality gap involves the test accuracy, the reader might wonder if assuming small rationality is not tantamount to assuming a small generalization gap. However, there is nothing "irrational" about a large generalization gap, and indeed many excellent classifiers have 100% train accuracy. In contrast, it is irrational to "leave performance on the table" and use a classifier with test accuracy p when it can be transformed into one with significantly better accuracy. Concretely, our empirical studies show that the rationality gap is uniformly small, even for end-to-end classifiers that have large generalization gaps. Hence, by itself, rationality is not enough to guarantee a small generalization gap. Is assuming a small rationality gap even needed? Since SSS algorithms use simple classifiers, the reader may wonder why we need the small-rationality-gap assumption and cannot directly prove generalization bounds using standard tools such as Rademacher complexity. The issue is that the representation used by SSS algorithms is still sufficiently over-parameterized to allow memorizing the training set. As a pedagogical example, consider a representation-learning procedure that maps a label-free training set x to a representation r : X → R under which the differently labeled x's are linearly separable.
Moreover, suppose that the representation space has dimension much smaller than n, and hence a linear classifier would have small complexity under any reasonable measure. Without access to the labels, we can transform r to a representation r′ that on input x outputs r(x) if x is in the training set, and outputs the all-zero vector (or another trivial value) otherwise. Given sufficiently many parameters, the representation r′ (or a close-enough approximation) can be implemented by a neural network. Since r and r′ are identical on the training set, a learning procedure using r′ will have the same train accuracy and (small) memorization gap. However, the generalization gap of such a procedure will be large, since it will not achieve better than trivial accuracy on unseen test examples. The issue here is not that the representation “memorizes” the train set. Representations of practical SSS algorithms are highly over-parameterized and are quite likely to memorize specific aspects of the training set. Rather, the issue is that the representation artificially behaves differently on test points in a way that decreases its performance. It is the latter property that makes the classifier “irrational”, and violates the small rationality gap assumption. 4 EMPIRICAL STUDY OF THE RRM BOUND In support of our theoretical results, we conduct an extensive empirical study of the three gaps and empirically evaluate the bound from Equation (1) for a variety of SSS algorithms for the CIFAR10 and ImageNet datasets. We provide a summary of our setup and findings below. For a full description of the algorithms and hyperparameters, see Appendix D. SSS Algorithms (Tpre, Tfit). We consider various self-supervised training algorithms that learn a representation without explicit training labels. In our study, we include methods based on contrastive learning such as Instance Discrimination (Wu et al., 2018), MoCoV2 (He et al., 2020), SimCLR (Chen et al., 2020a;b), AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019), InfoMin (Tian et al., 2020) as well as adversarial methods such as BigBiGAN (Donahue & Simonyan, 2019). For the second phase of training (also known as the evaluation phase (Goyal et al., 2019)), we consider simple models such as regularized linear regression, or small Multi-Layer Perceptrons (MLPs). For each evaluation method, we run two experiments: 1) the clean experiment where we train Tfit on the data and labels (x,y); 2) the η-noisy experiment where we train Tfit on (x, ỹ) where ỹ are the η noised labels. Unless specified otherwise we set the noise to η = 5%. Adding augmentations. We investigate the effect of data augmentation on the three gaps and the theoretical bound. For each training point, we sample t random augmentations (t = 10 unless stated otherwise) and add it to the train set. Note that in the noisy experiment two augmented samples of the same original point might be assigned with different labels. We use the same augmentation used in the corresponding self-supervised training phase. Results. Figures 1 and 2 provide a summary of our experimental results for CIFAR-10. The robustness and rationality gaps are close to zero for most SSS algorithms, while the memorization gap is usually the dominant term, especially so for models with larger generalization gap. Moreover, we see that Cdc often produces a reasonably tight bound for the memorization gap, leading to a generalization bound that can be as low as 5-10%. 
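To make the evaluation-phase setup described above concrete, the following is a minimal sketch of the clean and η-noisy experiments run on frozen representations with an L2-regularized linear classifier; the scikit-learn RidgeClassifier stand-in, the `weight_decay` value, and the array names are our own placeholders rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def make_noisy_labels(y, eta, num_classes, rng):
    """With probability eta, replace a label by a uniformly random class."""
    y_noisy = y.copy()
    flip = rng.random(len(y)) < eta
    y_noisy[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return y_noisy

def evaluation_phase(train_repr, y_train, test_repr, y_test,
                     eta=0.05, weight_decay=1e-4, seed=0):
    """Fit the simple classifier T_fit twice on frozen representations r(x):
    once on clean labels and once on eta-noisy labels."""
    rng = np.random.default_rng(seed)
    num_classes = int(y_train.max()) + 1

    clean = RidgeClassifier(alpha=weight_decay).fit(train_repr, y_train)
    y_noisy = make_noisy_labels(y_train, eta, num_classes, rng)
    noisy = RidgeClassifier(alpha=weight_decay).fit(train_repr, y_noisy)

    corrupted = y_noisy != y_train
    return {
        "Train": clean.score(train_repr, y_train),
        "Test": clean.score(test_repr, y_test),
        "Train(eta)": noisy.score(train_repr, y_train),            # measured vs. clean labels
        "NTrain(eta)": noisy.score(train_repr[corrupted], y_train[corrupted]),
    }
```

The four returned accuracies are exactly the inputs needed for the robustness, rationality, and memorization gaps discussed in Section 3.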
In Figures 3 and 4 we give a summary of our experimental results for SSS algorithms on ImageNet. Again, the rationality and robustness gaps are bounded by small constants. Notice that adding augmentations reduces memorization, but may lead to an increase in the rationality gap. This is also demonstrated in Figure 5, where we vary the number of data augmentations systematically for one SSS algorithm (AMDIM) on CIFAR-10. Since computing the Theorem II bound for ImageNet is computationally expensive (See Appendix D.5.1.), we compute it only for two algorithms, which achieve a non-vacuous generalization bound of 48%. 5 CONCLUSIONS AND OPEN QUESTIONS This work demonstrates that SSS algorithms have small generalization gaps. While our focus is on the memorization gap, our work motivates more investigation of both the robustness and rationality gaps. In particular, we are not aware of any rigorous bounds for the rationality gap of SSS algorithms, but we view our "performance on the table" theorem (Theorem 3.2) as a strong indication that it is close to zero for natural algorithms. Given our empirical studies, we believe the assumptions of small robustness and rationality conform well to practice. Our numerical bounds are still far from tight, especially for ImageNet, where evaluating the bound (more so with augmentations) is computationally expensive. Nevertheless, we find it striking that already in this initial work, we get non-vacuous (and sometimes quite good) bounds. Furthermore, the fact that the empirical RRM bound is often close to the generalization gap shows that there is significant room for improvement. Overall, this work can be viewed as additional evidence for the advantages of SSS algorithms over end-to-end supervised learning. Moreover, some (very preliminary) evidence shows that end-to-end supervised learning implicitly separates into representation learning and classification phases (Morcos et al., 2018). Understanding the extent to which supervised learning algorithms implicitly perform SSS learning is an important research direction in its own right. To the extent this holds, our work might shed light on such algorithms' generalization performance as well. 6 ACKNOWLEDGEMENTS We thank Dimitris Kalimeris, Preetum Nakkiran, and Eran Malach for comments on early drafts of this work. This work was supported in part by NSF award CCF 1565264, IIS 1409097, DARPA grant W911NF2010021, and a Simons Investigator Fellowship. We also thank Oracle and Microsoft for grants used for computational resources. Y.B. is partially supported by MIT-IBM Watson AI Lab. Work partially performed while G.K. was an intern at Google Research. A MUTUAL INFORMATION FACTS Lemma A.1. If A, B are two Bernoulli random variables with nonzero expectation then |E[A | B=1] − E[A]| ≤ √( I(A;B)/2 ) / E[B]. Proof. A standard relation between mutual information and KL-divergence gives I(A;B) = DKL(pA,B || pA pB). On the other hand, by the Pinsker inequality, sup_{S ⊆ {0,1}×{0,1}} |pA,B(S) − pA×B(S)| ≤ √( DKL(pA,B || pA pB)/2 ) = √( I(A;B)/2 ). Thus (letting S = {(1, 1)}), |Pr[A=1, B=1] − Pr[A=1] Pr[B=1]| ≤ √( I(A;B)/2 ). Consequently, |E[A | B=1] − E[A]| ≤ √( I(A;B)/2 ) / E[B]. Lemma A.2. For three random variables W, X, Y, s.t. X and Y are independent, I(W; X, Y) ≥ I(W; X) + I(W; Y). Proof. Using the chain rule for mutual information we have: I(W; X, Y) = I(W; X) + I(W; Y | X). Since X, Y are independent, H(Y | X) = H(Y), and since conditioning only reduces entropy, we have H(Y | W, X) ≤ H(Y | W).
Combining the two we get I(W; Y | X) = H(Y | X) − H(Y | W, X) ≥ H(Y) − H(Y | W) = I(W; Y). Thus we have that I(W; X, Y) ≥ I(W; X) + I(W; Y). Note that by induction we can extend this argument to show that I(W; X1, ..., Xn) ≥ ∑ I(W; Xi) where the Xi are mutually independent. B SIMPLE CLASSIFIERS IMPLY SMALL MEMORIZATION GAP In this appendix we prove our main theoretical result (Theorem B.4). We will start by giving a formal definition of SSS algorithms and restating the definition of our complexity measures. Definition B.1 (SSS Algorithms, restated). An SSS algorithm over (X × Y)n is a procedure T = (Tpre, Tfit) that takes as input a set (x,y) and operates as follows: 1. Tpre takes the (label-free) data points x as input and outputs a representation r : X → R for some set R; 2. On input the points {(r(xi), yi)}ni=1, Tfit outputs a simple classifier g : R → Y; 3. The output is a classifier f : X → Y defined as f(x) = g(r(x)) for every x ∈ X. We now restate the definitions of our complexity measures. Definition B.2 (Complexity of training procedures, restated). Let T be a training procedure taking as input (r,y) = {(ri, yi)}ni=1 ∈ (R × Y)n and outputting a classifier g : r → Y, and let η > 0. For every training set (r,y): • The minimum description length of T with respect to r, y, η is defined as Cmdlr,y,η(T) = H(g) where g is the random variable T(r, ỹ) in the η-noisy experiment. • The prediction complexity of T with respect to r, y, η is defined as Cpcr,y,η(T) := ∑_{i=1}^n I(g(ri); ỹi), where g(ri) and ỹi are viewed as random variables over the sample space induced by choosing ỹ according to the η-noisy experiment w.r.t. y and letting g = T(x, ỹ). • The deviation complexity of T with respect to r, y, η is defined as Cdcr,y,η(T) := n · I(∆; N) where ∆ = g(ri) − yi (mod |Y|) and N = ỹi − yi (mod |Y|) are random variables taken over both the above sample space and the choice of i ∼ [n], identifying Y with {0, . . . , |Y| − 1}. The following theorem shows that Cdc is upper bounded by Cpc, which in turn is bounded by the operational entropy of g. Theorem B.3 (Relation of complexity measures). For every r, y, η > 0, and T, Cdcr,y,η(T) ≤ Cpcr,y,η(T) ≤ Cmdlr,y,η(T), where g is the classifier output by T (considered as a random variable). Proof. Fix T, r, y, η. We get ỹ by choosing i.i.d. random variables N1, . . . , Nn, each equalling 0 with probability 1 − η and uniform otherwise, and letting ỹi = yi + Ni (mod |Y|). We start by proving the second inequality Cpcr,y,η(T) ≤ H(g). Let g = T(r, ỹ) and let p = (g(r1), . . . , g(rn)) be the vector of predictions. Then, Cpcr,y,η(T) = ∑i I(pi; ỹi) = ∑i I(pi; Ni) (2), with the last equality holding since for fixed yi, Ni determines ỹi and vice versa. However, since the full vector p contains only more information than pi, the right-hand side of (2) is at most ∑_{i=1}^n I(p; Ni) ≤ I(p; N1, . . . , Nn), using the fact that the Ni random variables are independent (see Lemma A.2). For a fixed r, the value of p is completely determined by g and hence the entropy of p is at most H(g), establishing the second inequality of the theorem. We now turn to the first inequality Cdcr,y,η(T) ≤ Cpcr,y,η(T). Let ∆i = pi − yi (mod |Y|). Then, (1/n) · Cpcr,y,η(T) = E_{j∼[n]} I(pj; Nj) = E_{j∼[n]} I(∆j; Nj) (3), since pi determines ∆i and vice versa. But, since Nj = N | i=j and ∆j = ∆ | i=j (where N, ∆ are the random variables defined in Definition B.2), the right-hand side of (3) equals E_{j∼[n]} I(∆; N | i=j) = E_{j∼[n]} [ H(N | i=j) − H(N | ∆, i=j) ]. (4) Since N1, . . .
, Nn are identically distributed, H(N | i=j) = H(N), which means that the right-hand side of (4) equals H(N) − E_{j∼[n]} H(N | ∆, i=j) ≥ H(N) − H(N | ∆) = I(∆; N), with the inequality holding since on average conditioning reduces entropy. By definition I(∆; N) = (1/n) · Cdcr,y,η(T), establishing what we wanted to prove. The complexity measures Cpc and Cdc are defined with respect to a fixed train set (r,y), rendering them applicable for single training sets such as CIFAR-10 and ImageNet that arise in practice. If D is a distribution over (r,y), then we define the complexity measures Cpc and Cdc with respect to D as the average of the corresponding measure with respect to (r,y) ∼ D. We now restate Theorem II: Theorem B.4 (Theorem II, restated). Let T = (Tpre, Tfit) be a training procedure obtained by first training Tpre on x ∈ Xn to obtain a representation r : X → R and then training Tfit on (r(x), y) where y ∈ Yn to obtain a classifier g : R → Y. Then, for every noise parameter η > 0 and distribution Dtrain over (X × Y)n, Memorization gap(T) = TrainDtrain,T(η) − NTrainDtrain,T(η) ≤ √( CdcDr,η(Tfit) / (2n) ) · (1/η), where Dr is the distribution over (R × Y)n induced by Tpre on Dtrain. Note that the bound on the right-hand side is expressed only in terms of the complexity of the second stage Tfit and is independent of the complexity of Tpre. The crux of the proof is showing (close to) independence between the corrupted indices and the prediction deviation of g resulting from the noise. Proof. Let (r,y) be sampled by first drawing (x,y) ∼ Dtrain over (X × Y)n and then applying r = r(x) where r = Tpre(x). Consider the sample space of sampling ỹ according to the η-noisy distribution with respect to y, computing g = Tfit(r, ỹ), and sampling i ∼ [n]. We define the following two Bernoulli random variables over this sample space: Z = 1{∆=0}, which is 1 if g(ri) = yi and 0 otherwise, and B = 1{N≠0}, which is 1 if ỹi ≠ yi and 0 otherwise. For a given r, y, since Z is determined by ∆ and B is determined by N, I(Z; B) ≤ I(∆; N) = Cdcr,y,η(Tfit)/n. By Lemma A.1, for all Bernoulli random variables B, Z, |E[Z] − E[Z | B=1]| ≤ √( I(Z;B)/2 ) / E[B], and hence in our case (since E[B] = η), E[Z] − E[Z | B=1] ≤ √( Cdcr,y,η(Tfit) / (2n) ) · (1/η). But E[Z] corresponds to the probability that g(r) = y for (r, y) in the train set, while E[Z | B=1] corresponds to this probability over the noisy samples. Hence the memorization gap is bounded by E_{(r,y)∼Dr}[ √( Cdcr,y,η(Tfit) / (2n) ) · (1/η) ] ≤ (1/η) · √( E_{(r,y)∼Dr}[ Cdcr,y,η(Tfit) / (2n) ] ) = √( CdcDr,η(Tfit) / (2n) ) · (1/η), using Jensen's inequality and the concavity of the square root for the first inequality. C POSITIVE RATIONALITY GAP LEAVES ROOM FOR IMPROVEMENT In this appendix, we prove the "performance on the table theorem", which states that we can always transform a robust training procedure with a positive rationality gap into a training procedure with better performance: Theorem C.1 (Performance on the table theorem, restated). For every training procedure T and Dtest, n, η, if Dtrain = Dntest there exists a training procedure S such that TestS,D,n ≥ NTrainT,D,n(η) − o(1) (5), where o(1) is a term that vanishes with n, and under the assumption that TrainT,D,n(η) ≥ NTrainT,D,n(η). For any reasonable training procedure T, performance on noisy train samples will not be better than the overall train accuracy, and hence the assumption will be satisfied.
In particular (since we can always add noise to our data), the above means that we can obtain a procedure S′ whose clean test performance is at least TestT + ∆, where ∆ = NTrainT(η) − TestT is the rationality gap of T. Hence if the rationality gap is larger than the robustness gap, we can use the above to improve the test performance of "irrational" networks. (Note that the robustness gap of almost all standard training procedures is at most η and in fact often much smaller.) We stress that the procedure of Theorem 3.2, while running in "polynomial time", is not particularly practical, since it makes inference as computationally expensive as training. However, it is a proof of concept that irrational networks are, to some extent, "leaving performance on the table". Proof. Let T be a procedure as above. Our algorithm S would be the following: • Training: The algorithm will not do any training, but on input labels D = {(xi, ỹi)} it simply stores these labels. • Inference: On input a data point x, Algorithm S will choose i ∈ [n] at random, and run T on the data D replacing the i-th sample with (x, ỹ) where ỹ is chosen uniformly at random. The output is f(x) where f is the classifier output by T. First note that while the number of noisy samples could change by one by replacing (xi, yi) with (x, ỹ), since this number is distributed according to the Binomial distribution with mean ηn and standard deviation √((1 − η)ηn) ≫ 1, this change can affect probabilities by at most an o(1) additive factor. If Y has k classes, then with probability 1 − 1/k we will make (x, ỹ) noisy (y ≠ ỹ), in which case the expected performance on it will be NTrainT(η). With probability 1/k, we choose the correct label y, in which case performance on this sample will be equal to the expected performance on clean samples, which by our assumptions is at least NTrainT(η) as well. D EXPERIMENTAL DETAILS We perform an empirical study of the RRM bound for a wide variety of self-supervised training methods on the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) training datasets. We provide a brief description of all the self-supervised training methods that appear in our results below. For each method, we use the official pre-trained models on ImageNet wherever available. Since very few methods provide pre-trained models for CIFAR-10, we train models from scratch. The architectures and other training hyper-parameters are summarized in Table H.4 and Table H.3. Since our primary aim is to study the RRM bound, we do not optimize for reaching state-of-the-art performance in our re-implementations. For the second phase of training, we use L2-regularized linear regression, or small non-interpolating Multi-layer perceptrons (MLPs). D.1 SELF-SUPERVISED TRAINING METHODS (TPRE) There are a variety of self-supervised training methods for learning representations without explicit labels. The two chief classes of self-supervised learning methods are: 1. Contrastive learning: These methods seek to find an embedding of the dataset that pushes a positive pair of images close together and a pair of negative images far from each other. For example, two different augmented versions of the same image may be considered a positive pair, while two different images may be considered a negative pair. Different methods such as Instance Discrimination, MoCo, SimCLR, AMDIM differ in the way they select the positive/negative pairs, as well as other details like the use of a memory bank or the encoder architecture.
(See Falcon & Cho (2020) for a detailed comparison of these methods). 2. Handcrafted pretext tasks: These methods learn a representation by designing a fairly general supervised task, and utilizing the penultimate or other intermediate layers of this network as the representation. Pretext tasks include a variety of methods such as predicting the rotation angle of an input image (Gidaris et al., 2018), solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), denoising images (Vincent et al., 2008) or image inpainting (Pathak et al., 2016). Additionally, adversarial image generation can be used by augmenting the image generator with an encoder (Donahue & Simonyan, 2019). We focus primarily on contrastive learning methods since they achieve state-of-the-art performance. We now describe these methods briefly. Instance Discrimination: (Wu et al., 2018) In essence, Instance Discrimination performs supervised learning with each training sample as a separate class. They minimize the non-parametric softmax loss given below for the training dataset: J(θ) = −∑_{i=1}^{n} log( exp(v_i^T v/τ) / ∑_{j=1}^{n} exp(v_j^T v/τ) ) (6), where vi = fθ(xi) is the feature vector for the i-th example. They use memory banks and a contrastive loss (also known as Noise Contrastive Estimation or NCE (Gutmann & Hyvärinen, 2010)) for computing this loss efficiently for large datasets. So in this case, a positive pair is an image and itself, while a negative pair is two different training images. Momentum Contrastive (MoCo): (He et al., 2020) MoCo replaces the memory bank in Instance Discrimination with a momentum-based query encoder. MoCoV2 (Chen et al., 2020c) uses various modifications from SimCLR, like a projection head, and combines it with the MoCo framework for improved performance. AMDIM: (Bachman et al., 2019) AMDIM uses two augmented versions of the same image. For these augmentations, they use random resized crops, random jitters in color space, random horizontal flip and random conversion to grayscale. They apply the NCE loss across multiple scales, by using features from multiple layers. They use a modified ResNet by changing the receptive fields to decrease overlap between positive pairs. CMC: (Tian et al., 2019) CMC creates two views for contrastive learning by converting each image into the Lab color space. L and ab channels from the same image are considered to be a positive pair, while those from two different images are considered to be a negative pair. PiRL: (Misra & Maaten, 2020) PiRL first creates a jigsaw transformation of an image (it divides an image into 9 patches and shuffles these patches). It treats an image and its jigsaw as a positive pair, and that of a different image as a negative pair. They additionally modify the encoder on the jigsaw branch. SimCLRv1 and SimCLRv2: (Chen et al., 2020a;b) SimCLR also uses strong augmentations to create positive and negative pairs. They use random resized crops, random Gaussian blur and random jitters in color space. Crucially, they use a projection head that maps the representations to a 128-dimensional space where they apply the contrastive loss. They do not use a memory bank, but use a large batch size. InfoMin: InfoMin uses random resized crop, color jitter and Gaussian blur, as well as jigsaw shuffling from PiRL. D.2 SIMPLE CLASSIFIER (TFIT) After training the representation learning method, we extract representations r for the training and test images. We do not add random augmentations to the training images (unless stated otherwise).
Then, we train a simple classifier on the dataset {r(xi), yi}ni=1. We use a linear classifier in most cases, but we also try a small multi-layer perceptron (as long as it has few parameters and does not interpolate the training data). We add weight decay in some methods to achieve good test accuracy (See Table H.4 for values for each method.) For the noisy experiment, we set the noise level to η = 5%. To compute the complexity bound Cdc we run 20 trials of the noisy experiment for CIFAR10 and 50 trials for ImageNet. D.3 EXPERIMENTAL DETAILS FOR EACH PLOT Figure 1. This figure shows the robustness, rationality and memorization gap for various SSS algorithms trained on CIFAR-10. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.3. For the second phase Tfit, we use L2regularized linear regression for all the methods. For each algorithm listed in Table H.3, the figure contains 2 points, one without augmentations, and one with augmentations. Further, we compute the complexity measure Cdc for all the methods. All the values (along with the test accuracy) are listed in Table H.1. Figure 2. This figure shows the robustness, rationality and memorization for CIFAR-10 for all the same methods as in Figure 1. We only include the points without augmentation to show how rationality behaves when (Dtrain,Dtest) are identical. All the values (along with the test accuracy) are listed in Table H.1. For the supervised architectures, we train a Myrtle-5 (Page, 2018) convolutional network, a ResNet-18 (He et al., 2016) and a WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with standard training hyperparameters. Figure 3 and Figure 4. These figures show the robustness, rationality and memorization for the ImageNet dataset. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.4. For the second phase Tfit, we use L2-regularized linear regression for all the methods. The figures also contain some points with 10 augmentations per training image. Further, we compute the complexity measure Cdc for all three methods - SimCLRv2 with architectures ResNet-50-1x and ResNet-101-2x. All the values (along with the test accuracy) are listed in Table H.2. Figure 5 This figure shows the effect of increasing augmentations. We add t = {2, ..., 10} augmentations and re-train the simple classifier. We do this for the CIFAR-10 dataset, AMDIM selfsupervised training with the AMDIM encoder and linear regression (See Table H.3 for the hyperparameters). D.4 ADDITIONAL RESULTS D.4.1 GENERALIZATION ERROR OF SSS ALGORITHMS To show that SSS algorithms have qualitatively different generalization behavior compared to standard end-to-end supervised methods, we repeat the experiment from Zhang et al. (2017). We randomize all the training labels in the CIFAR-10 dataset and train 3 high-performing SSS methods on these noisy labels. For results see Table D.1. Unlike fully supervised methods, SSS algorithms do not achieve 100% training accuracy on the dataset with noisy labels. In fact, their training accuracies are fairly low (≈ 15-25%). This suggests that the empirical Rademacher complexity is bounded. The algorithms were trained without any augmentations during the simple fitting phase for both SSS and supervised algorithms. The SSS methods were trained using parameters described in Table H.3. 
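As a minimal sketch of this randomized-label experiment (the function name, the `weight_decay` value, and the RidgeClassifier stand-in for the linear probe are our own placeholders), all labels are replaced with uniform random classes and we record how much of this noise the simple classifier can fit on the frozen representations.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def fit_random_labels(train_repr, num_classes=10, weight_decay=1e-4, seed=0):
    """Zhang et al. (2017)-style check on frozen SSS representations:
    train accuracy when every label is random (the text reports roughly
    15-25% for SSS methods on CIFAR-10, versus ~100% for end-to-end training)."""
    rng = np.random.default_rng(seed)
    y_random = rng.integers(0, num_classes, size=len(train_repr))
    probe = RidgeClassifier(alpha=weight_decay).fit(train_repr, y_random)
    return probe.score(train_repr, y_random)
```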
D.5 RRM BOUND WITH VARYING NOISE PARAMETER We now investigate the effect of varying noise levels on the three gaps as well as on the complexity. We see that the robustness gap increases as we add more noise—this is expected as noise should affect the clean training accuracy. We also observe that the memorization gap decreases, suggesting that Cdcη as a function of η goes down faster than η^2 (see Appendix B). The Theorem II bound on the memorization gap also decays strongly with η, becoming tighter as the noise increases. D.5.1 CONVERGENCE OF COMPLEXITY MEASURES We now plot the complexity measures Cdc and Cpc with increasing number of trials for one of the SSS algorithms. As expected, Cdc < Cpc and Cdc converges in about 20 trials for CIFAR-10. On the other hand, the complexity computations for ImageNet need many more trials for convergence, since it contains about 10 augmentations × 1.2 million training samples, making it cost-prohibitive to compute for all the methods. For CIFAR-10, we use AMDIM with the AMDIM encoder architecture without augmentations. For ImageNet, we use SimCLRv2 with the ResNet-101 architecture with 10 augmentations per training sample. E EXAMPLES OF ALGORITHMS WITH LARGE GAPS While we argued that SSS algorithms will tend to have small robustness, rationality, and memorization gaps, this does not hold in the worst case and there are examples of such algorithms that exhibit large gaps in each of those cases. E.1 LARGE ROBUSTNESS GAP A large robustness gap can only arise via computational (as opposed to statistical) considerations. That is, if a training procedure outputs a classifier f ∈ F that achieves on average accuracy α on a clean train set (X, Y), then with high probability, if (X, Ỹ) is an η-noisy train set then there exists f ∈ F that achieves α(1 − η) accuracy on this train set (by fitting only the "clean" points). However, the training algorithm might not always be able to find such a classifier. For example, if the distribution has the form (x, y) = (x, ∑ aj xj mod 2) where x ∼ GF(2)^ℓ = Z_2^ℓ and a ∈ GF(2)^ℓ is some hidden vector, then there is an efficient algorithm (namely Gaussian elimination) to find a given the samples (x, y) and hence get accuracy 1. However, for every ε > 0 and η > 0, there is no known efficient algorithm that, given equations of the form {⟨a, xi⟩ = ỹi}i∈[n] of which a 1 − η fraction are unperturbed, finds a′ ∈ GF(2)^ℓ such that ∑ a′j xj = ∑ aj xj (mod 2) on a 1/2 + ε fraction of the x's. This is known as the learning parity with noise (LPN) problem (Blum et al., 1993). The assumption of robustness is necessary for a small generalization gap, in the sense that we can come up with (contrived) examples of algorithms that have small rationality and memorization gaps while still having a large generalization gap. For example, consider an algorithm T that has a large generalization gap (high train accuracy and small test accuracy), and suppose we augment it to the following algorithm: T′(x,y) = T(x,y) if y is "clean" and T′(x,y) = 0 if y is "noisy", where 0 denotes the constant zero function (e.g., some trivial classifier) and we use some algorithm to estimate whether or not the labels are noisy. (Such estimates can often be achieved in many natural cases.) The algorithm T′ will inherit the generalization gap of T, since that depends only on the experiment without noise. Since performance on noisy and clean training samples will be the same (close to random), T′ will have zero memorization gap. Since we have assumed small test accuracy, it will have zero rationality gap also.
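To illustrate the noiseless half of the parity example above, here is a small sketch of recovering the hidden vector a by Gaussian elimination over GF(2). The function name and the full-column-rank assumption are ours, for illustration only; the point of the example is that no analogously efficient procedure is known once an η fraction of the labels is corrupted (the LPN problem).

```python
import numpy as np

def recover_parity_secret(X, y):
    """Given clean samples with y = <a, x> (mod 2), recover a by GF(2)
    Gaussian elimination. Assumes X (n x d, 0/1 entries) has full column rank."""
    A = np.concatenate([X, y[:, None]], axis=1) % 2    # augmented system [X | y]
    n, d = X.shape
    row = 0
    for col in range(d):
        pivot = next((r for r in range(row, n) if A[r, col] == 1), None)
        if pivot is None:
            continue                                    # free column (rank-deficient)
        A[[row, pivot]] = A[[pivot, row]]               # move the pivot row up
        for r in range(n):
            if r != row and A[r, col] == 1:
                A[r] = (A[r] + A[row]) % 2              # eliminate this column elsewhere
        row += 1
    a = np.zeros(d, dtype=int)
    for r in range(row):
        cols = np.nonzero(A[r, :d])[0]
        if len(cols) == 1:                              # fully solved coordinate
            a[cols[0]] = A[r, d]
    return a
```

With, say, X drawn uniformly from {0,1}^(n x d) for n somewhat larger than d and y = (X @ a) % 2, this recovers a exactly; flipping an η fraction of the entries of y breaks the procedure, matching the robustness-gap discussion above.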
E.2 LARGE RATIONALITY GAP

As discussed in Section C, in the case that Dtrain = D^n_test, a robust algorithm with a large rationality gap leaves "performance on the table". We can obtain such algorithms by artificially dropping performance on the test data. For example, in the SSS framework, since the representation r is over-parameterized and can memorize the entire train set, we can consider the trivial representation

r(x) = x if x is in the train set, and r(x) = 0 otherwise.

If we now train some simple classifier on r(x) then it can have non-trivial performance on the noisy train samples, while getting trivial accuracy on all samples outside the train set. In cases where Dtrain and Dtest are different (for example when Dtrain is an augmented version of Dtest), we can no longer claim that a large rationality gap corresponds to "leaving performance on the table". For example, we do observe (mild) growth in the rationality gap as we add more augmented points to the training set.

E.3 LARGE MEMORIZATION GAP

It is not hard to find examples of networks with a large memorization gap. Indeed, as mentioned before, any standard interpolating supervised learning algorithm will get a memorization gap close to 1.

F ROBUSTNESS OF LEAST SQUARES CLASSIFIERS

One can prove robustness for classes of algorithms under varying assumptions. As a simple example, we record here a self-contained observation of how margin leads to robustness in least squares minimization. (We believe that this bound is folklore, but we weren't able to find the right reference.) This is a very simple but also pessimistic bound, and much better ones often hold.

Lemma F.1. Let x_1, …, x_n ∈ R^d and y_1, …, y_n ∈ [k], and consider a linear function f : R^d → R^k that minimizes the quantity Σ_{i∈[n], j∈[k]} |f(x_i)_j − 1_{y_i=j}|^2, and suppose that for a p fraction of the i's, the maximum over j ∈ [k] of f(x_i)_j is γ larger than the second-largest value. Then in expectation, if we let ỹ be the η-noisy version of y and f̃ minimizes Σ_{i∈[n], j∈[k]} |f̃(x_i)_j − 1_{ỹ_i=j}|^2, we get that argmax_j f̃(x_i)_j = y_i for at least a p − 4η/γ^2 fraction of the i's.

Proof. We identify y with its "one-hot" encoding as a vector in R^{nk}. Let V ⊆ R^{nk} be the subspace of all vectors of the form (g(x_1), …, g(x_n)) for linear g : R^d → R^k. If f is the minimizer in the theorem statement, and p = (f(x_1), …, f(x_n)), then p = Π_V y where Π_V is the orthogonal projection to the subspace V. If f̃ is the minimizer for the noisy labels and p̃ = (f̃(x_1), …, f̃(x_n)), then p̃ = Π_V ỹ = Π_V (y + e) where e is the noise vector ỹ − y. Hence ‖p − p̃‖ = ‖Π_V e‖ ≤ ‖e‖. But in expectation ‖e‖^2 ≤ 2ηn (since we flip a label with probability ≤ η). For every point i for which the margin was at least γ in p, if p̃'s prediction differs at i, then the contribution of the i-th block to their squared norm difference is at least γ^2/2 (by shifting the maximum coordinate by −γ/2 and the second-largest one by γ/2). Hence at most 4ηn/γ^2 of these points could have different predictions in p and p̃.

G ROBUSTNESS OF EMPIRICAL RISK MINIMIZER

The (potentially inefficient) algorithm that minimizes the classification errors is always robust.

Lemma G.1. Let T(x, y) = argmin_{f∈F} Σ_{i=1}^n 1_{f(x_i)≠y_i}. Then for every η > 0,

Robustness gap(T) ≤ 2η.

Proof. Let x, y be any train set, let α = min_{g∈F} (1/n) Σ_{i=1}^n 1_{g(x_i)≠y_i} be the minimum fraction of errors, and let f be the minimizer of this quantity. Let ỹ be the η-noisy version of y and let η̃ be the fraction of i on which y_i ≠ ỹ_i. Then,

(1/n) Σ_{i=1}^n 1_{f(x_i)≠ỹ_i} ≤ α + η̃.   (7)
Hence if f̃ is the minimizer of the quantity in (7) (computed with respect to ỹ), then we know that f̃(x_i) ≠ ỹ_i for at most an α + η̃ fraction of the i's, and so f̃(x_i) ≠ y_i for at most an α + 2η̃ fraction of the i's. Since the train accuracy of T is 1 − α and the expectation of η̃ is η, we get that in expectation Train_T(η) ≥ Train_T − 2η.

H LARGE TABLES

Table H.1 – Summary of all the methods, architectures and the corresponding results (gaps and accuracies) on CIFAR-10, sorted by generalization gap. While Figure 1 already plots this data, here we also provide the test performance of the corresponding models.

Method | Backbone | DataAug | Generalization Gap | Robustness | Memorization | Rationality | Theorem II bound | RRM bound | Test Acc
mocov2 | resnet18 | True | -7.35 | 0.07 | 0.21 | 0.00 | 3.47 | 0.28 | 67.19
mocov2 | wide resnet50 2 | True | -6.37 | 0.18 | 1.03 | 0.00 | 7.63 | 1.21 | 70.99
mocov2 | resnet101 | True | -6.01 | 0.15 | 0.71 | 0.00 | 6.38 | 0.86 | 68.58
mocov2 | resnet50 | True | -5.38 | 0.19 | 0.84 | 0.00 | 6.99 | 1.03 | 69.68
simclr | resnet50 | True | -2.89 | 0.30 | 0.55 | 0.00 | 6.63 | 0.85 | 91.96
amdim | resnet101 | True | -0.91 | 0.64 | 3.70 | 0.00 | 25.99 | 4.34 | 63.56
amdim | resnet18 | True | 0.33 | 0.23 | 1.15 | 0.00 | 8.66 | 1.38 | 62.84
mocov2 | resnet18 | False | 1.43 | 0.15 | 1.24 | 0.03 | 14.14 | 1.43 | 67.60
simclr | resnet18 | False | 1.43 | 0.28 | 0.79 | 0.36 | 13.35 | 1.43 | 82.50
amdim | wide resnet50 2 | True | 1.60 | 0.69 | 2.46 | 0.00 | 19.20 | 3.15 | 64.38
simclr | resnet50 | False | 1.97 | 0.22 | 0.78 | 0.97 | 15.75 | 1.97 | 92.00
simclr | resnet50 | False | 2.24 | 0.52 | 1.71 | 0.01 | 19.53 | 2.24 | 84.94
mocov2 | resnet50 | False | 2.72 | 0.30 | 2.96 | 0.00 | 24.18 | 3.26 | 70.09
mocov2 | resnet101 | False | 2.82 | 0.33 | 3.03 | 0.00 | 22.78 | 3.36 | 69.08
mocov2 | wide resnet50 2 | False | 3.11 | 0.38 | 2.79 | 0.00 | 22.39 | 3.18 | 70.84
amdim | resnet50 bn | True | 3.69 | 0.84 | 4.22 | 0.00 | 31.12 | 5.06 | 66.44
amdim | resnet18 | False | 4.34 | 0.42 | 4.58 | 0.00 | 33.47 | 5.00 | 62.28
amdim | amdim encoder | True | 4.43 | 0.68 | 0.36 | 3.39 | 10.32 | 4.43 | 87.33
amdim | amdim encoder | False | 6.68 | 2.08 | 5.69 | 0.00 | 70.52 | 7.77 | 87.38
amdim | resnet101 | False | 12.46 | 1.22 | 14.26 | 0.00 | 100.00 | 15.49 | 62.43
amdim | wide resnet50 2 | False | 13.07 | 1.70 | 15.33 | 0.00 | 100.00 | 17.03 | 63.80
amdim | resnet50 bn | False | 14.73 | 1.81 | 16.63 | 0.00 | 100.00 | 18.43 | 66.28

Table H.2 – Summary of all the methods, architectures and their corresponding results (gaps and accuracies) on ImageNet, sorted by generalization gap. While Figure 4 already plots this data, here we also provide the test performance of the corresponding models.
Method | Backbone | DataAug | Generalization Gap | Robustness | Memorization | Rationality | Theorem II bound | RRM bound | Test Acc
simclrv2 | r50 1x sk0 | True | -2.34 | 0.26 | 0.68 | 0.00 | 46.93 | 0.94 | 70.96
simclrv2 | r101 2x sk0 | True | 0.63 | 0.10 | 0.80 | 0.00 | 47.90 | 0.91 | 77.24
simclrv2 | r152 2x sk0 | True | 1.00 | 0.13 | 0.77 | 0.10 | NA | 1.00 | 77.65
moco | ResNet-50 | True | 1.32 | 0.57 | 0.93 | 0.00 | NA | 1.49 | 70.15
InfoMin | ResNet-50 | True | 4.88 | 0.81 | 1.01 | 3.06 | NA | 4.88 | 72.29
PiRL | ResNet-50 | True | 6.23 | 0.29 | 0.99 | 4.95 | NA | 6.23 | 60.56
InsDis | ResNet-50 | True | 6.85 | 0.25 | 1.13 | 5.46 | NA | 6.85 | 58.30
simclrv2 | r101 1x sk1 | False | 8.23 | 0.71 | 4.66 | 2.86 | NA | 8.23 | 76.07
InfoMin | ResNet-50 | False | 10.21 | 2.34 | 8.96 | 0.00 | NA | 11.31 | 70.31
simclrv2 | r152 1x sk0 | False | 10.32 | 1.12 | 6.93 | 2.26 | NA | 10.32 | 74.17
simclrv2 | r101 1x sk0 | False | 10.53 | 1.11 | 6.99 | 2.42 | NA | 10.53 | 73.04
simclrv2 | r50 1x sk0 | False | 10.62 | 0.99 | 7.31 | 2.31 | NA | 10.62 | 70.69
moco | ResNet-50 | False | 10.72 | 1.82 | 7.86 | 1.04 | NA | 10.72 | 68.39
simclrv2 | r152 2x sk0 | False | 10.92 | 0.75 | 7.45 | 2.72 | NA | 10.92 | 77.25
simclrv2 | r101 2x sk0 | False | 11.02 | 0.74 | 7.51 | 2.78 | NA | 11.02 | 76.72
simclr | ResNet50 1x | False | 11.07 | 1.22 | 7.73 | 2.13 | NA | 11.07 | 68.73
simclrv2 | ResNet-50 | False | 11.16 | 0.64 | 7.67 | 2.85 | NA | 11.16 | 74.99
PiRL | ResNet-50 | False | 11.43 | 1.49 | 8.26 | 1.68 | NA | 11.43 | 59.11
InsDis | ResNet-50 | False | 12.02 | 1.40 | 8.52 | 2.10 | NA | 12.02 | 56.67
amdim | ResNet-50 | False | 13.62 | 0.90 | 9.72 | 3.01 | NA | 13.62 | 67.69
CMC | ResNet-50 | False | 14.73 | 2.30 | 12.30 | 0.13 | NA | 14.73 | 54.60
bigbigan | ResNet-50 | False | 29.60 | 3.13 | 25.19 | 1.27 | NA | 29.60 | 50.24

Table H.3 – Summary of training methods with their hyper-parameters for CIFAR-10

Self-supervised method | Backbone Architectures | Self-supervised Training | Evaluation | Simple Phase Optimization
AMDIM | AMDIM Encoder, ResNet-18, ResNet-50, WideResNet-50, ResNet-101 | PLB default parameters | Linear | Adam, β1 = 0.8, β2 = 0.999, constant LR = 2e-4, batch size = 500, weight decay = 1e-6
MoCoV2 | ResNet-18, ResNet-50, WideResNet-50, ResNet-101 | PLB default parameters | Linear | Adam, β1 = 0.8, β2 = 0.999, constant LR = 2e-4, batch size = 500, weight decay = 1e-6
SimCLR | ResNet-18, ResNet-50 | Batch size = 128, Epochs = 200 | Linear | SGD, momentum = 0.9, constant LR = 0.1, weight decay = 1e-6
SimCLR | ResNet-50 | Batch size = 512, Epochs = 600 | Linear | SGD, momentum = 0.9, constant LR = 0.1, weight decay = 1e-6

Table H.4 – Summary of training methods with their hyper-parameters for ImageNet

Self-supervised method | Backbone Architecture | Pre-trained Model | Evaluation | Optimization | Weight Decay | Epochs
Instance Discrimination | ResNet-50 | PyContrast | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {30} by factor 0.2 | 0 | 40
MoCo | ResNet-50 | Official | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {30} by factor 0.2 | 0 | 40
PiRL | ResNet-50 | PyContrast | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {30} by factor 0.2 | 0 | 40
CMC | ResNet-50 | PyContrast | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {30} by factor 0.2 | 0 | 40
AMDIM | AMDIM Encoder | Official | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {15, 25} by factor 0.2 | 1e-3 | 40
BigBiGAN | ResNet-50 | Official | Linear | SGD, momentum = 0.9, initial LR = 30, LR drop at {15, 25} by factor 0.2 | 1e-5 | 40
SimCLRv1 | ResNet-50 1x, ResNet-50 4x | Official | Linear | SGD, momentum = 0.9, constant LR = 0.1 | 1e-6 | 40
SimCLRv2 | ResNet-50 1x SK0, ResNet-101 2x SK0, ResNet-152 2x SK0, ResNet-152 3x SK0 | Official | Linear | SGD, momentum = 0.9, constant LR = 0.1 | 1e-6 | 40
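To complement Appendix F above, here is a minimal simulation sketch (ours, not from the paper; all parameter values are illustrative assumptions) of the setting of Lemma F.1 on synthetic data: fit least squares on one-hot labels, flip an η fraction of labels, refit, and compare how often predictions change on points with margin at least γ against the 4η/γ^2 bound.

```python
import numpy as np

def least_squares_noise_robustness(n=2000, d=50, k=10, eta=0.05, seed=1):
    """Empirical look at Lemma F.1: how many gamma-margin points change their
    argmax prediction after refitting least squares on eta-noisy labels."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    y = (X @ rng.normal(size=(d, k))).argmax(axis=1)     # linearly generated labels

    def scores(labels):
        Y = np.eye(k)[labels]                            # one-hot encoding
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)        # least-squares minimizer
        return X @ W

    p = scores(y)
    y_noisy = y.copy()
    flip = rng.random(n) < eta                           # eta-noisy labels
    y_noisy[flip] = rng.integers(0, k, flip.sum())
    p_noisy = scores(y_noisy)

    top2 = np.sort(p, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    gamma = np.quantile(margin, 0.5)                     # pick a margin level from the data
    good = margin >= gamma
    changed = p.argmax(axis=1) != p_noisy.argmax(axis=1)

    print(f"gamma = {gamma:.3f}")
    print(f"changed predictions among gamma-margin points: {changed[good].mean():.4f}")
    print(f"Lemma F.1-style bound 4*eta/gamma^2: {4 * eta / gamma**2:.3f}")

least_squares_noise_robustness()
```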
1. What is the primary contribution of the paper regarding generalization error? 2. What are the three independent components of generalization error discussed in the paper, and how do they relate to self-supervised learning? 3. How does the paper bound the memorization error of simple algorithms in terms of their information-theoretic complexity? 4. What is the significance of the experimental study conducted in the paper, and how does it relate to the theoretical results? 5. Are there any limitations or potential extensions of the paper's findings regarding different types of noise, such as attribute noise or adversarial noise?
Review
Review

This paper gives a new perspective on generalization, motivated by the success of self-supervised learning, especially on noisy data. They view the generalization error as consisting of 3 independent components: robustness, rationality and memorization. Informally, robustness measures the degradation in training accuracy due to the addition of noise, rationality measures the gap between noisy training and test accuracy, and memorization is the gap between the training error on the uncorrupted and corrupted examples. The main point of the paper is that the memorization gap is smaller for self-supervised, simple algorithms compared to fully supervised algorithms. They prove that when the noise is defined by a small fraction of labels being randomly flipped, then the memorization error of such simple algorithms can be bounded in terms of their information-theoretic complexity, independent of the representation they produce. (The proof is simple once the definitions are set up properly.) The most interesting part of the paper is an experimental study of several training procedures on benchmark data sets, measuring the three notions of error, matching what their theoretical result and overall motivation suggest. While perhaps not directly relevant to the practice of ML, this paper gives some explanation of the success of self-supervision in a toy setting, and is likely to inspire further research.

--- You study random label noise. What about attribute noise? And what if the noise is not random but adversarial? Do at least the notions still make sense?

--- The rationality gap is still confusing, and your dog/cat example does not help. More intuition/better examples would be great.
ICLR
Title For self-supervised learning, Rationality implies generalization, provably

Abstract We prove a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a representation r of the training data, and then fitting a simple (e.g., linear) classifier g to the labels. Specifically, we show that (under the assumptions described below) the generalization gap of such classifiers tends to zero if C(g) ≪ n, where C(g) is an appropriately-defined measure of the simple classifier g's complexity, and n is the number of training samples. We stress that our bound is independent of the complexity of the representation r. We do not make any structural or conditional-independence assumptions on the representation-learning task, which can use the same training dataset that is later used for classification. Rather, we assume that the training procedure satisfies certain natural noise-robustness (adding a small amount of label noise causes small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) conditions that widely hold across many standard architectures. We also conduct an extensive empirical study of the generalization gap and the quantities used in our assumptions for a variety of self-supervision based algorithms, including SimCLR, AMDIM and BigBiGAN, on the CIFAR-10 and ImageNet datasets. We show that, unlike standard supervised classifiers, these algorithms display a small generalization gap, and the bounds we prove on this gap are often non-vacuous.

1 INTRODUCTION

The current standard approach for classification is "end-to-end supervised learning" where one fits a complex (e.g., a deep neural network) classifier to the given training set (Tan & Le, 2019; He et al., 2016). However, modern classifiers are heavily over-parameterized, and as demonstrated by Zhang et al. (2017), can fit 100% of their training set even when given random labels as inputs (in which case test performance is no better than chance). Hence, the training performance of such methods is by itself no indication of their performance on new unseen test points. In this work, we study a different class of supervised learning procedures that have recently attracted significant interest. These classifiers are obtained by: (i) performing pre-training with a self-supervised task (i.e., without labels) to obtain a complex representation of the data points, and then (ii) fitting a simple (e.g., linear) classifier on the representation and the labels. Such "Self-Supervised + Simple" (SSS for short) algorithms are commonly used in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020), and have recently found uses in other domains as well (Ravanelli et al., 2020; Liu et al., 2019). Compared to standard "end-to-end supervised learning", SSS algorithms have several practical advantages. In particular, SSS algorithms can incorporate additional unlabeled data, the representation obtained can be useful for multiple downstream tasks, and they can have improved out-of-distribution performance (Hendrycks et al., 2019). Moreover, recent works show that even without additional unlabeled data, SSS algorithms can get close to state-of-the-art accuracy in several classification tasks (Chen et al., 2020b; He et al., 2020; Misra & Maaten, 2020; Tian et al., 2019).

∗Equal contribution. Email: {ybansal, galkaplun}@g.harvard.edu
†Email: b@boazbarak.org
For instance, SimCLRv2 (Chen et al., 2020b) achieves 79.8% top-1 performance on ImageNet with a variant of ResNet-152, on par with the end-to-end supervised accuracy of this architecture at 80.5%. We show that SSS algorithms have another advantage over standard supervised learning—they often have a small generalization gap between their train and test accuracy, and we prove non-vacuous bounds on this gap. We stress that SSS algorithms use over-parameterized models to extract the representation, and reuse the same training data to learn a simple classifier on this representation. Thus, the final classifier they produce has high complexity by most standard measures, and it is by no means a priori evident that their generalization gap will be small. Our bound is obtained by first noting that the generalization gap of every training algorithm is bounded by the sum of three quantities, which we name the Robustness gap, Rationality gap, and Memorization gap (we call this the RRM bound, see Fact I). We now describe these gaps at a high level, deferring the formal definitions to Section 2. All three gaps involve comparison with a setting where we inject label noise by replacing a small fraction η of the labels with random values.

The robustness gap corresponds to the amount by which training performance degrades by noise injection. That is, it equals the difference between the standard expected training accuracy (with no label noise) and the expected training accuracy in the noisy setting; in both cases, we measure accuracy with respect to the original (uncorrupted) labels. The robustness gap is nearly always small, and sometimes provably so (see Section 3).

The rationality gap corresponds to the difference between performance on the noisy training samples (on which the training algorithm gets the wrong label) and test samples (on which it doesn't get any label at all), again with respect to uncorrupted labels. An optimal Bayesian procedure would have zero rationality gap, and indeed this gap is typically zero or small in practice. Since it is a nonstandard quantity, we discuss the rationality gap in Section 3.1, and explain why assuming it is small is both well-founded and does not trivialize the question of generalization.

The memorization gap, which often accounts for the lion's share of the generalization gap, corresponds to the difference in the noisy experiment between the training accuracy on the entire train set and the training accuracy on the samples that received the wrong label (both measured with respect to uncorrupted labels). The memorization gap can be thought of as quantifying the extent to which the classifier can "memorize" noisy labels, or act differently on the noisy points compared to the overall train set. The memorization gap is large in standard "end-to-end supervised training". In contrast, our main theoretical result is that for SSS algorithms, the memorization gap is small if the simple classifier has small complexity, independently of the complexity of the representation. As long as the simple classifier is under-parameterized (i.e., its complexity is asymptotically smaller than the sample size), our bound on the memorization gap tends to zero. When combined with small rationality and robustness, we get concrete non-vacuous generalization bounds for various SSS algorithms on the CIFAR-10 and ImageNet datasets (see Figures 1 and 4).

In a nutshell, our results are the following:

Theoretical contributions.
1. Our main theoretical result (Theorem II) is that the memorization gap of an SSS algorithm is bounded by O(√(C/n)) where C is the complexity of the simple classifier produced in the "simple fit" stage. This bound is oblivious to the complexity of the representation produced in the pre-training and does not make any assumptions on the relationship between the representation learning method and the supervised learning task. One way to interpret this result is that we give a rigorous bound on the generalization gap of SSS algorithms, under the assumptions that the robustness and rationality gaps are bounded by some small constant (e.g., 5%). As mentioned below, these assumptions hold widely in practice across many different classifiers. Moreover, these assumptions are nontrivial and do not "assume away the difficulty". Indeed, there are many natural examples of training algorithms for which these assumptions hold but the generalization gap is large. Last, making some assumptions is necessary for a generalization bound to hold for SSS algorithms; see Remark 3.1 and Appendix E.

2. We also give a theoretical justification for the assumption of a small rationality gap, by proving that a positive rationality gap corresponds to "leaving performance on the table", in the sense that we can transform a learning procedure with a large rationality gap into a procedure with better test performance (Theorem 3.2).

Empirical contributions. We complement the theoretical results above with an extensive empirical study of several SSS and end-to-end algorithms on both the CIFAR-10 and ImageNet datasets.

1. We study several top-performing SSS architectures, and show that they all exhibit relatively small generalization gaps on both CIFAR-10 and ImageNet. We stress that we consider the case where the same data is used for both representation learning and classification, and hence it is by no means a priori obvious that these algorithms should have small generalization gaps. See Figures 1 and 4 for sample results and Section 4 for more details.

2. We also show that the results of Zhang et al. (2017) do not replicate for SSS algorithms, in the sense that such algorithms, despite using an over-parameterized representation, are not able to fit random label noise.

3. We undertake an empirical study of the robustness, rationality, and memorization gaps for both SSS and end-to-end supervised learning algorithms. We show that the robustness and rationality gaps are small for all these algorithms, while the memorization gap is small for SSS algorithms but can be large for end-to-end supervised learning. We show that the RRM bound is typically non-vacuous, and in fact, often close to tight, for a variety of SSS algorithms on the CIFAR-10 and ImageNet datasets, including SimCLR (which achieves test errors close to its supervised counterparts).

4. We demonstrate that replacing the memorization gap with the upper bound of Theorem II yields a non-vacuous generalization bound for a variety of SSS algorithms on CIFAR-10 and ImageNet. Moreover, this bound gets tighter with more data augmentation.

Related Work. There are many works on generalization bounds for supervised learning (e.g., Golowich et al. (2018); Neyshabur et al. (2017); Bartlett et al. (2017); Dziugaite & Roy (2017); Neyshabur et al. (2018); Cao & Gu (2019), and references therein). The related work section of Arora et al. (2019) contains an extensive discussion of such bounds, and why more often than not the assumptions used do not hold in practice.
Indeed, many such bounds give vacuous guarantees for modern architectures (such as the ones considered in this paper) that have the capacity to memorize their entire training set (Zhang et al., 2017). Some non-vacuous bounds are known; e.g., Zhou et al. (2019) gave a 96.5% bound on the error of MobileNet on ImageNet. Belkin et al. (2019); Nagarajan & Kolter (2019) showed some barriers for generalization bounds for standard end-to-end supervised learning. Similarly, standard approaches such as Rademacher complexity cannot directly bound SSS algorithms' generalization gap (see Remark 3.1). Recently, Saunshi et al. (2019) and Lee et al. (2020) gave generalization bounds for self-supervised based classifiers. The two works considered special cases of SSS algorithms, such as contrastive learning and pre-text tasks. Both works make strong statistical assumptions of (exact or approximate) conditional independence relating the pre-training and classification tasks. For example, if the pre-training task is obtained by splitting a given image x into two pieces (x1, x2) and predicting x2 from x1, then Lee et al. (2020)'s results require x1 and x2 to be approximately independent conditioned on their class y. However, in many realistic cases, the two parts of the same image will share a significant amount of information not explained by the label. Our work applies to general SSS algorithms without such statistical assumptions, at the expense of assuming bounds on the robustness and rationality gaps. There have been works providing rigorous bounds on the robustness gap or related quantities (see Section 3). However, as far as we know, the rationality gap has not been explicitly defined or studied before. We provide a brief exposition of the various types of SSS methods in Section 4, and a more detailed discussion in Appendix D.1.

Paper Organization. Section 2 contains formal definitions and statements of our results. Section 3 provides an overview of prior work and our new results on the three gaps of the RRM bound. In Section 4, we describe our experimental setup and detail our empirical results. Section 5 concludes the paper and discusses important open questions. We defer proofs and additional experimental results to the appendix. Appendix B contains the proof of Theorem II, while Appendix C contains the proof of Theorem 3.2. Appendix D fully details our experimental setup.[1]

Notation. We use capital letters (e.g., X) for random variables, lower case letters (e.g., x) for a single value, and bold font (e.g., x) for tuples (which will typically have dimension corresponding to the number of samples, denoted by n). We use x_i for the i-th element of the tuple x. We use calligraphic letters (e.g., X, D) for both sets and distributions.

2 FORMAL STATEMENT OF RESULTS

A training procedure is a (possibly randomized) algorithm T that takes as input a train set (x, y) = (x_i, y_i)_{i∈[n]} ∈ (X × Y)^n and outputs a classifier f : X → Y. For our current discussion, we make no assumptions on the type of classifier output or the way that it is computed. We denote the distribution over training sets in (X × Y)^n by D_train and the distribution over test samples in X × Y by D_test.[2] The generalization gap of a training algorithm T with respect to a distribution pair D = (D_train, D_test) is the expected difference between its train accuracy (which we denote by Train_{D,T}) and its test performance (which we denote by Test_{D,T}). We will often drop subscripts such as D, T when they can be inferred from the context.
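As a minimal sketch of the objects just defined (our illustrative code, with hypothetical stand-ins for the pre-training and fitting routines), an SSS procedure composes a frozen representation with a simple classifier, and the generalization gap is simply a difference of two accuracies:

```python
import numpy as np

def sss_train(x_train, y_train, t_pre, t_fit):
    """An SSS procedure T = (T_pre, T_fit): learn r from unlabeled data,
    fit a simple classifier g on (r(x), y), and return f = g o r."""
    r = t_pre(x_train)                      # representation learned without labels
    g = t_fit(r(x_train), y_train)          # simple classifier on the representation
    return lambda x: g(r(x))                # final classifier f(x) = g(r(x))

def accuracy(f, x, y):
    return float(np.mean(f(x) == y))

def generalization_gap(f, x_train, y_train, x_test, y_test):
    """Train - Test, the quantity decomposed by the RRM bound introduced next."""
    return accuracy(f, x_train, y_train) - accuracy(f, x_test, y_test)
```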
We will also consider the η-noisy experiment, which involves computing the classifier f̃ = T(x, ỹ) where ỹ_i = y_i with probability 1 − η and is uniform over Y otherwise. Our starting point is the following observation, which we call the RRM bound (for Robustness, Rationality, and Memorization). The quantities appearing in it are defined in Table 1 and discussed more in depth in Section 3.

Fact I (RRM bound). For every noise parameter η > 0, training procedure T and distribution D = (D_train, D_test) over training sets and test samples, the RRM bound with respect to T and D is

Train − Test ≤ [Train − Train(η)]_+ + [NTrain(η) − Test]_+ + [Train(η) − NTrain(η)]_+,

where the left-hand side is the generalization gap, the three terms on the right-hand side are the robustness gap, the rationality gap, and the memorization gap respectively, and we denote x_+ = max(x, 0).

[1] We provide our code and data in an anonymous repository at http://github.com/ICLR2021-rep-gen/.
[2] The train and test data often stem from the same distribution (i.e., D_train = D_test^n), but not always (e.g., it does not hold if we use data augmentation). D_test enters the RRM bound only via the rationality gap, so the assumption of small rationality may be affected if D_train ≠ D_test^n, but the RRM bound still holds.

The RRM bound is but an observation, as it directly follows from the fact that x_+ ≥ x for every x. However, it is a very useful one. As mentioned above, for natural algorithms, we expect both the robustness and rationality components of this gap to be small, and hence the most significant component is the memorization gap. Our main theoretical result is a bound on this gap:

Theorem II (Memorization gap bound). Let T = (T_pre, T_fit) be an SSS training procedure obtained by first training T_pre on x ∈ X^n to get a representation r : X → R and then training T_fit on (r(x), y) for y ∈ Y^n to obtain a classifier g : R → Y, with the final classifier f : X → Y defined as f(x) = g(r(x)). Then, for every noise parameter η > 0 and distribution D over X^n × Y^n:

Memorization gap(T) = Train_{T,D}(η) − NTrain_{T,D}(η) ≤ O( √(C_η(T_fit) / n) · (1/η) ),

where C_η(T_fit) is a complexity measure of the second phase training procedure, which in particular is upper bounded by the number of bits required to describe the classifier g (see Definition 2.3).

2.1 COMPLEXITY MEASURES

We now define three complexity measures, all of which can be plugged in as the measure in Theorem II. The first one, Cmdl, is the minimum description length of a classifier in bits. At a first reading, the reader can feel free to skip the description of the other two measures Cpc and Cdc. These are superficially similar to Rademacher Complexity (cf. Bartlett & Mendelson (2002)) in the sense that they capture the ability of the hypothesis to correlate with random noise but crucially depend on the algorithm used rather than the class of concepts (see Remark 3.1).

Definition 2.3 (Complexity of training procedures). Let T be a training procedure taking as input a set (r, y) = {(r_i, y_i)}_{i=1}^n ∈ (R × Y)^n and outputting a classifier g : R → Y, and let η > 0. For every training set (r, y), we define the following three complexity measures with respect to r, y, η:

• The minimum description length of T is defined as C^mdl_{r,y,η}(T) := H(g), where we consider the model g as a random variable arising in the η-noisy experiment.[3]

• The prediction complexity of T is defined as C^pc_{r,y,η}(T) := Σ_{i=1}^n I(g(r_i); ỹ_i), where the ỹ_i's are the labels obtained in the η-noisy experiment.
• The (unconditional) deviation complexity of T is defined as C^dc_{r,y,η}(T) := n · I(g(r_i) − y_i ; ỹ_i − y_i), where the random variables above are taken over i ∼ [n] and subtraction is done modulo |Y|, identifying Y with the set {0, . . . , |Y| − 1}.

[3] The name "minimum description length" is justified by the operational definition of entropy relating it to the minimum amortized length of a prefix-free encoding of a random variable.

Conditioned on y and the choice of the index i, the deviations g(r_i) − y_i and ỹ_i − y_i determine the predictions g(r_i) and noisy labels ỹ_i, and vice versa. Hence we can think of Cdc as an "averaged" variant of Cpc, where we make the choice of the index i part of the sample space for the random variables. While we expect the two measures to be approximately close, the fact that Cdc takes i into the sample space makes it easier to estimate this quantity in practice without using a large number of executions (see Figure D.2 for convergence rates). The measure Cmdl is harder to evaluate in practice, as it requires finding the optimal compression scheme for the classifier. Appendix B contains the full proof of Theorem II. It is obtained by showing that: (i) for every r, y, η, and T it holds that C^dc_{r,y,η}(T) ≤ C^pc_{r,y,η}(T) ≤ C^mdl_{r,y,η}(T), and (ii) for every SSS algorithm T = (T_pre, T_fit) and distribution D = (D_train, D_test), the memorization gap of T is at most

√( C^dc_{T_pre(x),y,η}(T_fit) ) / ( η √(2n) ).   (1)

It is the quantity (1) that we compute in our experiments.

3 THE THREE GAPS

We now briefly describe what is known and what we prove about the three components of the RRM bound. We provide some additional discussions in Appendix E, including "counter-examples" of algorithms that exhibit large values for each one of these gaps.

The robustness gap. The robustness gap measures the decrease in training accuracy from adding η noisy labels, measured with respect to the clean labels. The robustness gap and related notions such as noise stability or tolerance have been studied in various works (cf. Frénay & Verleysen (2013); Manwani & Sastry (2013)). Interpolating classifiers (with zero train error) satisfy Train(η) ≥ 1 − η and hence their robustness gap is at most η (see left panel of Figure 2). In SSS algorithms, since the representation is learned without using labels, the injection of label noise only affects the simple classifier, which is often linear. Robustness guarantees for linear classifiers have been given previously by Rudin (2005). While proving robustness bounds is not the focus of this paper, we note in the appendix some simple bounds for least-squares minimization of linear classifiers and the (potentially inefficient) Empirical Risk Minimization algorithm (see Appendices F and G). Empirically, we observe that the robustness gap of SSS algorithms is often significantly smaller than η. (See left panels of Figure 2 and Figure 3.)

The memorization gap. The memorization gap corresponds to the algorithm's ability to fit the noise (i.e., the gap increases with the number of fit noisy labels). If, for example, the classifier output is interpolating, i.e., it satisfies f(x_i) = ỹ_i for every i, then accuracy over the noisy samples will be 0 (since for them y_i ≠ ỹ_i). In contrast, the overall accuracy will be in expectation at least 1 − η, which means that the memorization gap will be ≈ 1 for small η. However, we show empirically (see right panels of Figures 2 and 3) that the memorization gap is small for many SSS algorithms and prove a bound on it in Theorem II.
When combined with small rationality and robustness, this bound results in non-vacuous generalization bounds for various real settings (e.g., 48% for ResNet101 with SimCLRv2 on ImageNet, and as low as 4% for MoCo V2 with ResNet-18 on CIFAR-10). Moreover, unlike other generalization bounds, our bound decreases with data augmentation (see Figure 5).

Remark 3.1 (Memorization vs. Rademacher complexity). The memorization gap, as well as the complexity measures defined in Section 2.1, have a superficial similarity to Rademacher complexity (Bartlett & Mendelson, 2002), in the sense that they quantify the ability of the output classifier to fit noise. One difference is that Rademacher complexity is defined with respect to 100% noise, while we consider the η-noisy experiment for small η. A more fundamental difference is that Rademacher complexity is defined via a supremum over all classifiers in some class. The final classifiers of SSS algorithms are obtained by a composition of the complex representation and simple classifier. This composed classifier will in general have high Rademacher complexity, and in particular we would not be able to prove non-vacuous bounds on it using Rademacher complexity. We cannot ignore the complexity of the representation in Rademacher-complexity-based analysis of SSS algorithms since the representation is learned using the same data that is later used for classification. In fact, there are examples of SSS algorithms with simple classifiers that have large generalization gaps (see Section 3.1). This shows that Rademacher complexity bounds for the class of simple classifiers cannot, on their own, be used to derive generalization bounds. Zhang et al. (2017) demonstrated a lower bound on the Rademacher complexity of modern deep networks, by showing that modern end-to-end supervised learning algorithms can fit 100% of their label noise. Our experiments show that this is not the case for SSS algorithms, which can only fit 15%-25% of the CIFAR-10 training set when the labels are completely random (see Table D.1 in the appendix). However, absence of evidence is not evidence of absence, and the fact that empirically SSS algorithms do not fit the noise does not imply that the Rademacher complexity of the resulting class is small, nor does it, by itself, automatically imply a small generalization gap.

3.1 THE RATIONALITY GAP

Unlike the other quantities defined above, the rationality gap is novel and less intuitive, and so we discuss it more in depth. The rationality gap, like all other quantities in the RRM bound, applies to any learning procedure and not only to SSS algorithms. Indeed, our empirical results show that rationality is typically small for both SSS and end-to-end algorithms, and so it is not this gap but rather the memorization gap that accounts for the difference in their generalization behavior. To build intuition for the rationality gap, consider an example of a training procedure T that, on input a train set S, has 70% test accuracy and a 10% rationality gap with noise parameter η = 5%. In the η-noisy experiment, the classifier f̃ output by T recovers the original uncorrupted label for 80% of the ≈ η·n datapoints for which it received the wrong labels. In contrast, a 10% rationality gap means the same classifier will only succeed in recovering the label of 70% of unseen test samples.
Intuitively, such a classifier is being "irrational" or "inconsistent" in the sense that it succeeds better on datapoints on which it was given the wrong label than on datapoints on which it was given no label at all. (In error-correcting code parlance, it handles corruption errors better than erasure errors.) We can turn this intuition into a formal argument, by giving a transformation from such a training algorithm T to an algorithm T′ that achieves roughly 80% test accuracy. On input a fresh unseen datapoint x, the algorithm T′ chooses a random label ỹ ∼ Y, runs T on the train set S ∪ {(x, ỹ)} to obtain some classifier f̃, and outputs f̃(x). Up to low-order terms, T′ will achieve test accuracy at least as good as the performance of T on noisy datapoints, which is 80%. The above reasoning leads to the proof of the following theorem (see also Appendix C):

Theorem 3.2 (Performance on the table theorem, informal). For every training procedure T and distribution D_test, with D_train = D_test^n, there exists a training procedure T′ satisfying Test_{T′} ≥ Test_T + rationality gap(T) − robustness gap(T) − o(1).

Why do natural algorithms have a small rationality gap? Empirically, the rationality gap is often small or zero for both SSS and end-to-end supervised learning algorithms, particularly for better-performing ones. (See middle panels of Figure 2 and Figure 3.) Theorem 3.2 provides an "economic explanation" for this phenomenon: a rational agent would not use a classifier with a positive rationality gap since this amounts to "leaving performance on the table". However, this transformation comes at a high computational cost; inference for the classifier produced by T′ is as expensive as retraining from scratch. Hence Theorem 3.2 does not fully explain why natural algorithms tend to have a small rationality gap. In this paper we take low rationality gap as an empirically-justifiable assumption. We believe that both proving that natural algorithms have small rationality gaps, as well as coming up with computationally efficient transformations to extract performance from rationality gaps, are important open questions.

Does assuming a small rationality gap trivialize generalization? Since the definition of the rationality gap involves the test accuracy, the reader might wonder if assuming small rationality is not tantamount to assuming a small generalization gap. However, there is nothing "irrational" about a large generalization gap, and indeed many excellent classifiers have 100% train accuracy. In contrast, it is irrational to "leave performance on the table" and use a classifier with test accuracy p when it can be transformed into one with significantly better accuracy. Concretely, our empirical studies show that the rationality gap is uniformly small, even for end-to-end classifiers that have large generalization gaps. Hence, by itself, rationality is not enough to guarantee a small generalization gap.

Is assuming a small rationality gap even needed? Since SSS algorithms use simple classifiers, the reader may wonder why we need the small-rationality-gap assumption and cannot directly prove generalization bounds using standard tools such as Rademacher complexity. The issue is that the representation used by SSS algorithms is still sufficiently over-parameterized to allow memorizing the training set. As a pedagogical example, consider a representation-learning procedure that maps a label-free training set x to a representation r : X → R under which the differently labeled x's are linearly separable.
Moreover, suppose that the representation space has dimension much smaller than n, and hence a linear classifier would have small complexity under any reasonable measure. Without access to the labels, we can transform r to a representation r′ that on input x outputs r(x) if x is in the training set, and outputs the all-zero vector (or another trivial value) otherwise. Given sufficiently many parameters, the representation r′ (or a close-enough approximation) can be implemented by a neural network. Since r and r′ are identical on the training set, a learning procedure using r′ will have the same train accuracy and (small) memorization gap. However, the generalization gap of such a procedure will be large, since it will not achieve better than trivial accuracy on unseen test examples. The issue here is not that the representation "memorizes" the train set. Representations of practical SSS algorithms are highly over-parameterized and are quite likely to memorize specific aspects of the training set. Rather, the issue is that the representation artificially behaves differently on test points in a way that decreases its performance. It is the latter property that makes the classifier "irrational", and violates the small rationality gap assumption.

4 EMPIRICAL STUDY OF THE RRM BOUND

In support of our theoretical results, we conduct an extensive empirical study of the three gaps and empirically evaluate the bound from Equation (1) for a variety of SSS algorithms for the CIFAR-10 and ImageNet datasets. We provide a summary of our setup and findings below. For a full description of the algorithms and hyperparameters, see Appendix D.

SSS Algorithms (T_pre, T_fit). We consider various self-supervised training algorithms that learn a representation without explicit training labels. In our study, we include methods based on contrastive learning such as Instance Discrimination (Wu et al., 2018), MoCoV2 (He et al., 2020), SimCLR (Chen et al., 2020a;b), AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019), InfoMin (Tian et al., 2020) as well as adversarial methods such as BigBiGAN (Donahue & Simonyan, 2019). For the second phase of training (also known as the evaluation phase (Goyal et al., 2019)), we consider simple models such as regularized linear regression, or small Multi-Layer Perceptrons (MLPs). For each evaluation method, we run two experiments: 1) the clean experiment, where we train T_fit on the data and labels (x, y); 2) the η-noisy experiment, where we train T_fit on (x, ỹ) where ỹ are the η-noised labels. Unless specified otherwise we set the noise to η = 5%.

Adding augmentations. We investigate the effect of data augmentation on the three gaps and the theoretical bound. For each training point, we sample t random augmentations (t = 10 unless stated otherwise) and add them to the train set. Note that in the noisy experiment two augmented samples of the same original point might be assigned different labels. We use the same augmentations used in the corresponding self-supervised training phase.

Results. Figures 1 and 2 provide a summary of our experimental results for CIFAR-10. The robustness and rationality gaps are close to zero for most SSS algorithms, while the memorization gap is usually the dominant term, especially so for models with larger generalization gap. Moreover, we see that Cdc often produces a reasonably tight bound for the memorization gap, leading to a generalization bound that can be as low as 5-10%.
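As a sketch of how the reported quantities fit together (our own helper functions, not the paper's code; the exact estimator used in the experiments may differ), the following computes the three gaps from the measured accuracies and estimates the Theorem II bound from per-example deviations pooled over several noisy trials:

```python
import numpy as np

def rrm_gaps(train_acc, test_acc, noisy_train_acc, noisy_subset_acc):
    """Robustness, rationality and memorization gaps from the clean and eta-noisy runs.
    noisy_train_acc  = Train(eta): accuracy on the whole train set w.r.t. clean labels.
    noisy_subset_acc = NTrain(eta): accuracy on the corrupted points w.r.t. clean labels."""
    robustness = max(train_acc - noisy_train_acc, 0.0)
    rationality = max(noisy_subset_acc - test_acc, 0.0)
    memorization = max(noisy_train_acc - noisy_subset_acc, 0.0)
    return robustness, rationality, memorization

def cdc_estimate(deltas, noises, num_classes):
    """Estimate C^dc = n * I(Delta; N) (in nats) from trials of the noisy experiment,
    where deltas[t][i] = (g(r_i) - y_i) mod k and noises[t][i] = (y~_i - y_i) mod k."""
    d, m = np.concatenate(deltas), np.concatenate(noises)
    n = len(deltas[0])
    joint = np.zeros((num_classes, num_classes))
    np.add.at(joint, (d, m), 1.0)                      # empirical joint counts of (Delta, N)
    joint /= joint.sum()
    pd_, pm = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    mi = float((joint[nz] * np.log(joint[nz] / (pd_ @ pm)[nz])).sum())
    return n * mi

def theorem_II_bound(cdc, n, eta):
    """Bound (1) on the memorization gap: sqrt(C^dc / (2n)) / eta."""
    return float(np.sqrt(cdc / (2 * n)) / eta)
```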
In Figures 3 and 4 we give a summary of our experimental results for SSS algorithms on ImageNet. Again, the rationality and robustness gaps are bounded by small constants. Notice that adding augmentations reduces memorization, but may lead to an increase in the rationality gap. This is also demonstrated in Figure 5, where we vary the number of data augmentations systematically for one SSS algorithm (AMDIM) on CIFAR-10. Since computing the Theorem II bound for ImageNet is computationally expensive (see Appendix D.5.1), we compute it only for two algorithms, which achieve a non-vacuous generalization bound of 48%.

5 CONCLUSIONS AND OPEN QUESTIONS

This work demonstrates that SSS algorithms have small generalization gaps. While our focus is on the memorization gap, our work motivates more investigation of both the robustness and rationality gaps. In particular, we are not aware of any rigorous bounds for the rationality gap of SSS algorithms, but we view our "performance on the table" theorem (Theorem 3.2) as a strong indication that it is close to zero for natural algorithms. Given our empirical studies, we believe the assumptions of small robustness and rationality conform well to practice. Our numerical bounds are still far from tight, especially for ImageNet, where evaluating the bound (more so with augmentations) is computationally expensive. Nevertheless, we find it striking that already in this initial work, we get non-vacuous (and sometimes quite good) bounds. Furthermore, the fact that the empirical RRM bound is often close to the generalization gap shows that there is significant room for improvement. Overall, this work can be viewed as additional evidence for the advantages of SSS algorithms over end-to-end supervised learning. Moreover, some (very preliminary) evidence shows that end-to-end supervised learning implicitly separates into representation learning and classification phases (Morcos et al., 2018). Understanding the extent to which supervised learning algorithms implicitly perform SSS learning is an important research direction in its own right. To the extent this holds, our work might shed light on such algorithms' generalization performance as well.

6 ACKNOWLEDGEMENTS

We thank Dimitris Kalimeris, Preetum Nakkiran, and Eran Malach for comments on early drafts of this work. This work was supported in part by NSF award CCF 1565264, IIS 1409097, DARPA grant W911NF2010021, and a Simons Investigator Fellowship. We also thank Oracle and Microsoft for grants used for computational resources. Y.B. is partially supported by the MIT-IBM Watson AI Lab. Work partially performed while G.K. was an intern at Google Research.

A MUTUAL INFORMATION FACTS

Lemma A.1. If A, B are two Bernoulli random variables with nonzero expectation then

|E[A | B = 1] − E[A]| ≤ √( I(A;B) / 2 ) / E[B].

Proof. A standard relation between mutual information and KL-divergence gives I(A;B) = D_KL(p_{A,B} ‖ p_A p_B). On the other hand, by Pinsker's inequality,

sup_{S ⊆ {0,1}×{0,1}} |p_{A,B}(S) − p_{A×B}(S)| ≤ √( D_KL(p_{A,B} ‖ p_A p_B) / 2 ) = √( I(A;B) / 2 ).

Thus (letting S = {(1,1)}), |Pr[A = 1, B = 1] − Pr[A = 1] Pr[B = 1]| ≤ √( I(A;B) / 2 ). Consequently,

|E[A | B = 1] − E[A]| ≤ √( I(A;B) / 2 ) / E[B].

Lemma A.2. For three random variables W, X, Y, s.t. X and Y are independent,

I(W; X, Y) ≥ I(W; X) + I(W; Y).

Proof. Using the chain rule for mutual information we have:

I(W; X, Y) = I(W; X) + I(W; Y | X).

Since X, Y are independent, H(Y | X) = H(Y), and since conditioning only reduces entropy, we have H(Y | W, X) ≤ H(Y | W).
Combining the two we get

I(W; Y | X) = H(Y | X) − H(Y | W, X) ≥ H(Y) − H(Y | W) = I(W; Y).

Thus we have that I(W; X, Y) ≥ I(W; X) + I(W; Y). Note that by induction we can extend this argument to show that I(W; X_1, ..., X_n) ≥ Σ_i I(W; X_i) where the X_i are mutually independent.

B SIMPLE CLASSIFIERS IMPLY SMALL MEMORIZATION GAP

In this appendix we prove our main theoretical result (Theorem B.4). We will start by giving a formal definition of SSS algorithms and restating the definition of our complexity measures.

Definition B.1 (SSS Algorithms, restated). An SSS algorithm over (X × Y)^n is a procedure T = (T_pre, T_fit) that takes as input a set (x, y) and operates as follows:

1. T_pre takes the (label-free) data points x as input and outputs a representation r : X → R for some set R;

2. On input the points {(r(x_i), y_i)}_{i=1}^n, T_fit outputs a simple classifier g : R → Y;

3. The output is a classifier f : X → Y defined as f(x) = g(r(x)) for every x ∈ X.

We now restate the definitions of our complexity measures.

Definition B.2 (Complexity of training procedures, restated). Let T be a training procedure taking as input (r, y) = {(r_i, y_i)}_{i=1}^n ∈ (R × Y)^n and outputting a classifier g : R → Y, and let η > 0. For every training set (r, y):

• The minimum description length of T with respect to r, y, η is defined as C^mdl_{r,y,η}(T) = H(g), where g is the random variable T(r, ỹ) in the η-noisy experiment.

• The prediction complexity of T with respect to r, y, η is defined as C^pc_{r,y,η}(T) := Σ_{i=1}^n I(g(r_i); ỹ_i), where g(r_i) and ỹ_i are viewed as random variables over the sample space induced by choosing ỹ according to the η-noisy experiment w.r.t. y and letting g = T(r, ỹ).

• The deviation complexity of T with respect to r, y, η is defined as C^dc_{r,y,η}(T) := n · I(Δ; N), where Δ = g(r_i) − y_i (mod |Y|) and N = ỹ_i − y_i (mod |Y|) are random variables taken over both the above sample space and the choice of i ∼ [n], identifying Y with {0, . . . , |Y| − 1}.

The following theorem shows that Cdc is upper bounded by Cpc, which in turn is bounded by the operational entropy of g.

Theorem B.3 (Relation of complexity measures). For every r, y, η > 0, and T,

C^dc_{r,y,η}(T) ≤ C^pc_{r,y,η}(T) ≤ C^mdl_{r,y,η}(T),

where g is the classifier output by T (considered as a random variable).

Proof. Fix T, r, y, η. We get ỹ by choosing i.i.d. random variables N_1, . . . , N_n, each equalling 0 with probability 1 − η and uniform otherwise, and letting ỹ_i = y_i + N_i (mod |Y|). We start by proving the second inequality, C^pc_{r,y,η}(T) ≤ H(g). Let g = T(r, ỹ) and define p = (g(r_1), . . . , g(r_n)) to be the vector of predictions. Then,

C^pc_{r,y,η}(T) = Σ_i I(p_i; ỹ_i) = Σ_i I(p_i; N_i),   (2)

with the last equality holding since for fixed y_i, N_i determines ỹ_i and vice versa. However, since the full vector p contains only more information than p_i, the right-hand side of (2) is at most Σ_{i=1}^n I(p; N_i) ≤ I(p; N_1, . . . , N_n), using the fact that the N_i random variables are independent (see Lemma A.2). For a fixed r, the value of p is completely determined by g and hence the entropy of p is at most H(g), establishing the second inequality of the theorem.

We now turn to the first inequality, C^dc_{r,y,η}(T) ≤ C^pc_{r,y,η}(T). Let Δ_i = p_i − y_i (mod |Y|). Then,

(1/n) C^pc_{r,y,η}(T) = E_{j∼[n]} I(p_j; N_j) = E_{j∼[n]} I(Δ_j; N_j),   (3)

since p_i determines Δ_i and vice versa. But, since N_j = N | i = j and Δ_j = Δ | i = j (where N, Δ are the random variables defined in Definition B.2), the right-hand side of (3) equals

E_{j∼[n]} I(Δ; N | i = j) = E_{j∼[n]} [ H(N | i = j) − H(N | Δ, i = j) ].   (4)
Since N_1, . . . , N_n are identically distributed, H(N | i = j) = H(N), which means that the right-hand side of (4) equals

H(N) − E_{j∼[n]} H(N | Δ, i = j) ≥ H(N) − H(N | Δ) = I(Δ; N),

with the inequality holding since on average conditioning reduces entropy. By definition I(Δ; N) = (1/n) C^dc_{r,y,η}(T), establishing what we wanted to prove.

The complexity measures Cpc and Cdc are defined with respect to a fixed train set (r, y), rendering them applicable for single training sets such as CIFAR-10 and ImageNet that arise in practice. If D is a distribution over (r, y), then we define the complexity measures Cpc and Cdc with respect to D as the average of the corresponding measure with respect to (r, y) ∼ D. We now restate Theorem II:

Theorem B.4 (Theorem II, restated). Let T = (T_pre, T_fit) be a training procedure obtained by first training T_pre on x ∈ X^n to obtain a representation r : X → R and then training T_fit on (r(x), y) where y ∈ Y^n to obtain a classifier g : R → Y. Then, for every noise parameter η > 0 and distribution D_train over (X, Y)^n,

Memorization gap(T) = Train_{D_train,T}(η) − NTrain_{D_train,T}(η) ≤ √( C^dc_{D_r,η}(T_fit) / (2n) ) · (1/η),

where D_r is the distribution over (R × Y)^n induced by T_pre on D_train.

Note that the bound on the right-hand side is expressed only in terms of the complexity of the second stage T_fit and is independent of the complexity of T_pre. The crux of the proof is showing (close to) independence between the corrupted indices and the prediction deviation of g resulting from the noise.

Proof. Let (r, y) be sampled by first drawing (x, y) ∼ D_train over (X × Y)^n and then setting r = r(x) where r = T_pre(x). Consider the sample space of sampling ỹ according to the η-noisy distribution with respect to y, computing g = T_fit(r, ỹ), and sampling i ∼ [n]. We define the following two Bernoulli random variables over this sample space:

Z = 1_{Δ=0} = 1 if g(r_i) = y_i and 0 otherwise;   B = 1_{N≠0} = 1 if ỹ_i ≠ y_i and 0 otherwise.

For a given r, y, since Z is determined by Δ and B is determined by N, I(Z; B) ≤ I(Δ; N) = C^dc_{r,y,η}(T_fit)/n. By Lemma A.1, for every Bernoulli random variables B, Z,

|E[Z] − E[Z | B = 1]| ≤ √( I(Z;B) / 2 ) / E[B],

and hence in our case (since E[B] = η),

E[Z] − E[Z | B = 1] ≤ √( C^dc_{r,y,η}(T_fit) / (2n) ) · (1/η).

But E[Z] corresponds to the probability that g(r) = y for (r, y) in the train set, while E[Z | B = 1] corresponds to this probability over the noisy samples. Hence the memorization gap is bounded by

E_{(r,y)∼D_r} [ √( C^dc_{r,y,η}(T_fit) / (2n) ) · (1/η) ] ≤ (1/η) √( E_{(r,y)∼D_r} [ C^dc_{r,y,η}(T_fit) / (2n) ] ) = √( C^dc_{D_r,η}(T_fit) / (2n) ) · (1/η),

using Jensen's inequality and the concavity of the square root for the first inequality.

C POSITIVE RATIONALITY GAP LEAVES ROOM FOR IMPROVEMENT

In this appendix, we prove the "performance on the table" theorem that states that we can always transform a robust training procedure with a positive rationality gap into a training procedure with better performance:

Theorem C.1 (Performance on the table theorem, restated). For every training procedure T and D_test, n, η, if D_train = D_test^n there exists a training procedure S such that

Test_{S,D,n} ≥ NTrain_{T,D,n}(η) − o(1),   (5)

where o(1) is a term that vanishes with n, and under the assumption that Train_{T,D,n}(η) ≥ NTrain_{T,D,n}(η). For any reasonable training procedure T, performance on noisy train samples will not be better than the overall train accuracy, and hence the assumption will be satisfied.
In particular (since we can always add noise to our data), the above means that we can obtain a procedure S′ whose clean test performance is at least Test_T + Δ where Δ = NTrain_T(η) − Test_T is the rationality gap of T. Hence if the rationality gap is larger than the robustness gap, we can use the above to improve the test performance of "irrational" networks. (Note that the robustness gap of almost all standard training procedures is at most η and in fact often much smaller.) We stress that the procedure of Theorem 3.2, while running in "polynomial time", is not particularly practical, since it makes inference as computationally expensive as training. However, it is a proof of concept that irrational networks are, to some extent, "leaving performance on the table".

Proof. Let T be a procedure as above. Our algorithm S would be the following:

• Training: The algorithm does not do any training but on input labels D = {(x_i, ỹ_i)} it simply stores these labels.

• Inference: On input a data point x, Algorithm S will choose i ∈ [n] at random, and run T on the data D, replacing the i-th sample with (x, ỹ) where ỹ is chosen uniformly at random. The output is f(x) where f is the classifier output by T.

First note that while the number of noisy samples could change by one by replacing (x_i, y_i) with (x, ỹ), since this number is distributed according to the Binomial distribution with mean ηn and standard deviation √((1−η)ηn) ≫ 1, this change can affect probabilities by at most an o(1) additive factor. If Y has k classes, then with probability 1 − 1/k we will make (x, ỹ) noisy (ỹ ≠ y), in which case the expected performance on it will be NTrain_T(η). With probability 1/k, we choose the correct label y, in which case performance on this sample will be equal to the expected performance on clean samples, which by our assumptions is at least NTrain_T(η) as well.

D EXPERIMENTAL DETAILS

We perform an empirical study of the RRM bound for a wide variety of self-supervised training methods on the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) training datasets. We provide a brief description of all the self-supervised training methods that appear in our results below. For each method, we use the official pre-trained models on ImageNet wherever available. Since very few methods provide pre-trained models for CIFAR-10, we train models from scratch. The architectures and other training hyper-parameters are summarized in Table H.4 and Table H.3. Since our primary aim is to study the RRM bound, we do not optimize for reaching the state-of-the-art performance in our re-implementations. For the second phase of training, we use L2-regularized linear regression, or small non-interpolating multi-layer perceptrons (MLPs).

D.1 SELF-SUPERVISED TRAINING METHODS (T_pre)

There are a variety of self-supervised training methods for learning representations without explicit labels. The two chief classes of self-supervised learning methods are:

1. Contrastive learning: These methods seek to find an embedding of the dataset that pushes a positive pair of images close together and a pair of negative images far from each other. For example, two different augmented versions of the same image may be considered a positive pair, while two different images may be considered a negative pair. Different methods such as Instance Discrimination, MoCo, SimCLR, and AMDIM differ in the way they select the positive/negative pairs, as well as other details like the use of a memory bank or the encoder architecture.
(See Falcon & Cho (2020) for a detailed comparison of these methods.)

2. Handcrafted pretext tasks: These methods learn a representation by designing a fairly general supervised task, and utilizing the penultimate or other intermediate layers of this network as the representation. Pretext tasks include a variety of methods such as predicting the rotation angle of an input image (Gidaris et al., 2018), solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), denoising images (Vincent et al., 2008) or image inpainting (Pathak et al., 2016). Additionally, adversarial image generation can be used by augmenting the image generator with an encoder (Donahue & Simonyan, 2019).

We focus primarily on contrastive learning methods since they achieve state-of-the-art performance. We now describe these methods briefly.

Instance Discrimination: (Wu et al., 2018) In essence, Instance Discrimination performs supervised learning with each training sample as a separate class. They minimize the non-parametric softmax loss given below for the training dataset:

J(θ) = − Σ_{i=1}^n log( exp(v_i^T v / τ) / Σ_{j=1}^n exp(v_j^T v / τ) ),   (6)

where v_i = f_θ(x_i) is the feature vector for the i-th example. They use memory banks and a contrastive loss (also known as Noise Contrastive Estimation or NCE (Gutmann & Hyvärinen, 2010)) for computing this loss efficiently for large datasets. So in this case, a positive pair is an image and itself, while a negative pair is two different training images.

Momentum Contrastive (MoCo): (He et al., 2020) MoCo replaces the memory bank in Instance Discrimination with a momentum-based query encoder. MoCoV2 (Chen et al., 2020c) uses various modifications from SimCLR, like a projection head, and combines them with the MoCo framework for improved performance.

AMDIM: (Bachman et al., 2019) AMDIM uses two augmented versions of the same image. For these augmentations, they use random resized crops, random jitters in color space, random horizontal flips and random conversion to grayscale. They apply the NCE loss across multiple scales, by using features from multiple layers. They use a modified ResNet by changing the receptive fields to decrease overlap between positive pairs.

CMC: (Tian et al., 2019) CMC creates two views for contrastive learning by converting each image into the Lab color space. L and ab channels from the same image are considered to be a positive pair, while those from two different images are considered to be a negative pair.

PiRL: (Misra & Maaten, 2020) PiRL first creates a jigsaw transformation of an image (it divides an image into 9 patches and shuffles these patches). It treats an image and its jigsaw as a positive pair, and that of a different image as a negative pair. They additionally modify the encoder on the jigsaw branch.

SimCLRv1 and SimCLRv2: (Chen et al., 2020a;b) SimCLR also uses strong augmentations to create positive and negative pairs. They use random resized crops, random Gaussian blur and random jitters in color space. Crucially, they use a projection head that maps the representations to a 128-dimensional space where they apply the contrastive loss. They do not use a memory bank, but use a large batch size.

InfoMin: InfoMin uses random resized crop, color jitter and Gaussian blur, as well as jigsaw shuffling from PiRL.

D.2 SIMPLE CLASSIFIER (T_fit)

After training the representation learning method, we extract representations r for the training and test images. We do not add random augmentations to the training images (unless stated otherwise).
Then, we train a simple classifier on the dataset {r(xi), yi}ni=1. We use a linear classifier in most cases, but we also try a small multi-layer perceptron (as long as it has few parameters and does not interpolate the training data). We add weight decay in some methods to achieve good test accuracy (See Table H.4 for values for each method.) For the noisy experiment, we set the noise level to η = 5%. To compute the complexity bound Cdc we run 20 trials of the noisy experiment for CIFAR10 and 50 trials for ImageNet. D.3 EXPERIMENTAL DETAILS FOR EACH PLOT Figure 1. This figure shows the robustness, rationality and memorization gap for various SSS algorithms trained on CIFAR-10. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.3. For the second phase Tfit, we use L2regularized linear regression for all the methods. For each algorithm listed in Table H.3, the figure contains 2 points, one without augmentations, and one with augmentations. Further, we compute the complexity measure Cdc for all the methods. All the values (along with the test accuracy) are listed in Table H.1. Figure 2. This figure shows the robustness, rationality and memorization for CIFAR-10 for all the same methods as in Figure 1. We only include the points without augmentation to show how rationality behaves when (Dtrain,Dtest) are identical. All the values (along with the test accuracy) are listed in Table H.1. For the supervised architectures, we train a Myrtle-5 (Page, 2018) convolutional network, a ResNet-18 (He et al., 2016) and a WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with standard training hyperparameters. Figure 3 and Figure 4. These figures show the robustness, rationality and memorization for the ImageNet dataset. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.4. For the second phase Tfit, we use L2-regularized linear regression for all the methods. The figures also contain some points with 10 augmentations per training image. Further, we compute the complexity measure Cdc for all three methods - SimCLRv2 with architectures ResNet-50-1x and ResNet-101-2x. All the values (along with the test accuracy) are listed in Table H.2. Figure 5 This figure shows the effect of increasing augmentations. We add t = {2, ..., 10} augmentations and re-train the simple classifier. We do this for the CIFAR-10 dataset, AMDIM selfsupervised training with the AMDIM encoder and linear regression (See Table H.3 for the hyperparameters). D.4 ADDITIONAL RESULTS D.4.1 GENERALIZATION ERROR OF SSS ALGORITHMS To show that SSS algorithms have qualitatively different generalization behavior compared to standard end-to-end supervised methods, we repeat the experiment from Zhang et al. (2017). We randomize all the training labels in the CIFAR-10 dataset and train 3 high-performing SSS methods on these noisy labels. For results see Table D.1. Unlike fully supervised methods, SSS algorithms do not achieve 100% training accuracy on the dataset with noisy labels. In fact, their training accuracies are fairly low (≈ 15-25%). This suggests that the empirical Rademacher complexity is bounded. The algorithms were trained without any augmentations during the simple fitting phase for both SSS and supervised algorithms. The SSS methods were trained using parameters described in Table H.3. 
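For reference, a single trial of the η-noisy experiment on frozen representations can be sketched as follows. This is a minimal sketch: scikit-learn's LogisticRegression stands in for the L2-regularized linear classifier used above, and the array names are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

def noisy_trial(r_train, y_train, eta=0.05, num_classes=10, seed=0):
    # Replace each label, independently with probability eta, by a class
    # drawn uniformly at random (the eta-noisy experiment).
    rng = np.random.default_rng(seed)
    noisy = rng.random(len(y_train)) < eta
    y_tilde = y_train.copy()
    y_tilde[noisy] = rng.integers(0, num_classes, noisy.sum())

    # Second phase T_fit: a simple linear classifier trained on (r, y_tilde).
    g = LogisticRegression(max_iter=1000).fit(r_train, y_tilde)
    preds = g.predict(r_train)

    # All accuracies are measured against the original (clean) labels.
    train_eta = (preds == y_train).mean()                 # Train(eta)
    ntrain_eta = (preds[noisy] == y_train[noisy]).mean()  # NTrain(eta)
    return train_eta, ntrain_eta, train_eta - ntrain_eta  # last term: memorization gap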
D.5 RRM BOUND WITH VARYING NOISE PARAMETER We now investigate the effect of varying noise levels on the three gaps as well as on the complexity. We see that the robustness gap increases as we add more noise—this is expected as noise should affect the clean training accuracy. We also observe that the memorization gap decreases, suggesting that Cdcη as a function of η goes down faster than η 2 (see Appendix B). The Theorem II bound on memorization gap also decays strongly with the η, becoming more tight as the noise increases. D.5.1 CONVERGENCE OF COMPLEXITY MEASURES We now plot the complexity measures Cdc and Cpc with increasing number of trials for one of the SSS algorithms. As expected, Cdc < Cpc and Cdc converges in about 20 trials for CIFAR-10. On the other hand, the complexity computations for ImageNet need many more trials for convergence, since it contains about 10 augmentations×1.2 million training samples making it cost prohibitive to compute for all the methods. For the CIFAR-10, we use AMDIM with the AMDIM encoder architecture without augmentations. For ImageNet, we use SimCLRv2 with the ResNet-101 architecture with 10 augmentations per training sample. E EXAMPLES OF ALGORITHMS WITH LARGE GAPS While we argued that SSS algorithms will tend to have small robustness, rationality, and memorization gaps, this does not hold in the worst case and there are examples of such algorithms that exhibit large gaps in each of those cases. E.1 LARGE ROBUSTNESS GAP Large robustness gap can only arise via computational (as opposed to statistical) considerations. That is, if a training procedure outputs a classifier f ∈ F that achieves on average accuracy α on a clean train set (X,Y ), then with high probability, if (X, Ỹ ) is an η-noisy train set then there exists f ∈ F that achieves α(1− η) accuracy on this train set (by fitting only the “clean” points). However, the training algorithm might not always be able to find such a classifier. For example, if the distribution has the form (x, y) = (x, ∑ ajxj mod 2) where x ∼ GF (2)` = Z`2 and a ∈ GF (2)` is some hidden vector, then there is an efficient algorithm (namely Gaussian elimination) to find a given the samples (x, y) and hence get accuracy 1. However, for every ε > 0 and η > 0, there is no known efficient algorithm that, given a 1− η perturbed equations of the form {〈a, xi〉 = ỹi}i∈[n] finds a′ ∈ GF (2)` such that ∑ a′jxj = ∑ ajxj mod 2 on a 1/2 + ε fraction of the x’s. This is known as the learning parity with noise (LPN) problem (Blum et al., 1993). The assumption of robustness is necessary for a small generalization gap, in the sense that we can come up with (contrived) examples of algorithms that have small rationality and memorization gaps while still having large generalization gap. For example, consider an algorithm T that has large generalization gap (high train accuracy and small test accuracy) , and suppose we augment to the following algorithm T ′(x,y) = { T (x,y) if y is “clean” 0 if y is “noisy” where 0 denotes the constant zero function (e.g., some trivial classifier) and we use some algorithm to estimate whether or not the labels are noisy. (Such estimates can often be achieved in many natural cases.) The algorithm T ′ will inherit the generalization gap of T , since that depends only on the experiment without noise. Since performance on noisy and clean training samples will be the same (close to random), will have zero memorization gap. Since we have assumed small test accuracy, it will have zero rationality gap also. 
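To make the first example above concrete: without label noise, the hidden parity vector can be recovered exactly by Gaussian elimination over GF(2), as in the self-contained sketch below; with η-noisy labels no efficient analogue is known (the LPN problem). The data here is synthetic and serves only to illustrate the construction.

import numpy as np

def solve_parity_gf2(X, y):
    # Solve <a, x_i> = y_i (mod 2) by Gaussian elimination over GF(2);
    # assumes the (noiseless) system is consistent.
    A = np.concatenate([X, y[:, None]], axis=1) % 2
    n, d = X.shape
    row = 0
    for col in range(d):
        pivots = np.nonzero(A[row:, col])[0]
        if len(pivots) == 0:
            continue
        piv = row + pivots[0]
        A[[row, piv]] = A[[piv, row]]        # move a pivot row into place
        mask = (A[:, col] == 1)
        mask[row] = False
        A[mask] ^= A[row]                    # clear this column in all other rows
        row += 1
        if row == n:
            break
    a = np.zeros(d, dtype=int)               # free variables set to 0
    for r in range(row - 1, -1, -1):
        cols = np.nonzero(A[r, :d])[0]
        if len(cols) == 0:
            continue
        a[cols[0]] = (A[r, d] - A[r, cols[1:]] @ a[cols[1:]]) % 2
    return a

rng = np.random.default_rng(0)
d, n = 30, 200
a_true = rng.integers(0, 2, d)
X = rng.integers(0, 2, (n, d))
y = X @ a_true % 2
a_hat = solve_parity_gf2(X, y)
print("clean train accuracy:", np.mean(X @ a_hat % 2 == y))  # 1.0 without noise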
E.2 LARGE RATIONALITY GAP As discussed in Section C, in the case that Dtrain = Dntest, a robust algorithm with large rationality gap leaves “performance on the table”. We can obtain such algorithms by artificially dropping performance on the test data. For example, in the SSS framework, since the representation r is over-parameterized and can memorize the entire train set, we can consider the trivial representation r(x) = { x x in train set 0 otherwise If we now train some simple classifier on r(x) then it can have non-trivial performance on the noisy train samples, while getting trivial accuracy on all samples outside the train set. In cases where Dtrain and Dtest are different (for example when Dtrain is an augmented version of Dtest) then we can no longer claim that a large rationality gap corresponds to “leaving performance on the table”. For example, we do observe (mild) growth in the rationality gap as we add more augmented points to the training set. E.3 LARGE MEMORIZATION GAP It is not hard to find examples of networks with large memorization gap. Indeed, as mentioned before, any standard interpolating supervised learning algorithm will get a memorization gap close to 1. F ROBUSTNESS OF LEAST SQUARES CLASSIFIERS One can prove robustness for classes of algorithms under varying assumptions. As a simple example, we record here a self-contained observation of how margin leads to robustness in least squares minimization. (We believe that this bound is folklore, but weren’t able to find the right reference.) This is a very simple but also pessimistic bound, and much better ones often hold. Lemma F.1 . Let x1, . . . , xn ∈ Rd and y1, . . . , yn ∈ [k], and consider a linear function f : Rd → Rk that minimizes the quantity ∑ i∈[n],j∈[k] |f(xi)j−1yi=j |2, and suppose that for p fraction of the i’s, the maximum over j ∈ [k] of f(xi) is γ larger than the second-largest value. Then in expectation, if we let ỹ be the η-noisy version of y and f̃ minimizes ∑ i∈[n],j∈[k] |f̃(xi)j − 1ỹi=j |2, we get that arg maxj f̃(xi) = yi for at least p− 4η/γ2 fraction of the i’s. Proof. We identify y with its “one hot” encoding as a vector in Rnk. Let V ⊆ Rnk be the subspace of all vectors of the form (g(x1), . . . , g(xn)) for linear g : Rd → Rk. If f is the minimizer in the theorem statement, and p = (f(x1), . . . , f(xn)) then p = ΠV y where ΠV is the orthogonal projection to the subspace v. If f̃ is the minimizer for the noisy labels and p̃ = (f̃(x1), . . . , f̃(xn)), then p̃ = ΠV ỹ = ΠV (y + e) where e is the noise vector ỹ − y. Hence ‖p − p̃‖ = ‖ΠV e‖ ≤ ‖e‖. But in expectation ‖e‖2 ≤ 2ηn (since we flip a label with probability ≤ η). For every point i for which the margin was at least γ in p, if p̃’s prediction is different in i, then the contribution of the i-th block to their square norm difference is at least γ2/2 (by shifting the maximum coordinate by −γ/2 and the second largest one by γ/2). Hence at most 4ηn/γ2 of these points could have different predictions in p and p̃ G ROBUSTNESS OF EMPIRICAL RISK MINIMIZER The (potentially inefficient) algorithm that minimizes the classification errors is always robust. Lemma G.1 . Let T (x,y) = arg minf∈F ∑n i=1 1f(xi)6=yi . Then for every η > 0, Robustness gap(T ) ≤ 2η . Proof. Let x,y be any train set, and let α = ming∈F ∑n i=1 1g(xi)6=yi and f be the minimizer of this quantity. Let ỹ be the η-noisy version of y and let η̃ be the fraction of i on which yi 6= ỹi. Then, n∑ i=1 1f(xi)6=yi ≤ α+ η̃ . 
(7) Hence if f̃ is the minimizer of (7) then we know that f̃(xi) 6= ỹi for at most α + η̃ fraction of the i’s, and so f̃(xi) 6= yi for at most α + 2η̃ fraction of the i’s. Since the train accuracy of T is 1− α and in expectation of η̃ is η, we get that in expectation TrainT (η) ≥ TrainT − 2η H LARGE TABLES Table H.1 – Summary of all the methods, architectures and the corresponding results (gaps and accuracies) on CIFAR-10, sorted by generalization gap. While Figure 1 already plots this data, here we also provide the test performance of the corresponding models. Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc mocov2 resnet18 True -7.35 0.07 0.21 0.00 3.47 0.28 67.19 mocov2 wide resnet50 2 True -6.37 0.18 1.03 0.00 7.63 1.21 70.99 mocov2 resnet101 True -6.01 0.15 0.71 0.00 6.38 0.86 68.58 mocov2 resnet50 True -5.38 0.19 0.84 0.00 6.99 1.03 69.68 simclr resnet50 True -2.89 0.30 0.55 0.00 6.63 0.85 91.96 amdim resnet101 True -0.91 0.64 3.70 0.00 25.99 4.34 63.56 amdim resnet18 True 0.33 0.23 1.15 0.00 8.66 1.38 62.84 mocov2 resnet18 False 1.43 0.15 1.24 0.03 14.14 1.43 67.60 simclr resnet18 False 1.43 0.28 0.79 0.36 13.35 1.43 82.50 amdim wide resnet50 2 True 1.60 0.69 2.46 0.00 19.20 3.15 64.38 simclr resnet50 False 1.97 0.22 0.78 0.97 15.75 1.97 92.00 simclr resnet50 False 2.24 0.52 1.71 0.01 19.53 2.24 84.94 mocov2 resnet50 False 2.72 0.30 2.96 0.00 24.18 3.26 70.09 mocov2 resnet101 False 2.82 0.33 3.03 0.00 22.78 3.36 69.08 mocov2 wide resnet50 2 False 3.11 0.38 2.79 0.00 22.39 3.18 70.84 amdim resnet50 bn True 3.69 0.84 4.22 0.00 31.12 5.06 66.44 amdim resnet18 False 4.34 0.42 4.58 0.00 33.47 5.00 62.28 amdim amdim encoder True 4.43 0.68 0.36 3.39 10.32 4.43 87.33 amdim amdim encoder False 6.68 2.08 5.69 0.00 70.52 7.77 87.38 amdim resnet101 False 12.46 1.22 14.26 0.00 100.00 15.49 62.43 amdim wide resnet50 2 False 13.07 1.70 15.33 0.00 100.00 17.03 63.80 amdim resnet50 bn False 14.73 1.81 16.63 0.00 100.00 18.43 66.28 Table H.2 – Summary of all the methods, architectures their corresponding results (gaps and accuracies) on ImageNet, sorted by generalization gap. While Figure 4 already plots this data, here we also provide the test performance of the corresponding models. 
Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc simclrv2 r50 1x sk0 True -2.34 0.26 0.68 0.00 46.93 0.94 70.96 simclrv2 r101 2x sk0 True 0.63 0.10 0.80 0.00 47.90 0.91 77.24 simclrv2 r152 2x sk0 True 1.00 0.13 0.77 0.10 NA 1.00 77.65 moco ResNet-50 True 1.32 0.57 0.93 0.00 NA 1.49 70.15 InfoMin ResNet-50 True 4.88 0.81 1.01 3.06 NA 4.88 72.29 PiRL ResNet-50 True 6.23 0.29 0.99 4.95 NA 6.23 60.56 InsDis ResNet-50 True 6.85 0.25 1.13 5.46 NA 6.85 58.30 simclrv2 r101 1x sk1 False 8.23 0.71 4.66 2.86 NA 8.23 76.07 InfoMin ResNet-50 False 10.21 2.34 8.96 0.00 NA 11.31 70.31 simclrv2 r152 1x sk0 False 10.32 1.12 6.93 2.26 NA 10.32 74.17 simclrv2 r101 1x sk0 False 10.53 1.11 6.99 2.42 NA 10.53 73.04 simclrv2 r50 1x sk0 False 10.62 0.99 7.31 2.31 NA 10.62 70.69 moco ResNet-50 False 10.72 1.82 7.86 1.04 NA 10.72 68.39 simclrv2 r152 2x sk0 False 10.92 0.75 7.45 2.72 NA 10.92 77.25 simclrv2 r101 2x sk0 False 11.02 0.74 7.51 2.78 NA 11.02 76.72 simclr ResNet50 1x False 11.07 1.22 7.73 2.13 NA 11.07 68.73 simclrv2 ResNet-50 False 11.16 0.64 7.67 2.85 NA 11.16 74.99 PiRL ResNet-50 False 11.43 1.49 8.26 1.68 NA 11.43 59.11 InsDis ResNet-50 False 12.02 1.40 8.52 2.10 NA 12.02 56.67 amdim ResNet-50 False 13.62 0.90 9.72 3.01 NA 13.62 67.69 CMC ResNet-50 False 14.73 2.30 12.30 0.13 NA 14.73 54.60 bigbigan ResNet-50 False 29.60 3.13 25.19 1.27 NA 29.60 50.24 Table H.3 – Summary of training methods with their hyper-parameters for CIFAR-10 Selfsupervised method Backbone Architectures Self-supervised Training Evaluation Simple Phase Optimization AMDIM AMDIM Encoder PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-18 ResNet-50 WideResNet-50 ResNet 101 MoCoV2 ResNet-18 PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-50 WideResNet-50 ResNet 101 SimCLR ResNet-18 Batchsize = 128 Epochs 200 Linear SGD Momentum = 0.9 Constant LR = 0.1 Weight decay 1e-6 ResNet-50 ResNet-50 Batchsize = 512Epochs 600 Table H.4 – Summary of training methods with their hyper-parameters for ImageNet Self-supervised method Backbone Architecture Pre-trained Model Evaluation Optimization Weight Decay Epochs Instance Discrimination ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 MoCo ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 PiRL ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 CMC ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 AMDIM AMDIM Encoder Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-3 40 BigBiGAN ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-5 40 SimCLRv1 ResNet-50 1x Official Linear SGDMomentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-50 4x SimCLRv2 ResNet-50 1x SK0 Official Linear SGD Momentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-101 2x SK0ResNet-152 2x SK0 ResNet-152 3x SK0
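To make the evaluation-phase settings concrete, the sketch below mirrors one row of Table H.4 (linear evaluation with SGD, momentum 0.9, initial LR 30 dropped by a factor of 0.2 at epoch 30, no weight decay, 40 epochs). The synthetic features are placeholders standing in for frozen representations; this illustrates the optimizer schedule, not the exact training script.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

feature_dim, num_classes, n = 2048, 1000, 4096
feats = torch.randn(n, feature_dim)               # stand-in for frozen representations
labels = torch.randint(num_classes, (n,))
loader = DataLoader(TensorDataset(feats, labels), batch_size=256, shuffle=True)

linear = nn.Linear(feature_dim, num_classes)
opt = optim.SGD(linear.parameters(), lr=30, momentum=0.9, weight_decay=0)
sched = optim.lr_scheduler.MultiStepLR(opt, milestones=[30], gamma=0.2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(40):
    for r, y in loader:
        opt.zero_grad()
        loss_fn(linear(r), y).backward()
        opt.step()
    sched.step()                                   # LR drop at the scheduled epoch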
1. What is the focus of the paper regarding generalization capability in self-supervised learning? 2. What are the strengths of the proposed approach in understanding generalization error? 3. How does the paper address the data re-use problem in analyzing generalization? 4. Can the authors provide more explanations regarding their choice of conditioning on x and measuring label noise? 5. What are the differences between empirical Rademacher complexity bounds and the approach used in the paper? 6. Are there any limitations to using the proposed method in other settings beyond self-supervised learning? 7. How does the paper's analysis relate to standard supervised learning? 8. What are some presentation comments and suggestions for improving the clarity of the paper?
Review
Review
The present paper aims to understand the generalization capability of self-supervised learning algorithms that fine-tune a simple linear classifier to the labels. Analyzing generalization in this case is challenging due to a data re-use problem: the same training data that is used for self-supervised learning is also used to fit the labels. The paper addresses this issue by implicitly conditioning on the training covariates x and then deriving generalization bounds that depend only on (hypothetical) noise to the labels y. The paper shows that, empirically, the dominant factor in generalization error is a certain quantity called the "memorization gap", which can also be upper-bounded via theoretical analysis (the theoretical bound seems to be loose by about a factor of 4 compared to the empirical measurement, but is still non-vacuous in many cases). Interestingly, this is not the case for standard supervised learning, likely due to the higher-complexity models used to fit the labels; in that case the memorization gap is high, but a different gap (called the "rationality gap") is large in magnitude but negative.
Overall, the paper is clearly presented, innovative, and has interesting empirical and theoretical results. It seems like a clear accept to me, with my only uncertainty being that I am not completely familiar with the related literature. I am also not sure why the authors could not use the Rademacher complexity---are there theoretical obstacles to using it to upper-bound generalization error in this setting, or is the problem that it is too large? If the latter, then have you considered using your approach in settings other than just the self-supervised setting in order to improve on Rademacher complexity bounds?
====
Framing comments / questions:
-I don't like the word rationality, since it has a technical meaning in Bayesian statistics that is not the same as the usage here (I agree they are somewhat similar in flavor, but I think it's confusing to conflate them).
-I'm not sure it's correct to say that SS+S is a dominant methodology. In practice we would almost always do full fine-tuning on the self-supervised representation, rather than just the final layer. Still, starting with final layer fine-tuning is a reasonable start for analysis.
-It seems an important point of your analysis is that we can condition on x and then just look at label noise for measuring generalization. It seems like empirical Rademacher complexity bounds also condition on x, so is there a fundamental difference here? (I think you try to address this in Remark 3.3 but I didn't understand your point there.)
======
A few presentation comments:
-I didn't understand this claim: "An optimal Bayesian procedure would have zero rationality gap, and indeed this gap is typically zero or small in practice."
-Drawing lines between the dots (and shading the area under the curve) in Figure 1 is inappropriate, since the different points don't follow a logical linear progression (it's really just a scatter plot).
-In Fact I, why do we need to take the max with zero? The result is still true even without the max, I believe.
-In Fact I, it would be helpful to comment on the effect of changing eta. Do we expect certain of these quantities to get bigger or smaller in that case? Any heuristic intuition for how to choose the best eta?
-Section 2.1 is a bit dense.
-I liked Figure 2 a lot.
ICLR
Title For self-supervised learning, Rationality implies generalization, provably Abstract We prove a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a representation r of the training data, and then fitting a simple (e.g., linear) classifier g to the labels. Specifically, we show that (under the assumptions described below) the generalization gap of such classifiers tends to zero if C(g) n, where C(g) is an appropriately-defined measure of the simple classifier g’s complexity, and n is the number of training samples. We stress that our bound is independent of the complexity of the representation r. We do not make any structural or conditional-independence assumptions on the representation-learning task, which can use the same training dataset that is later used for classification. Rather, we assume that the training procedure satisfies certain natural noise-robustness (adding small amount of label noise causes small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) conditions that widely hold across many standard architectures. We also conduct an extensive empirical study of the generalization gap and the quantities used in our assumptions for a variety of self-supervision based algorithms, including SimCLR, AMDIM and BigBiGAN, on the CIFAR-10 and ImageNet datasets. We show that, unlike standard supervised classifiers, these algorithms display small generalization gap, and the bounds we prove on this gap are often non vacuous. 1 INTRODUCTION The current standard approach for classification is “end-to-end supervised learning” where one fits a complex (e.g., a deep neural network) classifier to the given training set (Tan & Le, 2019; He et al., 2016). However, modern classifiers are heavily over parameterized, and as demonstrated by Zhang et al. (2017), can fit 100% of their training set even when given random labels as inputs (in which case test performance is no better than chance). Hence, the training performance of such methods is by itself no indication of their performance on new unseen test points. In this work, we study a different class of supervised learning procedures that have recently attracted significant interest. These classifiers are obtained by: (i) performing pre-training with a selfsupervised task (i.e., without labels) to obtain a complex representation of the data points, and then (ii) fitting a simple (e.g., linear) classifier on the representation and the labels. Such “Self-Supervised + Simple” (SSS for short) algorithms are commonly used in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020), and have recently found uses in other domains as well (Ravanelli et al., 2020; Liu et al., 2019). Compared to standard “end-to-end supervised learning”, SSS algorithms have several practical advantages. In particular, SSS algorithms can incorporate additional unlabeled data, the representation obtained can be useful for multiple downstream tasks, and they can have improved out-of-distribution performance (Hendrycks et al., 2019). Moreover, recent works show that even without additional unlabeled data, SSS algorithms can get close to state-of-art accuracy in several classification tasks (Chen et al., 2020b; He et al., 2020; Misra & Maaten, 2020; ∗Equal contribution. Email: {ybansal, galkaplun}@g.harvard.edu †Email: b@boazbarak.org. Tian et al., 2019). 
For instance, SimCLRv2 (Chen et al., 2020b) achieves 79.8% top-1 performance on ImageNet with a variant of ResNet-152, on par with the end-to-end supervised accuracy of this architecture at 80.5%. We show that SSS algorithms have another advantage over standard supervised learning—they often have a small generalization gap between their train and test accuracy, and we prove non-vacuous bounds on this gap. We stress that SSS algorithms use over-parameterized models to extract the representation, and reuse the same training data to learn a simple classifier on this representation. Thus, the final classifier they produce has high complexity by most standard measures, and it is by no means apriori evident that their generalization gap will be small. Our bound is obtained by first noting that the generalization gap of every training algorithm is bounded by the sum of three quantities, which we name the Robustness gap, Rationality gap, and Memorization gap (we call this the RRM bound, see Fact I). We now describe these gaps at a high level, deferring the formal definitions to Section 2. All three gaps involve comparison with a setting where we inject label noise by replacing a small fraction η of the labels with random values. The robustness gap corresponds to the amount by which training performance degrades by noise injection. That is, it equals the difference between the standard expected training accuracy (with no label noise) and the expected training accuracy in the noisy setting; in both cases, we measure accuracy with respect to the original (uncorrupted) labels. The robustness gap is nearly always small, and sometimes provably so (see Section 3). The rationality gap corresponds to the difference between performance on the noisy training samples (on which the training algorithm gets the wrong label) and test samples (on which it doesn’t get any label at all), again with respect to uncorrupted labels. An optimal Bayesian procedure would have zero rationality gap, and indeed this gap is typically zero or small in practice. Since it is a nonstandard quantity, We discuss the rationality gap in Section 3.1, and explain assuming it is small is both well-founded and does not trivialize the question of generalization. The memorization gap, which often accounts for the lion’s share of the generalization gap, corresponds to the difference in the noisy experiment between the training accuracy on the entire train set and the training accuracy on the samples that received the wrong label (both measured with respect to uncorrupted labels). The memorization gap can be thought of as quantifying the extent to which the classifier can “memorize” noisy labels, or act differently on the noisy points compared to the overall train set. The memorization gap is large in standard “end-to-end supervised training”. In contrast, our main theoretical result is that for SSS algorithms, the memorization gap is small if the simple classifier has small complexity, independently of the complexity of the representation. As long as the simple classifier is under-parameterized (i.e., its complexity is asymptotically smaller than the sample size), our bound on the memorization gap tends to zero. When combined with small rationality and robustness, we get concrete non-vacuous generalization bounds for various SSS algorithms on the CIFAR-10 and ImageNet datasets (see Figures 1 and 4). In a nutshell, our results are the following: Theoretical contributions. 1. 
Our main theoretical result (Theorem II) is that the memorization gap of an SSS algorithm is bounded byO( √ C/n) whereC is the complexity of the simple classifier produced in the “simple fit” stage. This bound is oblivious to the complexity of the representation produced in the pre-training and does not make any assumptions on the relationship between the representation learning method and the supervised learning task. One way to interpret this result is that we give a rigorous bound on the generalization gap of SSS algorithms, under the assumptions that the robustness and rationality gaps are bounded by some small constant (e.g., 5%). As mentioned below, these assumptions hold widely in practice across many different classifiers. Moreover, these assumptions are nontrivial and do not “assume away the difficulty”. Indeed, there are many natural examples of training algorithms for which these assumptions hold but the generalization gap is large. Last, making some assumptions is necessary for a generalization bound to hold for SSS algorithms; see Remark 3.1 and Appendix E. 2. We also give a theoretical justification for the assumption of a small rationality gap, by proving that a positive rationality gap corresponds to “leaving performance on the table”, in the sense that we can transform a learning procedure with a large rationality gap into a procedure with better test performance (Theorem 3.2). Empirical contributions. We complement the theoretical results above with an extensive empirical study of several SSS and end-to-end algorithms on both the CIFAR-10 and ImageNet datasets. 1. We study several top-performing SSS architectures, and show that they all exhibit relatively small generalization gaps on both CIFAR-10 and ImageNet. We stress that we consider the case where the same data is used for both representation learning and classification, and hence it is by no means a-priori obvious that these algorithms should have small generalization gaps. See Figures 1 and 4 for sample results and Section 4 for more details. 2. We also show that the results of Zhang et al. (2017) do not replicate to SSS algorithms, in the sense that such algorithms, despite using an over-parameterized representation, are not able to fit random label noise. 3. We understake an empirical study of the robustness, rationality, and memorization gaps for both SSS and end-to-end supervised learning algorithms. We show that the robustness and rationality gaps are small for all these algorithms, while the memorization gap is small for SSS algorithms but can be large for end-to-end supervised learning. We show that the RRM bound is typically non-vacuous, and in fact, often close to tight, for a variety of SSS algorithms on the CIFAR-10 and ImageNet datasets, including SimCLR (which achieves test errors close to its supervised counterparts). 4. We demonstrate that replacing the memorization gap with the upper bound of Theorem II yields a non-vacuous generalization bound for a variety of SSS algorithms on CIFAR-10 and ImageNet. Moreover, this bound gets tighter with more data augmentation. Related Work. There are many works on generalization bounds for supervised learning (e.g., Golowich et al. (2018); Neyshabur et al. (2017); Bartlett et al. (2017); Dziugaite & Roy (2017); Neyshabur et al. (2018); Cao & Gu (2019), and references therein). The related work section of Arora et al. (2019) contains an extensive discussion of such bounds, and why more often than not the assumptions used do not hold in practice. 
Indeed, many such bounds give vacuous guarantees for modern architectures (such as the ones considered in this paper) that have the capacity to memorize their entire training set (Zhang et al., 2017). Some non-vacuous bounds are known; e.g., Zhou et al. (2019) gave a 96.5% bound on the error of MobileNet on ImageNet. Belkin et al. (2019); Nagarajan & Kolter (2019) showed some barriers for generalization gaps for standard end-to-end supervised learning. Similarly, standard approaches such as Rademacher complexity cannot directly bound SSS algorithms’ generalization gap(see Remark 3.1). Recently, Saunshi et al. (2019) and Lee et al. (2020) gave generalization bounds for self-supervised based classifiers. The two works considered special cases of SSS algorithms, such as contrastive learning and pre-text tasks. Both works make strong statistical assumptions of (exact or approximate) conditional independence relating the pre-training and classification tasks. For example, if the pre-training task is obtained by splitting a given image x into two pieces (x1, x2) and predicting x2 from x1, then Lee et al. (2020)’s results require x1 and x2 to be approximately independent conditioned on their class y. However, in many realistic cases, the two parts of the same image will share a significant amount of information not explained by the label. Our work applies to general SSS algorithms without such statistical assumptions, at the expense of assuming bounds on the robustness and rationality gaps. There have been works providing rigorous bounds on the robustness gap or related quantities (See Section 3.). However, as far as we know, the rationality gap has not been explicitly defined or studied before. We provide a brief exposition of the various types of SSS methods in Section 4, and a more detailed discussion in Appendix D.1. Paper Organization. Section 2 contains formal definitions and statements of our results. Section 3 provides an overview of prior work and our new results on the three gaps of the RRM bound. In Section 4, we describe our experimental setup and detail our empirical results. Section 5 concludes the paper and discusses important open questions. We defer proofs and additional experimental results to the appendix. Appendix B contains the proof of Theorem II, while Appendix C contains the proof of Theorem 3.2. Appendix D fully details our experimental setup.1 Notation. We use capital letters (e.g., X) for random variables, lower case letters (e.g., x) for a single value, and bold font (e.g., x) for tuples (which will typically have dimension corresponding to the number of samples, denoted by n). We use xi for the i-th element of the tuple x. We use calligraphic letters (e.g., X ,D) for both sets and distributions. 2 FORMAL STATEMENT OF RESULTS A training procedure is a (possibly randomized) algorithm T that takes as input a train set (x,y) = (xi, yi)i∈[n] ∈ (X×Y)n and outputs a classifier f : X → Y . For our current discussion, we make no assumptions on the type of classifier output or the way that it is computed. We denote the distribution over training sets in (X ×Y)n byDtrain and the distribution over test samples in X ×Y byDtest.2 The generalization gap of a training algorithm T with respect to a distribution pair D = (Dtrain,Dtest) is the expected difference between its train accuracy (which we denote by TrainD,T ) and its test performance (which we denote by TestD,T ). We will often drop subscripts such as D, T when they can be inferred from the context. 
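To fix the notation in code, the following minimal sketch (our own naming, not the paper's released code) spells out a training procedure of the SSS form studied in this paper, together with the empirical accuracies that enter the generalization gap.

from typing import Callable, Sequence

Classifier = Callable[[object], int]

def sss_train(x: Sequence, y: Sequence[int],
              T_pre: Callable[[Sequence], Callable],
              T_fit: Callable[[Sequence, Sequence[int]], Classifier]) -> Classifier:
    r = T_pre(x)                          # representation r : X -> R (labels unused)
    g = T_fit([r(xi) for xi in x], y)     # simple classifier g : R -> Y
    return lambda x_new: g(r(x_new))      # final classifier f = g o r

def accuracy(f: Classifier, xs: Sequence, ys: Sequence[int]) -> float:
    return sum(f(xi) == yi for xi, yi in zip(xs, ys)) / len(ys)

# generalization gap = accuracy(f, x_train, y_train) - accuracy(f, x_test, y_test)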
We will also consider the η-noisy experiment, which involves computing the classifier f̃ = T (x, ỹ) where ỹi = yi with probability 1 − η and is uniform over Y otherwise. Our starting point is the following observation which we call the RRM bound (for Robustness, Rationality, and Memorization). The quantities appearing in it are defined in Table 1 and discussed more in depth in Section 3. Fact I (RRM bound). For every noise parameter η > 0, training procedure T and distribution D = (Dtrain,Dtest) over training sets and test samples, the RRM bound with respect to T and D is, 1We provide our code and data in an anonymous repository on: http://github.com/ICLR2021-rep-gen/. 2The train and test data often stem from the same distribution (i.e., Dtrain = Dntest), but not always (e.g., it does not hold if we use data augmentation). Dtest enters the RRM bound only via the rationality gap, so the assumption of small rationality may be affected if Dtrain 6= Dntest, but the RRM bound still holds. Train− Test︸ ︷︷ ︸ Generalization gap ≤ [ Train− Train(η) ] +︸ ︷︷ ︸ Robustnessgap + [ NTrain(η)− Test ] +︸ ︷︷ ︸ Rationality gap + [ Train(η)− NTrain(η) ] +︸ ︷︷ ︸ Memorization gap where we denote x+ = max(x, 0). The RRM bound is but an observation, as it directly follows from the fact that x+ ≥ x for every x. However, it is a very useful one. As mentioned above, for natural algorithms, we expect both the robustness and rationality components of this gap to be small, and hence the most significant component is the memorization gap. Our main theoretical result is a bound on this gap: Theorem II (Memorization gap bound). Let T = (Tpre, Tfit) be an SSS training procedure obtained by first training Tpre on x ∈ Xn to get a representation r : X → R and then training Tfit on (r(x),y) for y ∈ Yn to obtain a classifier g : R → Y , with the final classifier f : X → Y defined as f(x) = g(r(x)). Then, for every noise parameter η > 0 and distribution D over Xn × Yn: Memorization gap(T ) = TrainT,D(η)− NTrainT,D(η) ≤ O( √ Cη(Tfit) n · 1 η ) where Cη(Tfit) is a complexity measure of the second phase training procedure, which in particular is upper bounded by the number of bits required to describe the classifier g (See Definition 2.3.). 2.1 COMPLEXITY MEASURES We now define three complexity measures, all of which can be plugged in as the measure in Theorem II. The first one, Cmdl, is the minimum description length of a classifier in bits. At a first reading, the reader can feel free to skip the description of the other two measures Cpc and Cdc. These are superficially similar to Rademacher Complexity (cf. Bartlett & Mendelson (2002)) in the sense that they capture the ability of the hypothesis to correlate with random noise but crucially depend on the algorithm used rather than the class of concepts (see Remark 3.1). Definition 2.3 (Complexity of training procedures). Let T be a training procedure taking as input a set (r,y) = {(ri, yi)}ni=1 ∈ (R × Y)n and outputting a classifier g : r → Y and let η > 0. For every training set (r,y), we define the following three complexity measures with respect to r,y, η: • The minimum description length of T is defined as Cmdlr,y,η(T ) := H(g) where we consider the model g as a random variable arising in the η-noisy experiment.3 • The prediction complexity of T is defined as Cpcr,y,η(T ) := ∑n i=1 I(g(ri); ỹi) where the ỹi’s are the labels obtained in the η-noisy experiment. 
• The (unconditional) deviation complexity of T is defined as Cdcr,y,η(T ) := n · I(g(ri) − yi ; ỹi − yi) where the random variables above are taken over i ∼ [n] and subtraction is done modulo |Y|, identifying Y with the set {0, . . . , |Y| − 1}. 3The name “minimum description length” is justified by the operational definition of entropy relating it to the minimum amortized length of a prefix-free encoding of a random variable. Conditioned on y and the choice of the index i, the deviations g(ri)− yi and ỹi − yi determine the predictions g(ri) and noisy labels ỹi, and vice versa. Hence we can think of Cdc as an “averaged” variant of Cpc, where we make the choice of the index i part of the sample space for the random variables. While we expect the two measures to be approximately close, the fact that Cdc takes i into the sample space makes it easier to estimate this quantity in practice without using a large number of executions (See Figure D.2 for convergence rates.). The measure Cmdl is harder to evaluate in practice, as it requires finding the optimal compression scheme for the classifier. Appendix B contains the full proof of Theorem II. It is obtained by showing that: (i) for every r,y, η, and T it holds that Cdcr,y,η(T ) ≤ Cpcr,y,η(T ) ≤ Cmdlr,y,η(T ), and (ii) for every SSS algorithm T = (Tpre, Tfit) and distribution D = (Dtrain,Dtest), the memorization gap of T is at most√ CdcTpre(x),y,η(Tfit) / ( η √ 2n ) . (1) It is the quantity (1) that we compute in our experiments. 3 THE THREE GAPS We now briefly describe what is known and what we prove about the three components of the RRM bound. We provide some additional discussions in Appendix E, including “counter-examples” of algorithms that exhibit large values for each one of these gaps. The robustness gap. The robustness gap measures the decrease in training accuracy from adding η noisy labels, measured with respect to the clean labels. The robustness gap and related notions such as noise stability or tolerance have been studied in various works (cf. Frénay & Verleysen (2013); Manwani & Sastry (2013)). Interpolating classifiers (with zero train error) satisfy Train(η) ≥ 1 − η and hence their robustness gap is at most η (See left panel of Figure 2). In SSS algorithms, since the representation is learned without using labels, the injection of label noise only affects the simple classifier, which is often linear. Robustness guarantees for linear classifiers have been given previously by Rudin (2005). While proving robustness bounds is not the focus of this paper, we note in the appendix some simple bounds for least-squares minimization of linear classifiers and the (potentially inefficient) Empirical Risk Minimization algorithm (see Appendices F and G). Empirically, we observe that the robustness gap of SSS algorithms is often significantly smaller than η. (See left panels of Figure 2 and Figure 3.) The memorization gap. The memorization gap corresponds to the algorithm’s ability to fit the noise (i.e., the gap increases with the number of fit noisy labels). If, for example, the classifier output is interpolating, i.e., it satisfies f(xi) = ỹi for every i, then accuracy over the noisy samples will be 0 (since for them yi 6= ỹi). In contrast, the overall accuracy will be in expectation at least 1−η which means that the memorization gap will be≈ 1 for small η. However, we show empirically (see right panels of Figures 2 and 3) that the memorization gap is small for many SSS algorithms and prove a bound on it in Theorem II. 
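As an illustration, the deviation complexity can be estimated with a simple plug-in estimator over runs of the η-noisy experiment, as sketched below; the array names are placeholders, and in practice the estimate is averaged over several noisy trials (see Appendix D).

import numpy as np

def estimate_cdc(preds, y_clean, y_noisy, num_classes):
    # Plug-in estimate of Cdc = n * I(Delta; N), pooling the deviations
    # Delta = g(r_i) - y_i and N = y~_i - y_i (both mod |Y|) over all i
    # and over all trials.  Inputs have shape (num_trials, n).
    delta = (preds - y_clean) % num_classes
    noise = (y_noisy - y_clean) % num_classes
    n = y_clean.shape[1]

    joint = np.zeros((num_classes, num_classes))
    for d, m in zip(delta.ravel(), noise.ravel()):
        joint[d, m] += 1
    joint /= joint.sum()
    p_delta = joint.sum(axis=1, keepdims=True)   # marginal of Delta
    p_noise = joint.sum(axis=0, keepdims=True)   # marginal of N
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (p_delta * p_noise)[nz]))
    return n * mi                                # n times the empirical mutual information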
When combined with small rationality and robustness, this bound results in non-vacuous generalization bounds for various real settings (e.g., 48% for ResNet101 with SimCLRv2 on ImageNet, and as low as 4% for MoCo V2 with ResNet-18 on CIFAR-10). Moreover, unlike other generalization bounds, our bound decreases with data augmentation (See Figure 5.). Remark 3.1 (Memorization vs. Rademacher complexity). The memorization gap, as well the complexity measures defined in Section 2.1 have a superficial similarity to Rademacher complexity (Bartlett & Mendelson, 2002), in the sense that they quantify the ability of the output classifier to fit noise. One difference is that Rademacher complexity is defined with respect to 100% noise, while we consider the η-noisy experiment for small η. A more fundamental difference is that Rademacher complexity is defined via a supremum over all classifiers in some class. The final classifiers of SSS algorithms are obtained by a composition of the complex representation and simple classifier. This composed classifier will in general have high Radamacher complexity, and in particular we would not be able to prove non-vacuous bounds on it using Radamacher complexity. We cannot ignore the complexity of the representation in Radamacher-complexity based analysis of SSS algorithms since the representation is learned using the same data that is later used for classification. In fact, there are examples of SSS algorithms with simple classifiers that have large generalization gaps (see Section 3.1). This shows that Radamacher complexity bounds for the class of simple classifiers cannot, on their own, be used to derive generalization bounds. Zhang et al. (2017) demonstrated a lower bound on the Radamacher complexity of modern deep networks, by showing that modern end-to-end supervised learning algorithm can fit 100% of their label noise. Our experiments show that this is not the case for SSS algorithms, which can only fit 15%-25% of the CIFAR-10 training set when the labels are completely random (See Table D.1 in the appendix.). However, absence of evidence is not evidence of absence, and the fact that empirically SSS algorithms do not fit the noise, does not imply that the Radamacher complexity of the resulting class is small, nor does it, by its own, automatically imply a small generalization gap. 3.1 THE RATIONALITY GAP Unlike the other quantities defined above, the rationality gap is novel and less intuitive, and so we discuss it more in depth. The rationality gap, like all other quantities in the RRM bound, applies to any learning procedure and not only to SSS algorithms. Indeed, our empirical results show that rationality is typically small for both SSS and end-to-end algorithms, and so it is not this gap but rather the memorization gap that accounts for the difference in their generalization behavior. To build intuition for the rationality gap, consider an example of a training procedure T that on input a train set S, has 70% test accuracy and a 10% rationality gap with noise parameter η = 5%. In the η-noisy experiment, the classifier f̃ output by T recovers the original uncorrupted label for 80% of the ≈ η ·n datapoints for which it received the wrong labels. In contrast, 10% rationality gap means the same classifier will only succeed in recovering the label of 70% of unseen test samples. 
Intuitively, such a classifier is being “irrational” or “inconsistent” in the sense that it succeeds better on datapoints on which it was given the wrong label, then on datapoints on which it was given no label at all. (In error-correcting code parlance, it handles corruption errors better than erasure errors.) We can turn this intuition into a formal argument, by giving a transformation from such a training algorithm T to an algorithm T ′ that achieves roughly 80% test accuracy. On input a fresh unseen datapoint x, the algorithm T ′ chooses a random label ỹ ∼ Y , runs T on the train set S ∪ {(x, ỹ)} to obtain some classifier f̃ , and outputs f̃(x). Up to low-order terms, T ′ will achieve test accuracy at least as good as the performance of T on noisy datapoints, which is 80%. The above reasoning leads to the proof of the following theorem (see also Appendix C): Theorem 3.2 (Performance on the table theorem, informal). For every training procedure T and distribution Dtest, Dtrain = Dntest, there exists a training procedure T ′ satisfying TestT ′ ≥ TestT + rationality gap(T )− robustness gap(T )− o(1). Why do natural algorithms have a small rationality gap? Empirically, the rationality gap is often small or zero for both SSS and end-to-end supervised learning algorithms, particularly for betterperforming ones. (See middle panels of Figure 2 and Figure 3.) Theorem 3.2 provides an “economic explanation” for this phenomenon: a rational agent would not use a classifier with a positive rationality gap since this amounts to “leaving performance on the table”. However, this transformation comes at a high computational cost; inference for the classifier produced by T ′ is as expensive as retraining from scratch. Hence Theorem 3.2 does not fully explain why natural algorithms tend to have small rationality gap. In this paper we take low rationality gap as an empirically-justifiable assumption. We believe that both proving that natural algorithms have small rationality gaps, as well as coming up with computationally efficient transformations to extract performance from rationality gaps, are important open questions. Does assuming small rationality gap trivialize generalization? Since the definition of the rationality gap involves the test accuracy, the reader might wonder if assuming small rationality is not tantamount to assuming a small generalization gap. However, there is nothing “irrational” about a large generalization gap, and indeed many excellent classifiers have 100% train accuracy. In contrast, it is irrational to “leave performance on the table” and use a classifier with test accuracy pwhen it can be transformed into one with significantly better accuracy. Concretely, our empirical studies show that the rationality gap is uniformly small, even for end-to-end classifiers that have large generalization gaps. Hence, by itself, rationality is not enough to guarantee small generalization gap. Is assuming small rationality gap even needed? Since SSS algorithms use simple classifiers, the reader may wonder why we need the small-rationality gap assumption and cannot directly prove generalization bounds using standard tools such as Rademacher complexity. The issue is that the representation used by SSS algorithms is still sufficiently over-parameterized to allow memorizing the training set. As a pedagogical example, consider a representation-learning procedure that maps a label-free training set x to a representation r : X → R under which the differently labeled x’s are linearly separable. 
Moreover, suppose that the representation space has dimension much smaller than n, and hence a linear classifier would have small complexity under any reasonable measure. Without access to the labels, we can transform r to a representation r′ that on input x outputs r(x) if x is in the training set, and outputs the all-zero vector (or another trivial value) otherwise. Given sufficiently many parameters, the representation r′ (or a close-enough approximation) can be implemented by a neural network. Since r and r′ are identical on the training set, a learning procedure using r′ will have the same train accuracy and (small) memorization gap. However, the generalization gap of such a procedure will be large, since it will not achieve better than trivial accuracy on unseen test examples. The issue here is not that the representation “memorizes” the train set. Representations of practical SSS algorithms are highly over-parameterized and are quite likely to memorize specific aspects of the training set. Rather, the issue is that the representation artificially behaves differently on test points in a way that decreases its performance. It is the latter property that makes the classifier “irrational”, and violates the small rationality gap assumption. 4 EMPIRICAL STUDY OF THE RRM BOUND In support of our theoretical results, we conduct an extensive empirical study of the three gaps and empirically evaluate the bound from Equation (1) for a variety of SSS algorithms for the CIFAR10 and ImageNet datasets. We provide a summary of our setup and findings below. For a full description of the algorithms and hyperparameters, see Appendix D. SSS Algorithms (Tpre, Tfit). We consider various self-supervised training algorithms that learn a representation without explicit training labels. In our study, we include methods based on contrastive learning such as Instance Discrimination (Wu et al., 2018), MoCoV2 (He et al., 2020), SimCLR (Chen et al., 2020a;b), AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019), InfoMin (Tian et al., 2020) as well as adversarial methods such as BigBiGAN (Donahue & Simonyan, 2019). For the second phase of training (also known as the evaluation phase (Goyal et al., 2019)), we consider simple models such as regularized linear regression, or small Multi-Layer Perceptrons (MLPs). For each evaluation method, we run two experiments: 1) the clean experiment where we train Tfit on the data and labels (x,y); 2) the η-noisy experiment where we train Tfit on (x, ỹ) where ỹ are the η noised labels. Unless specified otherwise we set the noise to η = 5%. Adding augmentations. We investigate the effect of data augmentation on the three gaps and the theoretical bound. For each training point, we sample t random augmentations (t = 10 unless stated otherwise) and add it to the train set. Note that in the noisy experiment two augmented samples of the same original point might be assigned with different labels. We use the same augmentation used in the corresponding self-supervised training phase. Results. Figures 1 and 2 provide a summary of our experimental results for CIFAR-10. The robustness and rationality gaps are close to zero for most SSS algorithms, while the memorization gap is usually the dominant term, especially so for models with larger generalization gap. Moreover, we see that Cdc often produces a reasonably tight bound for the memorization gap, leading to a generalization bound that can be as low as 5-10%. 
In Figures 3 and 4 we give a summary of our experimental results for SSS algorithms on ImageNet. Again, the rationality and robustness gaps are bounded by small constants. Notice, that adding augmentations reduces memorization, but may lead to an increase in the rationality gap. This is also demonstrated in Figure 5 where we vary the number of data augmentations systematically for one SSS algorithm (AMDIM) on CIFAR-10. Since computing the Theorem II bound for ImageNet is computationally expensive (See Appendix D.5.1.) we compute it only for two algorithms, which achieve a non-vacuous generalization bound of 48%. 5 CONCLUSIONS AND OPEN QUESTIONS This work demonstrates that SSS algorithms have small generalization gaps. While our focus is on the memorization gap, our work motivates more investigation of both the robustness and rationality gaps. In particular, we are not aware of any rigorous bounds for the rationality gap of SSS algorithms, but we view our “performance on the table” theorem (Theorem 3.2) as a strong indication that it is close to zero for natural algorithms. Given our empirical studies, we believe the assumptions of small robustness and rationality conform well to practice. Our numerical bounds are still far from tight, especially for ImageNet, where evaluating the bound (more so with augmentations) is computationally expensive. Nevertheless, we find it striking that already in this initial work, we get non-vacuous (and sometimes quite good) bounds. Furthermore, the fact that the empirical RRM bound is often close to the generalization gap, shows that there is significant room for improvement. Overall, this work can be viewed as additional evidence for the advantages of SSS algorithms over end-to-end supervised learning. Moreover, some (very preliminary) evidence shows that end-toend supervised learning implicitly separates into a representation learning and classification phases (Morcos et al., 2018). Understanding the extent that supervised learning algorithms implicitly perform SSS learning is an important research direction in its own right. To the extent this holds, our work might shed light on such algorithms’ generalization performance as well. 6 ACKNOWLEDGEMENTS We thank Dimitris Kalimeris, Preetum Nakkiran, and Eran Malach for comments on early drafts of this work. This work supported in part by NSF award CCF 1565264, IIS 1409097, DARPA grant W911NF2010021, and a Simons Investigator Fellowship. We also thank Oracle and Microsoft for grants used for computational resources. Y.B is partially supported by MIT-IBM Watson AI Lab. Work partially performed while G.K. was an intern at Google Research. A MUTUAL INFORMATION FACTS Lemma A.1 . If A,B are two Bernoulli random variables with nonzero expectation then |E[A|B = 1]− E[A]| ≤ √ 1 2I(A;B)/E[B] Proof. A standard relation between mutual information and KL-divergence gives I(A;B) = DKL(pA,B ||pApB). On the other hand, by the Pinsker inequality, sup S⊆{0,1}×{0,1} |pA,B(S)− pA×B(S)| ≤ √ 1 2 DKL(pA,B ||pApB) = √ 1 2 I(A,B). Thus (letting S = {(1, 1)}), |Pr[A = 1, B = 1]−Pr[A = 1]Pr[B = 1]| ≤ √ 1 2I(A,B). Consequently, |E[A|B = 1]− E[A]| ≤ √ 1 2I(A,B))/E(B) Lemma A.2 . For three random variables W,X, Y , s.t. X and Y are independent, I(W ;X,Y ) ≥ I(W ;X) + I(W ;Y ) Proof. Using the chain rule for mutual information we have: I(W ;X,Y ) = I(W ;X) + I(W ;Y |X) Since X,Y are independent, H(Y |X) = H(Y ) and since conditioning only reduces entropy, we have H(Y |W,X) ≤ H(Y |W ). 
Combining the two we get, I(W ;Y |X) = H(Y |X)−H(Y |W,X) ≥ H(Y )−H(Y |W ) = I(W ;Y ) Thus we have that I(W ;X,Y ) ≥ I(W ;X) + I(W ;Y ). Note that by induction we can extend this argument to show that I(W ;X1, ..., Xn) ≥ ∑ I(W ;Xi) where Xi are mutually independent. B SIMPLE CLASSIFIERS IMPLY SMALL MEMORIZATION GAP In this appendix we we prove our main theoretical result (Theorem B.4). We will start by giving a formal definition of SSS algorithms and restating the definition of our complexity measures. Definition B.1 (SSS Algorithms, restated). An SSS algorithm over (X × Y)n is a procedure T = (Tpre, Tfit) that takes as input a set (x,y) and operates as follows: 1. Tpre takes the (label free) data points x as input and outputs a representation r : X → R for some setR; 2. On input the points {(r(xi), yi)}ni=1, Tfit outputs a simple classifier g : R :→ Y; 3. The output is a classifier f : X → Y defined as f(x) = g(r(x)) for every x ∈ X . We now restate the definitions of our complexity measure. Definition B.2 (Complexity of training procedures, restated). Let T be a training procedure taking as input (r,y) = {(ri, yi)}ni=1 ∈ (R × Y)n and outputting a classifier g : r → Y and let η > 0. For every training set (r,y): • The minimum description length of T with respect to r,y, η is defined as Cmdlr,y,η(T ) = H(g) where g is the random variable T (r, ỹ) in the η noisy experiment. • The prediction complexity of T with respect to r,y, η is defined as, Cpcr,y,η(T ) := n∑ i=1 I(g(ri); ỹi) where g(ri) and ỹi are viewed as random variables over the sample space induced by choosing ỹ according to the η-noisy experiment w.r.t. y and letting g = T (x, ỹ). • The deviation complexity of T with respect to r,y, η is defined as Cdcr,y,η(T ) := n · I(∆;N) where ∆ = g(ri) − yi (mod |Y|) and N = ỹi − yi (mod |Y|) are random variables taken over both the above sample space and the choice of i ∼ [n] and identifying Y with {0, . . . , |Y| − 1}. The following theorem shows that Cdc is upper bounded by Cpc, which in turn is bounded by the operational entropy of g. Theorem B.3 (Relation of complexity measures). For every r,y, η > 0, and T Cdcr,y,η(T ) ≤ Cpcr,y,η(T ) ≤ Cmdl(T ) where g is the classifier output by T (considered as a random variable). Proof. Fix T, r,y, η. We get ỹ by choosing i.i.d random variables N1, . . . , Nn, each equalling 0 with probability 1− η and uniform otherwise, and letting ỹi = yi +Ni (mod |Y|). We start by proving the second inequality Cpcr,y,η(T ) ≤ H(g). Let g = T (r, ỹ) and define p = (g(r1), . . . , g(rn)) be the vector of predictions. Then, Cpcr,y,η(T ) = ∑ i I(pi; ỹi) = ∑ i I(pi;Ni) (2) with the last equality holding since for fixed yi, Ni determines ỹi and vice versa. However, since the full vector p contains only more information than pi, the right-hand side of (2) is at most∑n i=1 I(p;Ni) ≤ I(p ; N1, . . . , Nn), using the fact that Ni random variables are independent (see Lemma A.2). For a fixed r, the value of p is completely determined by g and hence the entropy of p is at most H(g), establishing the second inequality of the theorem. We now turn to the first inequality Cdcr,y,η(T ) ≤ Cpcr,y,η(T ). Let ∆i = pi − yi (mod |Y|). Then, 1 nC pc r,y,η(T ) = E j∼[n] I(pj ;Nj) = E j∼[n] I(∆j ;Nj) (3) since pi determines ∆i and vice versa. But, since Nj = N |i = j and ∆j = ∆|i = j (where N,∆ are the random variables defined in Definition B.2), the right-hand side of (3) equals E j∼[n] I(∆;N |i = j) = E j∼[n] H(N |i = j)−H(N |∆, i = j) . (4) Since N1, . . . 
, Nn are identically distributed, H(N |i = j) = H(N) which means that the righthand side of (4) equals H(N)− E j∼[n] H(N |∆, i = j) ≥ H(N)−H(N |∆) = I(∆;N) with the inequality holding since on average conditioning reduces entropy. By definition I(∆;N) = 1 nC dc r,y,η(T ), establishing what we wanted to prove. The complexity measures Cpc and Cdc are defined with respect to a fixed train set (r,y), rendering them applicable for single training sets such as CIFAR-10 and ImageNet that arise in practice. If D is a distribution over (r,y), then we define the complexity measures Cpc and Cdc with respect toD as the average of the corresponding measure with respect to (r,y) ∼ D. We now restate Theorem II: Theorem B.4 (Theorem II, restated). Let T = (Tpre, Tfit) be a training procedure obtained by first training Tpre on x ∈ Xn to obtain a representation r : X → R and then training Tfit on (r(x),y)) where y ∈ Yn to obtain a classifier g : R → Y . Then, for every noise parameter η > 0 and distribution Dtrain over (X ,Y)n, Memorization gap(T ) = TrainDtrain,T (η)− NTrainDtrain,T (η) ≤ √ CdcDr,η(Tfit) 2n · 1 η where Dr is the distribution over (R×Y)n induced by Tpre on Dtrain. Note that the bound on the right-hand side is expressed only in terms of the complexity of the second stage Tfit and is independent of the complexity of Tpre. The crux of the proof is showing (close to) independence between the corrupted indices and prediction deviation of g resulting from the noise. Proof. Let (r,y) be sampled by first drawing (x,y) ∼ Dtrain over (X×Y)n then applying r = r(x) where r = Tpre(x). Consider the sample space of sampling ỹ according to the η-noisy distribution with respect to Y , computing g = Tfit(r, ỹ), and sampling i ∼ [n]. We define the following two Bernoulli random variables over this sample space: Z = 1∆=0 = { 1 g(Ri) = yi 0 otherwise ; B = 1N 6=0 = { 1 ỹi 6= yi 0 otherwise . For a given r,y, since Z is determined by ∆ and B is determined by N , I(Z;B) ≤ I(∆;N) = Cdcr,y,η(Tfit)/n. By Lemma A.1, for every Bernoulli random variables B,Z |E[Z]− E[Z|B = 1]| ≤ √ 1 2I(Z;B)/E[B] And hence in our case (since E[B] = η), E[Z]− E[Z|B = 1] ≤ √ Cdcr,y,η(Tfit) 2n · 1 η . But E[Z] corresponds to the probability that g(r) = y for (r, y) in the train set, while E[Z|B = 1] corresponds to this probability over the noisy samples. Hence the memorization gap is bounded by E (r,y)∼Dr [√ Cdcr,y,η(Tfit) 2n · 1 η ] ≤ 1η √ E (r,y)∼Dr [ Cdcr,y,η(Tfit) 2n ] = √ CdcR,η(Tfit) 2n · 1 η using the Jensen inequality and the concavity of square root for the first inequality. C POSITIVE RATIONALITY GAP LEAVES ROOM FOR IMPROVEMENT In this appendix, we prove the “performance on the table theorem” that states that we can always transform a robust training procedure with a positive rationality gap into a training procedure with better performance: Theorem C.1 (Performance on the table theorem, restated). For every training procedure T and Dtest, n, η, if Dtrain = Dntest there exists a training procedure S such that TestS,D,n ≥ NTrainT,D,n(η)− o(1) (5) where o(1) is a term that vanishes with n, and under the assumption that TrainT,D,n(η) ≥ NTrainT,D,n(η). For any reasonable training procedure T , performance on noisy train samples will not be better than the overall train accuracy, and hence the assumption will be satisfied. 
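Concretely, the procedure S constructed in the proof below admits a very short description. The following sketch is purely illustrative (it assumes the training procedure is available as a callable mapping a labeled dataset to a classifier, and all names are placeholders rather than the code used in our experiments):

```python
import random

def make_rational(train_procedure, stored_train_set, label_set):
    """Wrap a training procedure T into the inference procedure S of Theorem C.1.

    `train_procedure(dataset)` is assumed to return a classifier f: x -> label.
    S does no training up front; at inference time it re-runs T on the stored
    (possibly noisy) train set with the query point planted under a random label.
    """
    def predict(x):
        i = random.randrange(len(stored_train_set))   # random index to replace
        y_rand = random.choice(label_set)              # uniformly random label for x
        perturbed = list(stored_train_set)
        perturbed[i] = (x, y_rand)                     # plant the query point in the train set
        f = train_procedure(perturbed)                 # re-run T on the perturbed set
        return f(x)                                    # S's prediction on x
    return predict
```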
In particular (since we can always add noise to our data), the above means that we can obtain a procedure S′ whose clean test performance is at least TestT + ∆ where ∆ = NTrainT (η)− TestT is the rationality gap of T . Hence if the rationality gap is larger than the robustness gap, we can use the above to improve the test performance of “irrational” networks. (Note that the robustness gap of almost all standard training procedure is at most η and in fact often much smaller.) We stress that the procedure of Theorem 3.2, while running in “polynomial time”, is not particularly practical, since it makes inference be as computationally expensive as training. However, it is a proof of concept that irrational networks are, to some extent, “leaving performance on the table”. Proof. Let T be a procedure as above. Our algorithm S would be the following: • Training: The algorithm will not do any training but on input labelsD = {(xi, ỹi)} simply stores these labels. • Inference: On input a data point x, Algorithm S will choose i ∈ [n] at random, and run T on the data D replacing the i-th sample with (x, ỹ) where ỹ is chosen uniformly at random. The output is f(x) where f is the classifier output by T First note that while the number of noisy samples could change by one by replacing (xi, yi) with (x, ỹ), since this number is distributed according to the Binomial distribution with mean ηn and standard deviation √ (1− η)ηn 1, this change can affect probabilities by at most o(1) additive factor. If Y has k classes, then with probability 1− 1/k we will make (x, ỹ) noisy (y 6= ỹ) in which case the expected performance on it will be NTrainT (η). With probability 1/k, we choose the correct label y in which case performance on this sample will be equal to the expected performance on clean samples which by our assumptions is at least NTrainT (η) as well. D EXPERIMENTAL DETAILS We perform an empirical study of the RRM bound for a wide variety of self-supervised training methods on the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) training datasets. We provide a brief description of all the self-supervised training methods that appear in our results below. For each method, we use the official pre-trained models on ImageNet wherever available. Since very few methods provide pre-trained models for CIFAR-10, we train models from scratch. The architectures and other training hyper-parameters are summarized in Table H.4 and Table H.3. Since our primary aim is to study the RRM bound, we do not optimize for reaching the state-of-the-art performance in our re-implementations. For the second phase of training, we use L2-regularized linear regression, or small non-interpolating Multi-layer perceptrons (MLPs). D.1 SELF-SUPERVISED TRAINING METHODS (TPRE) There are a variety of self-supervised training methods for learning representations without explicit labels. The two chief classes of self-supervised learning methods are: 1. Contrastive learning: These methods seek to find an embedding of the dataset that pushes a positive pair of images close together and a pair of negative images far from each other. For example, two different augmented versions of the same image may be considered a positive pair, while two different images may be considered a negative pair. Different methods such as Instance Discrimination, MoCo, SimCLR, AMDIM, differ in the the way they select the positive/negative pairs, as well other details like the use of a memory bank or the encoder architecture. 
(See Falcon & Cho (2020) for detailed comparison of these methods). 2. Handcrafted pretext tasks: These methods learn a representation by designing a fairly general supervised task, and utilizing the penultimate or other intermediate layers of this network as the representation. Pretext tasks include a variety of methods such as predicting the rotation angle of an input image (Gidaris et al., 2018), solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), denoising images (Vincent et al., 2008) or image inpainting (Pathak et al., 2016). Additionally, adversarial image generation can be used for by augmenting a the image generator with an encoder (Donahue & Simonyan, 2019). We focus primarily on contrastive learning methods since they achieve state-of-the-art performance. We now describe these methods briefly. Instance Discrimination: (Wu et al., 2018) In essence, Instance Discrimination performs supervised learning with each training sample as a separate class. They minimize the non-parametric softmax loss given below for the training dataset. J(θ) = − n∑ i=1 log ( exp(vTi v/τ)∑n j=1 exp(v T i v/τ) ) (6) where vi = fθ(xi) is the feature vector for the i-th example. They use memory banks and a contrastive loss (also known as Noise Contrastive Estimation or NCE (Gutmann & Hyvärinen, 2010)) for computing this loss efficiently for large datasets. So in this case, a positive pair is an image and itself, while a negative pair is two different training images. Momentum Contrastive (MoCo): (He et al., 2020) MoCo replaces the memory bank in Instance Discrimination with a momentum-based query encoder. MoCoV2 (Chen et al., 2020c) uses various modifications from SimCLR, like a projection head, and combines it with the MoCo framework for improved performance. AMDIM: (Bachman et al., 2019) AMDIM uses two augmented versions of the same image. For these augmentations, they use random resized crops, random jitters in color space, random horizontal flip and random conversion to grayscale. They apply the NCE loss across multiple scales, by using features from multiple layers. They use a modified ResNet by changing the receptive fields to decrease overlap between positive pairs. CMC: (Tian et al., 2019) CMC creates two views for contrastive learning by converting each image into the Lab color space. L and ab channels from the same image are considered to be a positive pair, while those from two different images are considered to be a negative pair. PiRL: (Misra & Maaten, 2020) PiRL first creates a jigsaw transformation of an image (it divides an image into 9 patches and shuffles these patches). It treats an image and its jigsaw as a positive pair, and that of a different image as a negative pair. They additionally modify encoder on the jigsaw branch. SimCLRv1 and SimCLRv2: (Chen et al., 2020a;b) SimCLR also use strong augmentations to create positive and negative pairs. They use random resized crops, random Gaussian blur and random jitters in color space. Crucially, they use a projection head that maps the representations to a 128- dimensional space where they apply the contrastive loss. They do not use a memory bank, but use a large batch size. InfoMin: InfoMin uses random resized crop, color jitter and gaussian blur, as well as jigsaw shuffling from PiRL. D.2 SIMPLE CLASSIFIER (TFIT) After training the representation learning method, we extract representations r for the training and test images. We do not add random augmentations to the training images (unless stated otherwise). 
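As a concrete, purely illustrative sketch of this feature-extraction step (assuming a frozen PyTorch encoder and a standard data loader; not the exact code used in our experiments):

```python
import torch

@torch.no_grad()
def extract_features(encoder, loader, device="cpu"):
    """Run a frozen, pre-trained encoder over a dataset and collect representations.

    `encoder` is any torch.nn.Module mapping an image batch to a feature batch;
    `loader` yields (images, labels) batches. No augmentation is applied here.
    """
    encoder.eval().to(device)
    feats, labels = [], []
    for x, y in loader:
        r = encoder(x.to(device))            # representation r(x), gradients disabled
        feats.append(r.flatten(1).cpu())     # flatten in case the encoder returns a feature map
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)
```

The extracted representations are then reused for both the clean and the noisy fitting experiments described next.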
Then, we train a simple classifier on the dataset {r(xi), yi}ni=1. We use a linear classifier in most cases, but we also try a small multi-layer perceptron (as long as it has few parameters and does not interpolate the training data). We add weight decay in some methods to achieve good test accuracy (See Table H.4 for values for each method.) For the noisy experiment, we set the noise level to η = 5%. To compute the complexity bound Cdc we run 20 trials of the noisy experiment for CIFAR10 and 50 trials for ImageNet. D.3 EXPERIMENTAL DETAILS FOR EACH PLOT Figure 1. This figure shows the robustness, rationality and memorization gap for various SSS algorithms trained on CIFAR-10. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.3. For the second phase Tfit, we use L2regularized linear regression for all the methods. For each algorithm listed in Table H.3, the figure contains 2 points, one without augmentations, and one with augmentations. Further, we compute the complexity measure Cdc for all the methods. All the values (along with the test accuracy) are listed in Table H.1. Figure 2. This figure shows the robustness, rationality and memorization for CIFAR-10 for all the same methods as in Figure 1. We only include the points without augmentation to show how rationality behaves when (Dtrain,Dtest) are identical. All the values (along with the test accuracy) are listed in Table H.1. For the supervised architectures, we train a Myrtle-5 (Page, 2018) convolutional network, a ResNet-18 (He et al., 2016) and a WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with standard training hyperparameters. Figure 3 and Figure 4. These figures show the robustness, rationality and memorization for the ImageNet dataset. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.4. For the second phase Tfit, we use L2-regularized linear regression for all the methods. The figures also contain some points with 10 augmentations per training image. Further, we compute the complexity measure Cdc for all three methods - SimCLRv2 with architectures ResNet-50-1x and ResNet-101-2x. All the values (along with the test accuracy) are listed in Table H.2. Figure 5 This figure shows the effect of increasing augmentations. We add t = {2, ..., 10} augmentations and re-train the simple classifier. We do this for the CIFAR-10 dataset, AMDIM selfsupervised training with the AMDIM encoder and linear regression (See Table H.3 for the hyperparameters). D.4 ADDITIONAL RESULTS D.4.1 GENERALIZATION ERROR OF SSS ALGORITHMS To show that SSS algorithms have qualitatively different generalization behavior compared to standard end-to-end supervised methods, we repeat the experiment from Zhang et al. (2017). We randomize all the training labels in the CIFAR-10 dataset and train 3 high-performing SSS methods on these noisy labels. For results see Table D.1. Unlike fully supervised methods, SSS algorithms do not achieve 100% training accuracy on the dataset with noisy labels. In fact, their training accuracies are fairly low (≈ 15-25%). This suggests that the empirical Rademacher complexity is bounded. The algorithms were trained without any augmentations during the simple fitting phase for both SSS and supervised algorithms. The SSS methods were trained using parameters described in Table H.3. 
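For completeness, the simple-classifier phase and the repeated noisy trials used throughout this appendix can be summarized in the following condensed sketch. The closed-form ridge fit, the default regularization value, and the function names are illustrative; the actual hyper-parameters for each method are those listed in Tables H.3 and H.4.

```python
import numpy as np

def fit_ridge_classifier(R, y, num_classes, reg=1e-2):
    """L2-regularized linear regression onto one-hot labels, in closed form.
    R is an (n, d) matrix of frozen representations, y the integer labels."""
    Y = np.eye(num_classes)[y]
    W = np.linalg.solve(R.T @ R + reg * np.eye(R.shape[1]), R.T @ Y)
    return lambda Rq: (Rq @ W).argmax(axis=1)

def noisy_trials(R_tr, y_tr, num_classes, eta=0.05, trials=20, seed=0):
    """Repeat the eta-noisy experiment; returns pooled predictions, clean labels and
    noisy labels, from which the accuracies and the Cdc estimate can be computed."""
    rng = np.random.default_rng(seed)
    preds, noisy = [], []
    for _ in range(trials):
        flip = rng.random(len(y_tr)) < eta                       # labels re-sampled with prob. eta
        y_noisy = np.where(flip, rng.integers(num_classes, size=len(y_tr)), y_tr)
        g = fit_ridge_classifier(R_tr, y_noisy, num_classes)
        preds.append(g(R_tr))
        noisy.append(y_noisy)
    return np.concatenate(preds), np.tile(y_tr, trials), np.concatenate(noisy)
```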
D.5 RRM BOUND WITH VARYING NOISE PARAMETER We now investigate the effect of varying noise levels on the three gaps as well as on the complexity. We see that the robustness gap increases as we add more noise—this is expected as noise should affect the clean training accuracy. We also observe that the memorization gap decreases, suggesting that Cdcη as a function of η goes down faster than η 2 (see Appendix B). The Theorem II bound on memorization gap also decays strongly with the η, becoming more tight as the noise increases. D.5.1 CONVERGENCE OF COMPLEXITY MEASURES We now plot the complexity measures Cdc and Cpc with increasing number of trials for one of the SSS algorithms. As expected, Cdc < Cpc and Cdc converges in about 20 trials for CIFAR-10. On the other hand, the complexity computations for ImageNet need many more trials for convergence, since it contains about 10 augmentations×1.2 million training samples making it cost prohibitive to compute for all the methods. For the CIFAR-10, we use AMDIM with the AMDIM encoder architecture without augmentations. For ImageNet, we use SimCLRv2 with the ResNet-101 architecture with 10 augmentations per training sample. E EXAMPLES OF ALGORITHMS WITH LARGE GAPS While we argued that SSS algorithms will tend to have small robustness, rationality, and memorization gaps, this does not hold in the worst case and there are examples of such algorithms that exhibit large gaps in each of those cases. E.1 LARGE ROBUSTNESS GAP Large robustness gap can only arise via computational (as opposed to statistical) considerations. That is, if a training procedure outputs a classifier f ∈ F that achieves on average accuracy α on a clean train set (X,Y ), then with high probability, if (X, Ỹ ) is an η-noisy train set then there exists f ∈ F that achieves α(1− η) accuracy on this train set (by fitting only the “clean” points). However, the training algorithm might not always be able to find such a classifier. For example, if the distribution has the form (x, y) = (x, ∑ ajxj mod 2) where x ∼ GF (2)` = Z`2 and a ∈ GF (2)` is some hidden vector, then there is an efficient algorithm (namely Gaussian elimination) to find a given the samples (x, y) and hence get accuracy 1. However, for every ε > 0 and η > 0, there is no known efficient algorithm that, given a 1− η perturbed equations of the form {〈a, xi〉 = ỹi}i∈[n] finds a′ ∈ GF (2)` such that ∑ a′jxj = ∑ ajxj mod 2 on a 1/2 + ε fraction of the x’s. This is known as the learning parity with noise (LPN) problem (Blum et al., 1993). The assumption of robustness is necessary for a small generalization gap, in the sense that we can come up with (contrived) examples of algorithms that have small rationality and memorization gaps while still having large generalization gap. For example, consider an algorithm T that has large generalization gap (high train accuracy and small test accuracy) , and suppose we augment to the following algorithm T ′(x,y) = { T (x,y) if y is “clean” 0 if y is “noisy” where 0 denotes the constant zero function (e.g., some trivial classifier) and we use some algorithm to estimate whether or not the labels are noisy. (Such estimates can often be achieved in many natural cases.) The algorithm T ′ will inherit the generalization gap of T , since that depends only on the experiment without noise. Since performance on noisy and clean training samples will be the same (close to random), will have zero memorization gap. Since we have assumed small test accuracy, it will have zero rationality gap also. 
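To make the parity example above concrete, the following illustrative sketch (not part of our experiments) recovers the hidden vector a from clean parity samples by Gaussian elimination over GF(2); it is exactly this step for which no efficient analogue is known once an η fraction of the labels is flipped (the LPN problem).

```python
import numpy as np

def solve_parity_gf2(X, y):
    """Recover a hidden vector a from clean parity samples y_i = <a, x_i> mod 2
    by Gauss-Jordan elimination over GF(2). Returns one consistent solution."""
    A = np.concatenate([X, y[:, None]], axis=1).astype(np.uint8) % 2  # augmented matrix [X | y]
    n, d = X.shape
    row = 0
    for col in range(d):
        pivot = next((r for r in range(row, n) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]        # move a pivot row into place
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # eliminate this column from all other rows
        row += 1
    a = np.zeros(d, dtype=np.uint8)
    for r in range(row):                         # read off pivot variables (free variables set to 0)
        lead = np.flatnonzero(A[r, :d])
        if lead.size:
            a[lead[0]] = A[r, d]
    return a

# tiny demo: with enough clean samples, elimination finds a vector consistent with all equations
rng = np.random.default_rng(0)
d, n = 20, 200
a_true = rng.integers(0, 2, d)
X = rng.integers(0, 2, (n, d))
y = (X @ a_true) % 2
assert np.array_equal((X @ solve_parity_gf2(X, y)) % 2, y)
```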
E.2 LARGE RATIONALITY GAP As discussed in Section C, in the case that Dtrain = Dntest, a robust algorithm with large rationality gap leaves “performance on the table”. We can obtain such algorithms by artificially dropping performance on the test data. For example, in the SSS framework, since the representation r is over-parameterized and can memorize the entire train set, we can consider the trivial representation r(x) = { x x in train set 0 otherwise If we now train some simple classifier on r(x) then it can have non-trivial performance on the noisy train samples, while getting trivial accuracy on all samples outside the train set. In cases where Dtrain and Dtest are different (for example when Dtrain is an augmented version of Dtest) then we can no longer claim that a large rationality gap corresponds to “leaving performance on the table”. For example, we do observe (mild) growth in the rationality gap as we add more augmented points to the training set. E.3 LARGE MEMORIZATION GAP It is not hard to find examples of networks with large memorization gap. Indeed, as mentioned before, any standard interpolating supervised learning algorithm will get a memorization gap close to 1. F ROBUSTNESS OF LEAST SQUARES CLASSIFIERS One can prove robustness for classes of algorithms under varying assumptions. As a simple example, we record here a self-contained observation of how margin leads to robustness in least squares minimization. (We believe that this bound is folklore, but weren’t able to find the right reference.) This is a very simple but also pessimistic bound, and much better ones often hold. Lemma F.1 . Let x1, . . . , xn ∈ Rd and y1, . . . , yn ∈ [k], and consider a linear function f : Rd → Rk that minimizes the quantity ∑ i∈[n],j∈[k] |f(xi)j−1yi=j |2, and suppose that for p fraction of the i’s, the maximum over j ∈ [k] of f(xi) is γ larger than the second-largest value. Then in expectation, if we let ỹ be the η-noisy version of y and f̃ minimizes ∑ i∈[n],j∈[k] |f̃(xi)j − 1ỹi=j |2, we get that arg maxj f̃(xi) = yi for at least p− 4η/γ2 fraction of the i’s. Proof. We identify y with its “one hot” encoding as a vector in Rnk. Let V ⊆ Rnk be the subspace of all vectors of the form (g(x1), . . . , g(xn)) for linear g : Rd → Rk. If f is the minimizer in the theorem statement, and p = (f(x1), . . . , f(xn)) then p = ΠV y where ΠV is the orthogonal projection to the subspace v. If f̃ is the minimizer for the noisy labels and p̃ = (f̃(x1), . . . , f̃(xn)), then p̃ = ΠV ỹ = ΠV (y + e) where e is the noise vector ỹ − y. Hence ‖p − p̃‖ = ‖ΠV e‖ ≤ ‖e‖. But in expectation ‖e‖2 ≤ 2ηn (since we flip a label with probability ≤ η). For every point i for which the margin was at least γ in p, if p̃’s prediction is different in i, then the contribution of the i-th block to their square norm difference is at least γ2/2 (by shifting the maximum coordinate by −γ/2 and the second largest one by γ/2). Hence at most 4ηn/γ2 of these points could have different predictions in p and p̃ G ROBUSTNESS OF EMPIRICAL RISK MINIMIZER The (potentially inefficient) algorithm that minimizes the classification errors is always robust. Lemma G.1 . Let T (x,y) = arg minf∈F ∑n i=1 1f(xi)6=yi . Then for every η > 0, Robustness gap(T ) ≤ 2η . Proof. Let x,y be any train set, and let α = ming∈F ∑n i=1 1g(xi)6=yi and f be the minimizer of this quantity. Let ỹ be the η-noisy version of y and let η̃ be the fraction of i on which yi 6= ỹi. Then, n∑ i=1 1f(xi)6=yi ≤ α+ η̃ . 
(7) Hence if f̃ is the minimizer of (7) then we know that f̃(xi) 6= ỹi for at most α + η̃ fraction of the i’s, and so f̃(xi) 6= yi for at most α + 2η̃ fraction of the i’s. Since the train accuracy of T is 1− α and in expectation of η̃ is η, we get that in expectation TrainT (η) ≥ TrainT − 2η H LARGE TABLES Table H.1 – Summary of all the methods, architectures and the corresponding results (gaps and accuracies) on CIFAR-10, sorted by generalization gap. While Figure 1 already plots this data, here we also provide the test performance of the corresponding models. Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc mocov2 resnet18 True -7.35 0.07 0.21 0.00 3.47 0.28 67.19 mocov2 wide resnet50 2 True -6.37 0.18 1.03 0.00 7.63 1.21 70.99 mocov2 resnet101 True -6.01 0.15 0.71 0.00 6.38 0.86 68.58 mocov2 resnet50 True -5.38 0.19 0.84 0.00 6.99 1.03 69.68 simclr resnet50 True -2.89 0.30 0.55 0.00 6.63 0.85 91.96 amdim resnet101 True -0.91 0.64 3.70 0.00 25.99 4.34 63.56 amdim resnet18 True 0.33 0.23 1.15 0.00 8.66 1.38 62.84 mocov2 resnet18 False 1.43 0.15 1.24 0.03 14.14 1.43 67.60 simclr resnet18 False 1.43 0.28 0.79 0.36 13.35 1.43 82.50 amdim wide resnet50 2 True 1.60 0.69 2.46 0.00 19.20 3.15 64.38 simclr resnet50 False 1.97 0.22 0.78 0.97 15.75 1.97 92.00 simclr resnet50 False 2.24 0.52 1.71 0.01 19.53 2.24 84.94 mocov2 resnet50 False 2.72 0.30 2.96 0.00 24.18 3.26 70.09 mocov2 resnet101 False 2.82 0.33 3.03 0.00 22.78 3.36 69.08 mocov2 wide resnet50 2 False 3.11 0.38 2.79 0.00 22.39 3.18 70.84 amdim resnet50 bn True 3.69 0.84 4.22 0.00 31.12 5.06 66.44 amdim resnet18 False 4.34 0.42 4.58 0.00 33.47 5.00 62.28 amdim amdim encoder True 4.43 0.68 0.36 3.39 10.32 4.43 87.33 amdim amdim encoder False 6.68 2.08 5.69 0.00 70.52 7.77 87.38 amdim resnet101 False 12.46 1.22 14.26 0.00 100.00 15.49 62.43 amdim wide resnet50 2 False 13.07 1.70 15.33 0.00 100.00 17.03 63.80 amdim resnet50 bn False 14.73 1.81 16.63 0.00 100.00 18.43 66.28 Table H.2 – Summary of all the methods, architectures their corresponding results (gaps and accuracies) on ImageNet, sorted by generalization gap. While Figure 4 already plots this data, here we also provide the test performance of the corresponding models. 
Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc simclrv2 r50 1x sk0 True -2.34 0.26 0.68 0.00 46.93 0.94 70.96 simclrv2 r101 2x sk0 True 0.63 0.10 0.80 0.00 47.90 0.91 77.24 simclrv2 r152 2x sk0 True 1.00 0.13 0.77 0.10 NA 1.00 77.65 moco ResNet-50 True 1.32 0.57 0.93 0.00 NA 1.49 70.15 InfoMin ResNet-50 True 4.88 0.81 1.01 3.06 NA 4.88 72.29 PiRL ResNet-50 True 6.23 0.29 0.99 4.95 NA 6.23 60.56 InsDis ResNet-50 True 6.85 0.25 1.13 5.46 NA 6.85 58.30 simclrv2 r101 1x sk1 False 8.23 0.71 4.66 2.86 NA 8.23 76.07 InfoMin ResNet-50 False 10.21 2.34 8.96 0.00 NA 11.31 70.31 simclrv2 r152 1x sk0 False 10.32 1.12 6.93 2.26 NA 10.32 74.17 simclrv2 r101 1x sk0 False 10.53 1.11 6.99 2.42 NA 10.53 73.04 simclrv2 r50 1x sk0 False 10.62 0.99 7.31 2.31 NA 10.62 70.69 moco ResNet-50 False 10.72 1.82 7.86 1.04 NA 10.72 68.39 simclrv2 r152 2x sk0 False 10.92 0.75 7.45 2.72 NA 10.92 77.25 simclrv2 r101 2x sk0 False 11.02 0.74 7.51 2.78 NA 11.02 76.72 simclr ResNet50 1x False 11.07 1.22 7.73 2.13 NA 11.07 68.73 simclrv2 ResNet-50 False 11.16 0.64 7.67 2.85 NA 11.16 74.99 PiRL ResNet-50 False 11.43 1.49 8.26 1.68 NA 11.43 59.11 InsDis ResNet-50 False 12.02 1.40 8.52 2.10 NA 12.02 56.67 amdim ResNet-50 False 13.62 0.90 9.72 3.01 NA 13.62 67.69 CMC ResNet-50 False 14.73 2.30 12.30 0.13 NA 14.73 54.60 bigbigan ResNet-50 False 29.60 3.13 25.19 1.27 NA 29.60 50.24 Table H.3 – Summary of training methods with their hyper-parameters for CIFAR-10 Selfsupervised method Backbone Architectures Self-supervised Training Evaluation Simple Phase Optimization AMDIM AMDIM Encoder PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-18 ResNet-50 WideResNet-50 ResNet 101 MoCoV2 ResNet-18 PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-50 WideResNet-50 ResNet 101 SimCLR ResNet-18 Batchsize = 128 Epochs 200 Linear SGD Momentum = 0.9 Constant LR = 0.1 Weight decay 1e-6 ResNet-50 ResNet-50 Batchsize = 512Epochs 600 Table H.4 – Summary of training methods with their hyper-parameters for ImageNet Self-supervised method Backbone Architecture Pre-trained Model Evaluation Optimization Weight Decay Epochs Instance Discrimination ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 MoCo ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 PiRL ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 CMC ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 AMDIM AMDIM Encoder Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-3 40 BigBiGAN ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-5 40 SimCLRv1 ResNet-50 1x Official Linear SGDMomentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-50 4x SimCLRv2 ResNet-50 1x SK0 Official Linear SGD Momentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-101 2x SK0ResNet-152 2x SK0 ResNet-152 3x SK0
1. What is the main contribution of the paper regarding generalization gaps in self-supervised learning? 2. What are the strengths and weaknesses of the proposed RRM decomposition? 3. How does the reviewer assess the novelty and significance of the paper's contributions in isolation to self-supervised learning? 4. What are the limitations of the paper, particularly in its claims and comparisons with prior works? 5. How could the paper provide further insights into improving the generalization performance of certain algorithms or tasks using the RRM decomposition?
Review
Review
The paper analyzes the generalization gap of self-supervised learning. Its main contribution is the proposal to decompose the generalization bound into three terms: robustness, rationality, and memorization (RRM). The three terms explain the generalization gap from different perspectives. With the help of the RRM decomposition, the paper proves that, since SSS algorithms do not memorize data, small robustness and rationality gaps naturally guarantee a small generalization gap. Although I believe the RRM decomposition itself is novel and might bring good insights for future studies, when restricted to self-supervised learning the paper's main results (the SSS part) seem to state something obvious in a fancy way. It is well understood that self-supervised learning has a small generalization gap, considering that the downstream task only learns from a function class of small capacity. For each fixed T_pre, the generalization gap is guaranteed to be small by uniform convergence. Remark 3.2 does not really make sense to me. In SSS, T_pre is obtained from x sampled from the marginal distribution (with a much larger dataset), and T_fit is trained on (x, y) pairs generated from the joint distribution. T_pre is not supposed to see the same training examples, especially y, and therefore T_pre should not be memorizing the data samples. The prior work on understanding SSL mentioned in this paper aims to explain why the representation learned from pretext tasks is useful for the downstream task, and hence proves a small generalization error (rather than a generalization gap), which is more important for understanding the success of self-supervised learning. Assumptions like (approximate) conditional independence were only needed to show a small approximation error and were not needed for proving a small generalization gap; therefore removing these assumptions does not seem to be a real contribution here. The paper would be more interesting to me if it focused more on the RRM decomposition and gave further insight into how these terms can be used in the future to improve the generalization performance of particular algorithms or tasks.
ICLR
Title For self-supervised learning, Rationality implies generalization, provably Abstract We prove a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a representation r of the training data, and then fitting a simple (e.g., linear) classifier g to the labels. Specifically, we show that (under the assumptions described below) the generalization gap of such classifiers tends to zero if C(g) n, where C(g) is an appropriately-defined measure of the simple classifier g’s complexity, and n is the number of training samples. We stress that our bound is independent of the complexity of the representation r. We do not make any structural or conditional-independence assumptions on the representation-learning task, which can use the same training dataset that is later used for classification. Rather, we assume that the training procedure satisfies certain natural noise-robustness (adding small amount of label noise causes small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) conditions that widely hold across many standard architectures. We also conduct an extensive empirical study of the generalization gap and the quantities used in our assumptions for a variety of self-supervision based algorithms, including SimCLR, AMDIM and BigBiGAN, on the CIFAR-10 and ImageNet datasets. We show that, unlike standard supervised classifiers, these algorithms display small generalization gap, and the bounds we prove on this gap are often non vacuous. 1 INTRODUCTION The current standard approach for classification is “end-to-end supervised learning” where one fits a complex (e.g., a deep neural network) classifier to the given training set (Tan & Le, 2019; He et al., 2016). However, modern classifiers are heavily over parameterized, and as demonstrated by Zhang et al. (2017), can fit 100% of their training set even when given random labels as inputs (in which case test performance is no better than chance). Hence, the training performance of such methods is by itself no indication of their performance on new unseen test points. In this work, we study a different class of supervised learning procedures that have recently attracted significant interest. These classifiers are obtained by: (i) performing pre-training with a selfsupervised task (i.e., without labels) to obtain a complex representation of the data points, and then (ii) fitting a simple (e.g., linear) classifier on the representation and the labels. Such “Self-Supervised + Simple” (SSS for short) algorithms are commonly used in natural language processing tasks (Devlin et al., 2018; Brown et al., 2020), and have recently found uses in other domains as well (Ravanelli et al., 2020; Liu et al., 2019). Compared to standard “end-to-end supervised learning”, SSS algorithms have several practical advantages. In particular, SSS algorithms can incorporate additional unlabeled data, the representation obtained can be useful for multiple downstream tasks, and they can have improved out-of-distribution performance (Hendrycks et al., 2019). Moreover, recent works show that even without additional unlabeled data, SSS algorithms can get close to state-of-art accuracy in several classification tasks (Chen et al., 2020b; He et al., 2020; Misra & Maaten, 2020; ∗Equal contribution. Email: {ybansal, galkaplun}@g.harvard.edu †Email: b@boazbarak.org. Tian et al., 2019). 
For instance, SimCLRv2 (Chen et al., 2020b) achieves 79.8% top-1 performance on ImageNet with a variant of ResNet-152, on par with the end-to-end supervised accuracy of this architecture at 80.5%. We show that SSS algorithms have another advantage over standard supervised learning—they often have a small generalization gap between their train and test accuracy, and we prove non-vacuous bounds on this gap. We stress that SSS algorithms use over-parameterized models to extract the representation, and reuse the same training data to learn a simple classifier on this representation. Thus, the final classifier they produce has high complexity by most standard measures, and it is by no means apriori evident that their generalization gap will be small. Our bound is obtained by first noting that the generalization gap of every training algorithm is bounded by the sum of three quantities, which we name the Robustness gap, Rationality gap, and Memorization gap (we call this the RRM bound, see Fact I). We now describe these gaps at a high level, deferring the formal definitions to Section 2. All three gaps involve comparison with a setting where we inject label noise by replacing a small fraction η of the labels with random values. The robustness gap corresponds to the amount by which training performance degrades by noise injection. That is, it equals the difference between the standard expected training accuracy (with no label noise) and the expected training accuracy in the noisy setting; in both cases, we measure accuracy with respect to the original (uncorrupted) labels. The robustness gap is nearly always small, and sometimes provably so (see Section 3). The rationality gap corresponds to the difference between performance on the noisy training samples (on which the training algorithm gets the wrong label) and test samples (on which it doesn’t get any label at all), again with respect to uncorrupted labels. An optimal Bayesian procedure would have zero rationality gap, and indeed this gap is typically zero or small in practice. Since it is a nonstandard quantity, We discuss the rationality gap in Section 3.1, and explain assuming it is small is both well-founded and does not trivialize the question of generalization. The memorization gap, which often accounts for the lion’s share of the generalization gap, corresponds to the difference in the noisy experiment between the training accuracy on the entire train set and the training accuracy on the samples that received the wrong label (both measured with respect to uncorrupted labels). The memorization gap can be thought of as quantifying the extent to which the classifier can “memorize” noisy labels, or act differently on the noisy points compared to the overall train set. The memorization gap is large in standard “end-to-end supervised training”. In contrast, our main theoretical result is that for SSS algorithms, the memorization gap is small if the simple classifier has small complexity, independently of the complexity of the representation. As long as the simple classifier is under-parameterized (i.e., its complexity is asymptotically smaller than the sample size), our bound on the memorization gap tends to zero. When combined with small rationality and robustness, we get concrete non-vacuous generalization bounds for various SSS algorithms on the CIFAR-10 and ImageNet datasets (see Figures 1 and 4). In a nutshell, our results are the following: Theoretical contributions. 1. 
Our main theoretical result (Theorem II) is that the memorization gap of an SSS algorithm is bounded byO( √ C/n) whereC is the complexity of the simple classifier produced in the “simple fit” stage. This bound is oblivious to the complexity of the representation produced in the pre-training and does not make any assumptions on the relationship between the representation learning method and the supervised learning task. One way to interpret this result is that we give a rigorous bound on the generalization gap of SSS algorithms, under the assumptions that the robustness and rationality gaps are bounded by some small constant (e.g., 5%). As mentioned below, these assumptions hold widely in practice across many different classifiers. Moreover, these assumptions are nontrivial and do not “assume away the difficulty”. Indeed, there are many natural examples of training algorithms for which these assumptions hold but the generalization gap is large. Last, making some assumptions is necessary for a generalization bound to hold for SSS algorithms; see Remark 3.1 and Appendix E. 2. We also give a theoretical justification for the assumption of a small rationality gap, by proving that a positive rationality gap corresponds to “leaving performance on the table”, in the sense that we can transform a learning procedure with a large rationality gap into a procedure with better test performance (Theorem 3.2). Empirical contributions. We complement the theoretical results above with an extensive empirical study of several SSS and end-to-end algorithms on both the CIFAR-10 and ImageNet datasets. 1. We study several top-performing SSS architectures, and show that they all exhibit relatively small generalization gaps on both CIFAR-10 and ImageNet. We stress that we consider the case where the same data is used for both representation learning and classification, and hence it is by no means a-priori obvious that these algorithms should have small generalization gaps. See Figures 1 and 4 for sample results and Section 4 for more details. 2. We also show that the results of Zhang et al. (2017) do not replicate to SSS algorithms, in the sense that such algorithms, despite using an over-parameterized representation, are not able to fit random label noise. 3. We understake an empirical study of the robustness, rationality, and memorization gaps for both SSS and end-to-end supervised learning algorithms. We show that the robustness and rationality gaps are small for all these algorithms, while the memorization gap is small for SSS algorithms but can be large for end-to-end supervised learning. We show that the RRM bound is typically non-vacuous, and in fact, often close to tight, for a variety of SSS algorithms on the CIFAR-10 and ImageNet datasets, including SimCLR (which achieves test errors close to its supervised counterparts). 4. We demonstrate that replacing the memorization gap with the upper bound of Theorem II yields a non-vacuous generalization bound for a variety of SSS algorithms on CIFAR-10 and ImageNet. Moreover, this bound gets tighter with more data augmentation. Related Work. There are many works on generalization bounds for supervised learning (e.g., Golowich et al. (2018); Neyshabur et al. (2017); Bartlett et al. (2017); Dziugaite & Roy (2017); Neyshabur et al. (2018); Cao & Gu (2019), and references therein). The related work section of Arora et al. (2019) contains an extensive discussion of such bounds, and why more often than not the assumptions used do not hold in practice. 
Indeed, many such bounds give vacuous guarantees for modern architectures (such as the ones considered in this paper) that have the capacity to memorize their entire training set (Zhang et al., 2017). Some non-vacuous bounds are known; e.g., Zhou et al. (2019) gave a 96.5% bound on the error of MobileNet on ImageNet. Belkin et al. (2019); Nagarajan & Kolter (2019) showed some barriers for generalization gaps for standard end-to-end supervised learning. Similarly, standard approaches such as Rademacher complexity cannot directly bound SSS algorithms’ generalization gap(see Remark 3.1). Recently, Saunshi et al. (2019) and Lee et al. (2020) gave generalization bounds for self-supervised based classifiers. The two works considered special cases of SSS algorithms, such as contrastive learning and pre-text tasks. Both works make strong statistical assumptions of (exact or approximate) conditional independence relating the pre-training and classification tasks. For example, if the pre-training task is obtained by splitting a given image x into two pieces (x1, x2) and predicting x2 from x1, then Lee et al. (2020)’s results require x1 and x2 to be approximately independent conditioned on their class y. However, in many realistic cases, the two parts of the same image will share a significant amount of information not explained by the label. Our work applies to general SSS algorithms without such statistical assumptions, at the expense of assuming bounds on the robustness and rationality gaps. There have been works providing rigorous bounds on the robustness gap or related quantities (See Section 3.). However, as far as we know, the rationality gap has not been explicitly defined or studied before. We provide a brief exposition of the various types of SSS methods in Section 4, and a more detailed discussion in Appendix D.1. Paper Organization. Section 2 contains formal definitions and statements of our results. Section 3 provides an overview of prior work and our new results on the three gaps of the RRM bound. In Section 4, we describe our experimental setup and detail our empirical results. Section 5 concludes the paper and discusses important open questions. We defer proofs and additional experimental results to the appendix. Appendix B contains the proof of Theorem II, while Appendix C contains the proof of Theorem 3.2. Appendix D fully details our experimental setup.1 Notation. We use capital letters (e.g., X) for random variables, lower case letters (e.g., x) for a single value, and bold font (e.g., x) for tuples (which will typically have dimension corresponding to the number of samples, denoted by n). We use xi for the i-th element of the tuple x. We use calligraphic letters (e.g., X ,D) for both sets and distributions. 2 FORMAL STATEMENT OF RESULTS A training procedure is a (possibly randomized) algorithm T that takes as input a train set (x,y) = (xi, yi)i∈[n] ∈ (X×Y)n and outputs a classifier f : X → Y . For our current discussion, we make no assumptions on the type of classifier output or the way that it is computed. We denote the distribution over training sets in (X ×Y)n byDtrain and the distribution over test samples in X ×Y byDtest.2 The generalization gap of a training algorithm T with respect to a distribution pair D = (Dtrain,Dtest) is the expected difference between its train accuracy (which we denote by TrainD,T ) and its test performance (which we denote by TestD,T ). We will often drop subscripts such as D, T when they can be inferred from the context. 
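Schematically, these quantities (together with the η-noisy variants introduced next) can be estimated as follows for any training procedure; the interface below is purely illustrative and is not the code used in our experiments.

```python
import numpy as np

def noisy_labels(y, num_classes, eta, rng):
    """The eta-noisy experiment: each label is kept w.p. 1-eta and replaced
    by a uniformly random class otherwise."""
    resample = rng.random(len(y)) < eta
    return np.where(resample, rng.integers(num_classes, size=len(y)), y)

def rrm_quantities(T, x_tr, y_tr, x_te, y_te, num_classes, eta, seed=0):
    """Estimate Train, Test, Train(eta) and NTrain(eta) for a training procedure T.
    T(x, y) is assumed to return a classifier mapping inputs to predicted labels."""
    rng = np.random.default_rng(seed)
    f = T(x_tr, y_tr)                              # clean experiment
    train = np.mean(f(x_tr) == y_tr)
    test = np.mean(f(x_te) == y_te)

    y_noisy = noisy_labels(y_tr, num_classes, eta, rng)
    f_noisy = T(x_tr, y_noisy)                     # eta-noisy experiment
    pred = f_noisy(x_tr)
    train_eta = np.mean(pred == y_tr)              # accuracy w.r.t. the original labels
    wrong = y_noisy != y_tr                        # samples that actually received a wrong label
    ntrain_eta = np.mean(pred[wrong] == y_tr[wrong]) if wrong.any() else float("nan")
    return train, test, train_eta, ntrain_eta
```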
We will also consider the η-noisy experiment, which involves computing the classifier f̃ = T(x, ỹ) where ỹ_i = y_i with probability 1 − η and is uniform over Y otherwise. Our starting point is the following observation which we call the RRM bound (for Robustness, Rationality, and Memorization). The quantities appearing in it are defined in Table 1 and discussed in more depth in Section 3.

Fact I (RRM bound). For every noise parameter η > 0, training procedure T and distribution D = (Dtrain, Dtest) over training sets and test samples, the RRM bound with respect to T and D is
$$\underbrace{\mathrm{Train} - \mathrm{Test}}_{\text{Generalization gap}} \;\le\; \underbrace{\bigl[\mathrm{Train} - \mathrm{Train}(\eta)\bigr]_{+}}_{\text{Robustness gap}} \;+\; \underbrace{\bigl[\mathrm{NTrain}(\eta) - \mathrm{Test}\bigr]_{+}}_{\text{Rationality gap}} \;+\; \underbrace{\bigl[\mathrm{Train}(\eta) - \mathrm{NTrain}(\eta)\bigr]_{+}}_{\text{Memorization gap}}$$
where we denote $x_{+} = \max(x, 0)$.

1 We provide our code and data in an anonymous repository on: http://github.com/ICLR2021-rep-gen/.
2 The train and test data often stem from the same distribution (i.e., Dtrain = D^n_test), but not always (e.g., it does not hold if we use data augmentation). Dtest enters the RRM bound only via the rationality gap, so the assumption of small rationality may be affected if Dtrain ≠ D^n_test, but the RRM bound still holds.

The RRM bound is but an observation, as it directly follows from the fact that $x_{+} \ge x$ for every $x$. However, it is a very useful one. As mentioned above, for natural algorithms, we expect both the robustness and rationality components of this gap to be small, and hence the most significant component is the memorization gap. Our main theoretical result is a bound on this gap:

Theorem II (Memorization gap bound). Let T = (Tpre, Tfit) be an SSS training procedure obtained by first training Tpre on x ∈ X^n to get a representation r : X → R and then training Tfit on (r(x), y) for y ∈ Y^n to obtain a classifier g : R → Y, with the final classifier f : X → Y defined as f(x) = g(r(x)). Then, for every noise parameter η > 0 and distribution D over X^n × Y^n:
$$\text{Memorization gap}(T) \;=\; \mathrm{Train}_{T,\mathcal{D}}(\eta) - \mathrm{NTrain}_{T,\mathcal{D}}(\eta) \;\le\; O\!\left(\sqrt{\frac{C_{\eta}(T_{\mathrm{fit}})}{n}} \cdot \frac{1}{\eta}\right)$$
where C_η(Tfit) is a complexity measure of the second-phase training procedure, which in particular is upper bounded by the number of bits required to describe the classifier g (see Definition 2.3).

2.1 COMPLEXITY MEASURES

We now define three complexity measures, all of which can be plugged in as the measure in Theorem II. The first one, Cmdl, is the minimum description length of a classifier in bits. At a first reading, the reader can feel free to skip the description of the other two measures Cpc and Cdc. These are superficially similar to Rademacher complexity (cf. Bartlett & Mendelson (2002)) in the sense that they capture the ability of the hypothesis to correlate with random noise but crucially depend on the algorithm used rather than the class of concepts (see Remark 3.1).

Definition 2.3 (Complexity of training procedures). Let T be a training procedure taking as input a set (r, y) = {(r_i, y_i)}_{i=1}^n ∈ (R × Y)^n and outputting a classifier g : R → Y, and let η > 0. For every training set (r, y), we define the following three complexity measures with respect to r, y, η:

• The minimum description length of T is defined as $C^{\mathrm{mdl}}_{\mathbf{r},\mathbf{y},\eta}(T) := H(g)$, where we consider the model g as a random variable arising in the η-noisy experiment.3

• The prediction complexity of T is defined as $C^{\mathrm{pc}}_{\mathbf{r},\mathbf{y},\eta}(T) := \sum_{i=1}^{n} I(g(r_i); \tilde{y}_i)$, where the ỹ_i's are the labels obtained in the η-noisy experiment.
• The (unconditional) deviation complexity of T is defined as Cdcr,y,η(T ) := n · I(g(ri) − yi ; ỹi − yi) where the random variables above are taken over i ∼ [n] and subtraction is done modulo |Y|, identifying Y with the set {0, . . . , |Y| − 1}. 3The name “minimum description length” is justified by the operational definition of entropy relating it to the minimum amortized length of a prefix-free encoding of a random variable. Conditioned on y and the choice of the index i, the deviations g(ri)− yi and ỹi − yi determine the predictions g(ri) and noisy labels ỹi, and vice versa. Hence we can think of Cdc as an “averaged” variant of Cpc, where we make the choice of the index i part of the sample space for the random variables. While we expect the two measures to be approximately close, the fact that Cdc takes i into the sample space makes it easier to estimate this quantity in practice without using a large number of executions (See Figure D.2 for convergence rates.). The measure Cmdl is harder to evaluate in practice, as it requires finding the optimal compression scheme for the classifier. Appendix B contains the full proof of Theorem II. It is obtained by showing that: (i) for every r,y, η, and T it holds that Cdcr,y,η(T ) ≤ Cpcr,y,η(T ) ≤ Cmdlr,y,η(T ), and (ii) for every SSS algorithm T = (Tpre, Tfit) and distribution D = (Dtrain,Dtest), the memorization gap of T is at most√ CdcTpre(x),y,η(Tfit) / ( η √ 2n ) . (1) It is the quantity (1) that we compute in our experiments. 3 THE THREE GAPS We now briefly describe what is known and what we prove about the three components of the RRM bound. We provide some additional discussions in Appendix E, including “counter-examples” of algorithms that exhibit large values for each one of these gaps. The robustness gap. The robustness gap measures the decrease in training accuracy from adding η noisy labels, measured with respect to the clean labels. The robustness gap and related notions such as noise stability or tolerance have been studied in various works (cf. Frénay & Verleysen (2013); Manwani & Sastry (2013)). Interpolating classifiers (with zero train error) satisfy Train(η) ≥ 1 − η and hence their robustness gap is at most η (See left panel of Figure 2). In SSS algorithms, since the representation is learned without using labels, the injection of label noise only affects the simple classifier, which is often linear. Robustness guarantees for linear classifiers have been given previously by Rudin (2005). While proving robustness bounds is not the focus of this paper, we note in the appendix some simple bounds for least-squares minimization of linear classifiers and the (potentially inefficient) Empirical Risk Minimization algorithm (see Appendices F and G). Empirically, we observe that the robustness gap of SSS algorithms is often significantly smaller than η. (See left panels of Figure 2 and Figure 3.) The memorization gap. The memorization gap corresponds to the algorithm’s ability to fit the noise (i.e., the gap increases with the number of fit noisy labels). If, for example, the classifier output is interpolating, i.e., it satisfies f(xi) = ỹi for every i, then accuracy over the noisy samples will be 0 (since for them yi 6= ỹi). In contrast, the overall accuracy will be in expectation at least 1−η which means that the memorization gap will be≈ 1 for small η. However, we show empirically (see right panels of Figures 2 and 3) that the memorization gap is small for many SSS algorithms and prove a bound on it in Theorem II. 
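The complexity term in this bound is estimated from the noisy trials. One simple plug-in estimator of the quantity in (1) is sketched below; it is illustrative, and the exact estimator used for the reported numbers may differ.

```python
import numpy as np

def mutual_information(a, b, num_classes):
    """Plug-in estimate of I(a; b) (natural log) from paired discrete samples."""
    joint = np.zeros((num_classes, num_classes))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

def deviation_complexity(preds, y_clean, y_noisy, num_classes, n_train):
    """Estimate Cdc = n * I(Delta; N) with Delta = g(r_i) - y_i and N = ytilde_i - y_i (mod |Y|).
    `preds`, `y_clean`, `y_noisy` may pool several independent noisy trials to reduce the
    variance of the mutual-information estimate; `n_train` is the size of one train set."""
    delta = (preds - y_clean) % num_classes
    noise = (y_noisy - y_clean) % num_classes
    return n_train * mutual_information(delta, noise, num_classes)
```

The empirical memorization-gap bound is then obtained, as in (1), by plugging the Cdc estimate into √(Cdc / 2n) · (1/η).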
When combined with small rationality and robustness, this bound results in non-vacuous generalization bounds for various real settings (e.g., 48% for ResNet101 with SimCLRv2 on ImageNet, and as low as 4% for MoCo V2 with ResNet-18 on CIFAR-10). Moreover, unlike other generalization bounds, our bound decreases with data augmentation (See Figure 5.). Remark 3.1 (Memorization vs. Rademacher complexity). The memorization gap, as well the complexity measures defined in Section 2.1 have a superficial similarity to Rademacher complexity (Bartlett & Mendelson, 2002), in the sense that they quantify the ability of the output classifier to fit noise. One difference is that Rademacher complexity is defined with respect to 100% noise, while we consider the η-noisy experiment for small η. A more fundamental difference is that Rademacher complexity is defined via a supremum over all classifiers in some class. The final classifiers of SSS algorithms are obtained by a composition of the complex representation and simple classifier. This composed classifier will in general have high Radamacher complexity, and in particular we would not be able to prove non-vacuous bounds on it using Radamacher complexity. We cannot ignore the complexity of the representation in Radamacher-complexity based analysis of SSS algorithms since the representation is learned using the same data that is later used for classification. In fact, there are examples of SSS algorithms with simple classifiers that have large generalization gaps (see Section 3.1). This shows that Radamacher complexity bounds for the class of simple classifiers cannot, on their own, be used to derive generalization bounds. Zhang et al. (2017) demonstrated a lower bound on the Radamacher complexity of modern deep networks, by showing that modern end-to-end supervised learning algorithm can fit 100% of their label noise. Our experiments show that this is not the case for SSS algorithms, which can only fit 15%-25% of the CIFAR-10 training set when the labels are completely random (See Table D.1 in the appendix.). However, absence of evidence is not evidence of absence, and the fact that empirically SSS algorithms do not fit the noise, does not imply that the Radamacher complexity of the resulting class is small, nor does it, by its own, automatically imply a small generalization gap. 3.1 THE RATIONALITY GAP Unlike the other quantities defined above, the rationality gap is novel and less intuitive, and so we discuss it more in depth. The rationality gap, like all other quantities in the RRM bound, applies to any learning procedure and not only to SSS algorithms. Indeed, our empirical results show that rationality is typically small for both SSS and end-to-end algorithms, and so it is not this gap but rather the memorization gap that accounts for the difference in their generalization behavior. To build intuition for the rationality gap, consider an example of a training procedure T that on input a train set S, has 70% test accuracy and a 10% rationality gap with noise parameter η = 5%. In the η-noisy experiment, the classifier f̃ output by T recovers the original uncorrupted label for 80% of the ≈ η ·n datapoints for which it received the wrong labels. In contrast, 10% rationality gap means the same classifier will only succeed in recovering the label of 70% of unseen test samples. 
Intuitively, such a classifier is being “irrational” or “inconsistent” in the sense that it succeeds better on datapoints on which it was given the wrong label, then on datapoints on which it was given no label at all. (In error-correcting code parlance, it handles corruption errors better than erasure errors.) We can turn this intuition into a formal argument, by giving a transformation from such a training algorithm T to an algorithm T ′ that achieves roughly 80% test accuracy. On input a fresh unseen datapoint x, the algorithm T ′ chooses a random label ỹ ∼ Y , runs T on the train set S ∪ {(x, ỹ)} to obtain some classifier f̃ , and outputs f̃(x). Up to low-order terms, T ′ will achieve test accuracy at least as good as the performance of T on noisy datapoints, which is 80%. The above reasoning leads to the proof of the following theorem (see also Appendix C): Theorem 3.2 (Performance on the table theorem, informal). For every training procedure T and distribution Dtest, Dtrain = Dntest, there exists a training procedure T ′ satisfying TestT ′ ≥ TestT + rationality gap(T )− robustness gap(T )− o(1). Why do natural algorithms have a small rationality gap? Empirically, the rationality gap is often small or zero for both SSS and end-to-end supervised learning algorithms, particularly for betterperforming ones. (See middle panels of Figure 2 and Figure 3.) Theorem 3.2 provides an “economic explanation” for this phenomenon: a rational agent would not use a classifier with a positive rationality gap since this amounts to “leaving performance on the table”. However, this transformation comes at a high computational cost; inference for the classifier produced by T ′ is as expensive as retraining from scratch. Hence Theorem 3.2 does not fully explain why natural algorithms tend to have small rationality gap. In this paper we take low rationality gap as an empirically-justifiable assumption. We believe that both proving that natural algorithms have small rationality gaps, as well as coming up with computationally efficient transformations to extract performance from rationality gaps, are important open questions. Does assuming small rationality gap trivialize generalization? Since the definition of the rationality gap involves the test accuracy, the reader might wonder if assuming small rationality is not tantamount to assuming a small generalization gap. However, there is nothing “irrational” about a large generalization gap, and indeed many excellent classifiers have 100% train accuracy. In contrast, it is irrational to “leave performance on the table” and use a classifier with test accuracy pwhen it can be transformed into one with significantly better accuracy. Concretely, our empirical studies show that the rationality gap is uniformly small, even for end-to-end classifiers that have large generalization gaps. Hence, by itself, rationality is not enough to guarantee small generalization gap. Is assuming small rationality gap even needed? Since SSS algorithms use simple classifiers, the reader may wonder why we need the small-rationality gap assumption and cannot directly prove generalization bounds using standard tools such as Rademacher complexity. The issue is that the representation used by SSS algorithms is still sufficiently over-parameterized to allow memorizing the training set. As a pedagogical example, consider a representation-learning procedure that maps a label-free training set x to a representation r : X → R under which the differently labeled x’s are linearly separable. 
Moreover, suppose that the representation space has dimension much smaller than n, and hence a linear classifier would have small complexity under any reasonable measure. Without access to the labels, we can transform r to a representation r′ that on input x outputs r(x) if x is in the training set, and outputs the all-zero vector (or another trivial value) otherwise. Given sufficiently many parameters, the representation r′ (or a close-enough approximation) can be implemented by a neural network. Since r and r′ are identical on the training set, a learning procedure using r′ will have the same train accuracy and (small) memorization gap. However, the generalization gap of such a procedure will be large, since it will not achieve better than trivial accuracy on unseen test examples. The issue here is not that the representation “memorizes” the train set. Representations of practical SSS algorithms are highly over-parameterized and are quite likely to memorize specific aspects of the training set. Rather, the issue is that the representation artificially behaves differently on test points in a way that decreases its performance. It is the latter property that makes the classifier “irrational”, and violates the small rationality gap assumption. 4 EMPIRICAL STUDY OF THE RRM BOUND In support of our theoretical results, we conduct an extensive empirical study of the three gaps and empirically evaluate the bound from Equation (1) for a variety of SSS algorithms for the CIFAR10 and ImageNet datasets. We provide a summary of our setup and findings below. For a full description of the algorithms and hyperparameters, see Appendix D. SSS Algorithms (Tpre, Tfit). We consider various self-supervised training algorithms that learn a representation without explicit training labels. In our study, we include methods based on contrastive learning such as Instance Discrimination (Wu et al., 2018), MoCoV2 (He et al., 2020), SimCLR (Chen et al., 2020a;b), AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019), InfoMin (Tian et al., 2020) as well as adversarial methods such as BigBiGAN (Donahue & Simonyan, 2019). For the second phase of training (also known as the evaluation phase (Goyal et al., 2019)), we consider simple models such as regularized linear regression, or small Multi-Layer Perceptrons (MLPs). For each evaluation method, we run two experiments: 1) the clean experiment where we train Tfit on the data and labels (x,y); 2) the η-noisy experiment where we train Tfit on (x, ỹ) where ỹ are the η noised labels. Unless specified otherwise we set the noise to η = 5%. Adding augmentations. We investigate the effect of data augmentation on the three gaps and the theoretical bound. For each training point, we sample t random augmentations (t = 10 unless stated otherwise) and add it to the train set. Note that in the noisy experiment two augmented samples of the same original point might be assigned with different labels. We use the same augmentation used in the corresponding self-supervised training phase. Results. Figures 1 and 2 provide a summary of our experimental results for CIFAR-10. The robustness and rationality gaps are close to zero for most SSS algorithms, while the memorization gap is usually the dominant term, especially so for models with larger generalization gap. Moreover, we see that Cdc often produces a reasonably tight bound for the memorization gap, leading to a generalization bound that can be as low as 5-10%. 
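To make the clean and η-noisy experiments just described concrete, the following sketch computes the three gaps for a generic training procedure. It is a minimal illustration and not the evaluation pipeline used in the paper: the names (`empirical_gaps`, `noisy_labels`, the nearest-centroid toy classifier) are hypothetical, and the noise model replaces a label with a uniformly random class with probability η.

```python
import numpy as np

def noisy_labels(y, eta, num_classes, rng):
    """eta-noisy labels: with probability eta a label is replaced by a uniformly
    random class (which may coincide with the original label)."""
    y_tilde = y.copy()
    flip = rng.random(len(y)) < eta
    y_tilde[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return y_tilde

def empirical_gaps(fit, x_tr, y_tr, x_te, y_te, num_classes, eta=0.05, seed=0):
    """Estimate the robustness, memorization and rationality gaps of a training
    procedure `fit`, which maps (inputs, labels) to a predict(inputs) function."""
    rng = np.random.default_rng(seed)

    # Clean experiment: train and test accuracy of the resulting classifier.
    predict = fit(x_tr, y_tr)
    train_acc = np.mean(predict(x_tr) == y_tr)
    test_acc = np.mean(predict(x_te) == y_te)

    # eta-noisy experiment: retrain on corrupted labels, evaluate w.r.t. clean labels.
    y_noisy = noisy_labels(y_tr, eta, num_classes, rng)
    corrupted = y_noisy != y_tr
    predict_noisy = fit(x_tr, y_noisy)
    preds = predict_noisy(x_tr)
    train_eta = np.mean(preds == y_tr)                         # Train(eta)
    ntrain_eta = np.mean(preds[corrupted] == y_tr[corrupted])  # NTrain(eta): corrupted points only

    return {
        "generalization": train_acc - test_acc,
        "robustness": train_acc - train_eta,
        "memorization": train_eta - ntrain_eta,
        "rationality": ntrain_eta - test_acc,
    }

def nearest_centroid_fit(x, y):
    """Toy stand-in for T_fit: classify by the nearest class centroid of the representations."""
    classes = np.unique(y)
    centroids = np.stack([x[y == c].mean(axis=0) for c in classes])
    def predict(z):
        d = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        return classes[np.argmin(d, axis=1)]
    return predict
```

The three returned gaps telescope to the generalization gap by construction; the RRM bound of Equation (1) uses their non-negative parts, and the reported numbers average the noisy experiment over several trials rather than a single draw.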
In Figures 3 and 4 we give a summary of our experimental results for SSS algorithms on ImageNet. Again, the rationality and robustness gaps are bounded by small constants. Notice that adding augmentations reduces memorization, but may lead to an increase in the rationality gap. This is also demonstrated in Figure 5 where we vary the number of data augmentations systematically for one SSS algorithm (AMDIM) on CIFAR-10. Since computing the Theorem II bound for ImageNet is computationally expensive (see Appendix D.5.1), we compute it only for two algorithms, which achieve a non-vacuous generalization bound of 48%. 5 CONCLUSIONS AND OPEN QUESTIONS This work demonstrates that SSS algorithms have small generalization gaps. While our focus is on the memorization gap, our work motivates more investigation of both the robustness and rationality gaps. In particular, we are not aware of any rigorous bounds for the rationality gap of SSS algorithms, but we view our “performance on the table” theorem (Theorem 3.2) as a strong indication that it is close to zero for natural algorithms. Given our empirical studies, we believe the assumptions of small robustness and rationality conform well to practice. Our numerical bounds are still far from tight, especially for ImageNet, where evaluating the bound (more so with augmentations) is computationally expensive. Nevertheless, we find it striking that already in this initial work, we get non-vacuous (and sometimes quite good) bounds. Furthermore, the fact that the empirical RRM bound is often close to the generalization gap shows that there is significant room for improvement. Overall, this work can be viewed as additional evidence for the advantages of SSS algorithms over end-to-end supervised learning. Moreover, some (very preliminary) evidence shows that end-to-end supervised learning implicitly separates into representation learning and classification phases (Morcos et al., 2018). Understanding the extent to which supervised learning algorithms implicitly perform SSS learning is an important research direction in its own right. To the extent this holds, our work might shed light on such algorithms’ generalization performance as well. 6 ACKNOWLEDGEMENTS We thank Dimitris Kalimeris, Preetum Nakkiran, and Eran Malach for comments on early drafts of this work. This work was supported in part by NSF award CCF 1565264, IIS 1409097, DARPA grant W911NF2010021, and a Simons Investigator Fellowship. We also thank Oracle and Microsoft for grants used for computational resources. Y.B. is partially supported by MIT-IBM Watson AI Lab. Work partially performed while G.K. was an intern at Google Research. A MUTUAL INFORMATION FACTS Lemma A.1. If A, B are two Bernoulli random variables with nonzero expectation, then |E[A | B = 1] − E[A]| ≤ √(I(A;B)/2) / E[B]. Proof. A standard relation between mutual information and KL-divergence gives I(A;B) = D_KL(p_{A,B} ‖ p_A p_B). On the other hand, by the Pinsker inequality, sup_{S ⊆ {0,1}×{0,1}} |p_{A,B}(S) − p_{A×B}(S)| ≤ √(D_KL(p_{A,B} ‖ p_A p_B)/2) = √(I(A;B)/2). Thus (letting S = {(1, 1)}), |Pr[A = 1, B = 1] − Pr[A = 1] Pr[B = 1]| ≤ √(I(A;B)/2). Consequently, |E[A | B = 1] − E[A]| ≤ √(I(A;B)/2) / E[B]. Lemma A.2. For three random variables W, X, Y such that X and Y are independent, I(W; X, Y) ≥ I(W; X) + I(W; Y). Proof. Using the chain rule for mutual information we have I(W; X, Y) = I(W; X) + I(W; Y | X). Since X, Y are independent, H(Y | X) = H(Y), and since conditioning only reduces entropy, we have H(Y | W, X) ≤ H(Y | W). 
Combining the two we get I(W; Y | X) = H(Y | X) − H(Y | W, X) ≥ H(Y) − H(Y | W) = I(W; Y). Thus we have that I(W; X, Y) ≥ I(W; X) + I(W; Y). Note that by induction we can extend this argument to show that I(W; X_1, ..., X_n) ≥ Σ_i I(W; X_i) where the X_i are mutually independent. B SIMPLE CLASSIFIERS IMPLY SMALL MEMORIZATION GAP In this appendix we prove our main theoretical result (Theorem B.4). We will start by giving a formal definition of SSS algorithms and restating the definition of our complexity measures. Definition B.1 (SSS Algorithms, restated). An SSS algorithm over (X × Y)^n is a procedure T = (Tpre, Tfit) that takes as input a set (x, y) and operates as follows: 1. Tpre takes the (label-free) data points x as input and outputs a representation r : X → R for some set R; 2. On input the points {(r(x_i), y_i)}_{i=1}^n, Tfit outputs a simple classifier g : R → Y; 3. The output is a classifier f : X → Y defined as f(x) = g(r(x)) for every x ∈ X. We now restate the definitions of our complexity measures. Definition B.2 (Complexity of training procedures, restated). Let T be a training procedure taking as input (r, y) = {(r_i, y_i)}_{i=1}^n ∈ (R × Y)^n and outputting a classifier g : R → Y, and let η > 0. For every training set (r, y): • The minimum description length of T with respect to r, y, η is defined as C^mdl_{r,y,η}(T) := H(g), where g is the random variable T(r, ỹ) in the η-noisy experiment. • The prediction complexity of T with respect to r, y, η is defined as C^pc_{r,y,η}(T) := Σ_{i=1}^n I(g(r_i); ỹ_i), where g(r_i) and ỹ_i are viewed as random variables over the sample space induced by choosing ỹ according to the η-noisy experiment w.r.t. y and letting g = T(r, ỹ). • The deviation complexity of T with respect to r, y, η is defined as C^dc_{r,y,η}(T) := n · I(∆; N), where ∆ = g(r_i) − y_i (mod |Y|) and N = ỹ_i − y_i (mod |Y|) are random variables taken over both the above sample space and the choice of i ∼ [n], identifying Y with {0, ..., |Y| − 1}. The following theorem shows that C^dc is upper bounded by C^pc, which in turn is bounded by the operational entropy of g. Theorem B.3 (Relation of complexity measures). For every r, y, η > 0, and T, C^dc_{r,y,η}(T) ≤ C^pc_{r,y,η}(T) ≤ C^mdl_{r,y,η}(T), where g is the classifier output by T (considered as a random variable). Proof. Fix T, r, y, η. We get ỹ by choosing i.i.d. random variables N_1, ..., N_n, each equalling 0 with probability 1 − η and uniform otherwise, and letting ỹ_i = y_i + N_i (mod |Y|). We start by proving the second inequality, C^pc_{r,y,η}(T) ≤ H(g). Let g = T(r, ỹ) and let p = (g(r_1), ..., g(r_n)) be the vector of predictions. Then C^pc_{r,y,η}(T) = Σ_i I(p_i; ỹ_i) = Σ_i I(p_i; N_i), (2) with the last equality holding since for fixed y_i, N_i determines ỹ_i and vice versa. However, since the full vector p contains only more information than p_i, the right-hand side of (2) is at most Σ_{i=1}^n I(p; N_i) ≤ I(p; N_1, ..., N_n), using the fact that the N_i random variables are independent (see Lemma A.2). For a fixed r, the value of p is completely determined by g and hence the entropy of p is at most H(g), establishing the second inequality of the theorem. We now turn to the first inequality, C^dc_{r,y,η}(T) ≤ C^pc_{r,y,η}(T). Let ∆_i = p_i − y_i (mod |Y|). Then (1/n) C^pc_{r,y,η}(T) = E_{j∼[n]} I(p_j; N_j) = E_{j∼[n]} I(∆_j; N_j), (3) since p_i determines ∆_i and vice versa. But, since N_j = N | i = j and ∆_j = ∆ | i = j (where N, ∆ are the random variables defined in Definition B.2), the right-hand side of (3) equals E_{j∼[n]} I(∆; N | i = j) = E_{j∼[n]} [H(N | i = j) − H(N | ∆, i = j)]. (4) Since N_1, ..., N_n are identically distributed, H(N | i = j) = H(N), which means that the right-hand side of (4) equals H(N) − E_{j∼[n]} H(N | ∆, i = j) ≥ H(N) − H(N | ∆) = I(∆; N), with the inequality holding since on average conditioning reduces entropy. By definition I(∆; N) = (1/n) C^dc_{r,y,η}(T), establishing what we wanted to prove. The complexity measures C^pc and C^dc are defined with respect to a fixed train set (r, y), rendering them applicable for single training sets such as CIFAR-10 and ImageNet that arise in practice. If D is a distribution over (r, y), then we define the complexity measures C^pc and C^dc with respect to D as the average of the corresponding measure with respect to (r, y) ∼ D. We now restate Theorem II: Theorem B.4 (Theorem II, restated). Let T = (Tpre, Tfit) be a training procedure obtained by first training Tpre on x ∈ X^n to obtain a representation r : X → R and then training Tfit on (r(x), y), where y ∈ Y^n, to obtain a classifier g : R → Y. Then, for every noise parameter η > 0 and distribution Dtrain over (X, Y)^n, Memorization gap(T) = Train_{Dtrain,T}(η) − NTrain_{Dtrain,T}(η) ≤ √(C^dc_{D_r,η}(Tfit) / (2n)) · (1/η), where D_r is the distribution over (R × Y)^n induced by Tpre on Dtrain. Note that the bound on the right-hand side is expressed only in terms of the complexity of the second stage Tfit and is independent of the complexity of Tpre. The crux of the proof is showing (close to) independence between the corrupted indices and the prediction deviation of g resulting from the noise. Proof. Let (r, y) be sampled by first drawing (x, y) ∼ Dtrain over (X × Y)^n and then applying r = r(x) where r = Tpre(x). Consider the sample space of sampling ỹ according to the η-noisy distribution with respect to y, computing g = Tfit(r, ỹ), and sampling i ∼ [n]. We define the following two Bernoulli random variables over this sample space: Z = 1_{∆=0} (i.e., Z = 1 if g(r_i) = y_i and 0 otherwise), and B = 1_{N≠0} (i.e., B = 1 if ỹ_i ≠ y_i and 0 otherwise). For a given r, y, since Z is determined by ∆ and B is determined by N, I(Z; B) ≤ I(∆; N) = C^dc_{r,y,η}(Tfit)/n. By Lemma A.1, for every two Bernoulli random variables B, Z, |E[Z] − E[Z | B = 1]| ≤ √(I(Z; B)/2) / E[B], and hence in our case (since E[B] = η), E[Z] − E[Z | B = 1] ≤ √(C^dc_{r,y,η}(Tfit) / (2n)) · (1/η). But E[Z] corresponds to the probability that g(r) = y for (r, y) in the train set, while E[Z | B = 1] corresponds to this probability over the noisy samples. Hence the memorization gap is bounded by E_{(r,y)∼D_r}[√(C^dc_{r,y,η}(Tfit) / (2n)) · (1/η)] ≤ (1/η) √(E_{(r,y)∼D_r}[C^dc_{r,y,η}(Tfit) / (2n)]) = √(C^dc_{D_r,η}(Tfit) / (2n)) · (1/η), using Jensen's inequality and the concavity of the square root for the first inequality. C POSITIVE RATIONALITY GAP LEAVES ROOM FOR IMPROVEMENT In this appendix, we prove the “performance on the table theorem”, which states that we can always transform a robust training procedure with a positive rationality gap into a training procedure with better performance: Theorem C.1 (Performance on the table theorem, restated). For every training procedure T and Dtest, n, η, if Dtrain = D^n_test there exists a training procedure S such that Test_{S,D,n} ≥ NTrain_{T,D,n}(η) − o(1), (5) where o(1) is a term that vanishes with n, and under the assumption that Train_{T,D,n}(η) ≥ NTrain_{T,D,n}(η). For any reasonable training procedure T, performance on noisy train samples will not be better than the overall train accuracy, and hence the assumption will be satisfied. 
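Before turning to the proof, it may help to see how the deviation complexity C^dc of Definition B.2 and the resulting Theorem II bound can be estimated by Monte Carlo over repeated η-noisy experiments. The sketch below is an illustrative reconstruction rather than the authors' code: the names (`estimate_cdc_bound`, `fit`) are hypothetical, the plug-in mutual information is computed in nats from empirical counts, and the number of trials is arbitrary.

```python
import numpy as np
from collections import Counter

def empirical_mi(a, b):
    """Plug-in estimate (in nats) of the mutual information between two discrete samples."""
    a, b = list(map(int, a)), list(map(int, b))
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = sum(
        (c / n) * np.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
        for (x, y), c in pab.items()
    )
    return max(float(mi), 0.0)

def estimate_cdc_bound(fit, reps, y, num_classes, eta=0.05, trials=20, seed=0):
    """Monte-Carlo estimate of C^dc = n * I(Delta; N) and of the Theorem II bound
    sqrt(C^dc / (2n)) / eta on the memorization gap of the simple classifier."""
    rng = np.random.default_rng(seed)
    n = len(y)
    deltas, noises = [], []
    for _ in range(trials):
        # N_i = 0 with probability 1 - eta, uniform in {0, ..., k-1} otherwise.
        noise = np.where(rng.random(n) < eta, rng.integers(0, num_classes, size=n), 0)
        y_tilde = (y + noise) % num_classes
        predict = fit(reps, y_tilde)                        # simple classifier on noisy labels
        deltas.append((predict(reps) - y) % num_classes)    # Delta_i = g(r_i) - y_i (mod k)
        noises.append(noise)
    c_dc = n * empirical_mi(np.concatenate(deltas), np.concatenate(noises))
    bound = np.sqrt(c_dc / (2 * n)) / eta
    return c_dc, bound
```

Here `fit` maps (representations, labels) to a prediction function; convergence of such estimates with the number of trials is discussed in Appendix D.5.1.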
In particular (since we can always add noise to our data), the above means that we can obtain a procedure S′ whose clean test performance is at least Test_T + ∆, where ∆ = NTrain_T(η) − Test_T is the rationality gap of T. Hence if the rationality gap is larger than the robustness gap, we can use the above to improve the test performance of “irrational” networks. (Note that the robustness gap of almost all standard training procedures is at most η and in fact often much smaller.) We stress that the procedure of Theorem 3.2, while running in “polynomial time”, is not particularly practical, since it makes inference as computationally expensive as training. However, it is a proof of concept that irrational networks are, to some extent, “leaving performance on the table”. Proof. Let T be a procedure as above. Our algorithm S would be the following: • Training: The algorithm will not do any training but on input labels D = {(x_i, ỹ_i)} simply stores these labels. • Inference: On input a data point x, Algorithm S will choose i ∈ [n] at random, and run T on the data D replacing the i-th sample with (x, ỹ) where ỹ is chosen uniformly at random. The output is f(x) where f is the classifier output by T. First note that while the number of noisy samples could change by one by replacing (x_i, y_i) with (x, ỹ), since this number is distributed according to the Binomial distribution with mean ηn and standard deviation √((1 − η)ηn), this change can affect probabilities by at most an o(1) additive factor. If Y has k classes, then with probability 1 − 1/k we will make (x, ỹ) noisy (y ≠ ỹ), in which case the expected performance on it will be NTrain_T(η). With probability 1/k, we choose the correct label y, in which case performance on this sample will be equal to the expected performance on clean samples, which by our assumptions is at least NTrain_T(η) as well. D EXPERIMENTAL DETAILS We perform an empirical study of the RRM bound for a wide variety of self-supervised training methods on the ImageNet (Deng et al., 2009) and CIFAR-10 (Krizhevsky et al., 2009) training datasets. We provide a brief description of all the self-supervised training methods that appear in our results below. For each method, we use the official pre-trained models on ImageNet wherever available. Since very few methods provide pre-trained models for CIFAR-10, we train models from scratch. The architectures and other training hyper-parameters are summarized in Table H.4 and Table H.3. Since our primary aim is to study the RRM bound, we do not optimize for reaching the state-of-the-art performance in our re-implementations. For the second phase of training, we use L2-regularized linear regression, or small non-interpolating multi-layer perceptrons (MLPs). D.1 SELF-SUPERVISED TRAINING METHODS (TPRE) There are a variety of self-supervised training methods for learning representations without explicit labels. The two chief classes of self-supervised learning methods are: 1. Contrastive learning: These methods seek to find an embedding of the dataset that pushes a positive pair of images close together and a pair of negative images far from each other. For example, two different augmented versions of the same image may be considered a positive pair, while two different images may be considered a negative pair. Different methods such as Instance Discrimination, MoCo, SimCLR, and AMDIM differ in the way they select the positive/negative pairs, as well as in other details like the use of a memory bank or the encoder architecture. 
(See Falcon & Cho (2020) for a detailed comparison of these methods.) 2. Handcrafted pretext tasks: These methods learn a representation by designing a fairly general supervised task, and utilizing the penultimate or other intermediate layers of this network as the representation. Pretext tasks include a variety of methods such as predicting the rotation angle of an input image (Gidaris et al., 2018), solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), denoising images (Vincent et al., 2008) or image inpainting (Pathak et al., 2016). Additionally, adversarial image generation can be used by augmenting the image generator with an encoder (Donahue & Simonyan, 2019). We focus primarily on contrastive learning methods since they achieve state-of-the-art performance. We now describe these methods briefly. Instance Discrimination: (Wu et al., 2018) In essence, Instance Discrimination performs supervised learning with each training sample as a separate class. They minimize the non-parametric softmax loss given below for the training dataset: J(θ) = − Σ_{i=1}^n log( exp(v_i^⊤ v / τ) / Σ_{j=1}^n exp(v_j^⊤ v / τ) ), (6) where v_i = f_θ(x_i) is the feature vector for the i-th example. They use memory banks and a contrastive loss (also known as Noise Contrastive Estimation or NCE (Gutmann & Hyvärinen, 2010)) for computing this loss efficiently for large datasets. So in this case, a positive pair is an image and itself, while a negative pair is two different training images. Momentum Contrastive (MoCo): (He et al., 2020) MoCo replaces the memory bank in Instance Discrimination with a momentum-based query encoder. MoCoV2 (Chen et al., 2020c) uses various modifications from SimCLR, like a projection head, and combines it with the MoCo framework for improved performance. AMDIM: (Bachman et al., 2019) AMDIM uses two augmented versions of the same image. For these augmentations, they use random resized crops, random jitters in color space, random horizontal flips and random conversion to grayscale. They apply the NCE loss across multiple scales, by using features from multiple layers. They use a modified ResNet, changing the receptive fields to decrease overlap between positive pairs. CMC: (Tian et al., 2019) CMC creates two views for contrastive learning by converting each image into the Lab color space. L and ab channels from the same image are considered to be a positive pair, while those from two different images are considered to be a negative pair. PiRL: (Misra & Maaten, 2020) PiRL first creates a jigsaw transformation of an image (it divides an image into 9 patches and shuffles these patches). It treats an image and its jigsaw as a positive pair, and that of a different image as a negative pair. They additionally modify the encoder on the jigsaw branch. SimCLRv1 and SimCLRv2: (Chen et al., 2020a;b) SimCLR also uses strong augmentations to create positive and negative pairs. They use random resized crops, random Gaussian blur and random jitters in color space. Crucially, they use a projection head that maps the representations to a 128-dimensional space where they apply the contrastive loss. They do not use a memory bank, but use a large batch size. InfoMin: InfoMin uses random resized crop, color jitter and Gaussian blur, as well as jigsaw shuffling from PiRL. D.2 SIMPLE CLASSIFIER (TFIT) After training the representation learning method, we extract representations r for the training and test images. We do not add random augmentations to the training images (unless stated otherwise). 
Then, we train a simple classifier on the dataset {r(xi), yi}ni=1. We use a linear classifier in most cases, but we also try a small multi-layer perceptron (as long as it has few parameters and does not interpolate the training data). We add weight decay in some methods to achieve good test accuracy (See Table H.4 for values for each method.) For the noisy experiment, we set the noise level to η = 5%. To compute the complexity bound Cdc we run 20 trials of the noisy experiment for CIFAR10 and 50 trials for ImageNet. D.3 EXPERIMENTAL DETAILS FOR EACH PLOT Figure 1. This figure shows the robustness, rationality and memorization gap for various SSS algorithms trained on CIFAR-10. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.3. For the second phase Tfit, we use L2regularized linear regression for all the methods. For each algorithm listed in Table H.3, the figure contains 2 points, one without augmentations, and one with augmentations. Further, we compute the complexity measure Cdc for all the methods. All the values (along with the test accuracy) are listed in Table H.1. Figure 2. This figure shows the robustness, rationality and memorization for CIFAR-10 for all the same methods as in Figure 1. We only include the points without augmentation to show how rationality behaves when (Dtrain,Dtest) are identical. All the values (along with the test accuracy) are listed in Table H.1. For the supervised architectures, we train a Myrtle-5 (Page, 2018) convolutional network, a ResNet-18 (He et al., 2016) and a WideResNet-28-10 (Zagoruyko & Komodakis, 2016) with standard training hyperparameters. Figure 3 and Figure 4. These figures show the robustness, rationality and memorization for the ImageNet dataset. The type of self-supervised method, the encoder architecture, as well as the training hyperparameters are described in Table H.4. For the second phase Tfit, we use L2-regularized linear regression for all the methods. The figures also contain some points with 10 augmentations per training image. Further, we compute the complexity measure Cdc for all three methods - SimCLRv2 with architectures ResNet-50-1x and ResNet-101-2x. All the values (along with the test accuracy) are listed in Table H.2. Figure 5 This figure shows the effect of increasing augmentations. We add t = {2, ..., 10} augmentations and re-train the simple classifier. We do this for the CIFAR-10 dataset, AMDIM selfsupervised training with the AMDIM encoder and linear regression (See Table H.3 for the hyperparameters). D.4 ADDITIONAL RESULTS D.4.1 GENERALIZATION ERROR OF SSS ALGORITHMS To show that SSS algorithms have qualitatively different generalization behavior compared to standard end-to-end supervised methods, we repeat the experiment from Zhang et al. (2017). We randomize all the training labels in the CIFAR-10 dataset and train 3 high-performing SSS methods on these noisy labels. For results see Table D.1. Unlike fully supervised methods, SSS algorithms do not achieve 100% training accuracy on the dataset with noisy labels. In fact, their training accuracies are fairly low (≈ 15-25%). This suggests that the empirical Rademacher complexity is bounded. The algorithms were trained without any augmentations during the simple fitting phase for both SSS and supervised algorithms. The SSS methods were trained using parameters described in Table H.3. 
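As a concrete, simplified stand-in for the evaluation phase described above, the L2-regularized linear classifier can be fit in closed form on the frozen representations. This sketch is only one way to realize “L2-regularized linear regression”: the function name, the default weight decay, and the one-hot ridge formulation are our illustrative choices, not the exact settings of Tables H.3 and H.4.

```python
import numpy as np

def fit_linear_probe(reps, labels, num_classes, weight_decay=1e-6):
    """Ridge regression of one-hot labels onto frozen representations (the T_fit phase).
    Returns a predict(reps) -> labels function."""
    n, d = reps.shape
    Y = np.eye(num_classes)[labels]                    # one-hot targets, shape (n, k)
    X = np.hstack([reps, np.ones((n, 1))])             # append a bias column
    reg = weight_decay * n * np.eye(d + 1)
    reg[-1, -1] = 0.0                                  # do not penalize the bias term
    W = np.linalg.solve(X.T @ X + reg, X.T @ Y)        # closed-form ridge solution

    def predict(r):
        Xr = np.hstack([r, np.ones((len(r), 1))])
        return np.argmax(Xr @ W, axis=1)

    return predict
```

For the small MLP evaluators, the same interface applies with the closed-form solve replaced by gradient-based training that stops before interpolating the data.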
D.5 RRM BOUND WITH VARYING NOISE PARAMETER We now investigate the effect of varying noise levels on the three gaps as well as on the complexity. We see that the robustness gap increases as we add more noise—this is expected as noise should affect the clean training accuracy. We also observe that the memorization gap decreases, suggesting that Cdcη as a function of η goes down faster than η 2 (see Appendix B). The Theorem II bound on memorization gap also decays strongly with the η, becoming more tight as the noise increases. D.5.1 CONVERGENCE OF COMPLEXITY MEASURES We now plot the complexity measures Cdc and Cpc with increasing number of trials for one of the SSS algorithms. As expected, Cdc < Cpc and Cdc converges in about 20 trials for CIFAR-10. On the other hand, the complexity computations for ImageNet need many more trials for convergence, since it contains about 10 augmentations×1.2 million training samples making it cost prohibitive to compute for all the methods. For the CIFAR-10, we use AMDIM with the AMDIM encoder architecture without augmentations. For ImageNet, we use SimCLRv2 with the ResNet-101 architecture with 10 augmentations per training sample. E EXAMPLES OF ALGORITHMS WITH LARGE GAPS While we argued that SSS algorithms will tend to have small robustness, rationality, and memorization gaps, this does not hold in the worst case and there are examples of such algorithms that exhibit large gaps in each of those cases. E.1 LARGE ROBUSTNESS GAP Large robustness gap can only arise via computational (as opposed to statistical) considerations. That is, if a training procedure outputs a classifier f ∈ F that achieves on average accuracy α on a clean train set (X,Y ), then with high probability, if (X, Ỹ ) is an η-noisy train set then there exists f ∈ F that achieves α(1− η) accuracy on this train set (by fitting only the “clean” points). However, the training algorithm might not always be able to find such a classifier. For example, if the distribution has the form (x, y) = (x, ∑ ajxj mod 2) where x ∼ GF (2)` = Z`2 and a ∈ GF (2)` is some hidden vector, then there is an efficient algorithm (namely Gaussian elimination) to find a given the samples (x, y) and hence get accuracy 1. However, for every ε > 0 and η > 0, there is no known efficient algorithm that, given a 1− η perturbed equations of the form {〈a, xi〉 = ỹi}i∈[n] finds a′ ∈ GF (2)` such that ∑ a′jxj = ∑ ajxj mod 2 on a 1/2 + ε fraction of the x’s. This is known as the learning parity with noise (LPN) problem (Blum et al., 1993). The assumption of robustness is necessary for a small generalization gap, in the sense that we can come up with (contrived) examples of algorithms that have small rationality and memorization gaps while still having large generalization gap. For example, consider an algorithm T that has large generalization gap (high train accuracy and small test accuracy) , and suppose we augment to the following algorithm T ′(x,y) = { T (x,y) if y is “clean” 0 if y is “noisy” where 0 denotes the constant zero function (e.g., some trivial classifier) and we use some algorithm to estimate whether or not the labels are noisy. (Such estimates can often be achieved in many natural cases.) The algorithm T ′ will inherit the generalization gap of T , since that depends only on the experiment without noise. Since performance on noisy and clean training samples will be the same (close to random), will have zero memorization gap. Since we have assumed small test accuracy, it will have zero rationality gap also. 
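The parity example in E.1 above is easy to make concrete: with noiseless samples, Gaussian elimination over GF(2) recovers the hidden vector a exactly, whereas with η-noisy labels no efficient algorithm is known (the LPN problem). The sketch below is a minimal illustration of the noiseless case; the function name and interface are ours.

```python
import numpy as np

def solve_parity_gf2(X, y):
    """Recover a with <a, x_i> = y_i (mod 2) from noiseless samples by Gaussian
    elimination over GF(2); returns None if the system does not pin down a."""
    X = (np.asarray(X) % 2).astype(np.uint8)
    y = (np.asarray(y) % 2).astype(np.uint8)
    n, d = X.shape
    A = np.hstack([X, y[:, None]])              # augmented matrix over GF(2)
    row, pivots = 0, []
    for col in range(d):
        pivot = next((r for r in range(row, n) if A[r, col]), None)
        if pivot is None:
            return None                          # this coordinate is not determined
        A[[row, pivot]] = A[[pivot, row]]        # move the pivot row up
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # XOR-eliminate the column
        pivots.append(col)
        row += 1
        if row == n:
            break
    if len(pivots) < d:
        return None
    a = np.zeros(d, dtype=np.uint8)
    a[pivots] = A[:d, -1]
    return a

# Example: a is recovered exactly from clean samples. Flipping a small fraction of
# the y_i (the LPN setting) breaks this approach, and no efficient substitute is known.
rng = np.random.default_rng(0)
a_true = rng.integers(0, 2, size=16, dtype=np.uint8)
X = rng.integers(0, 2, size=(200, 16), dtype=np.uint8)
y = (X @ a_true) % 2
assert np.array_equal(solve_parity_gf2(X, y), a_true)
```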
E.2 LARGE RATIONALITY GAP As discussed in Section C, in the case that Dtrain = D^n_test, a robust algorithm with a large rationality gap leaves “performance on the table”. We can obtain such algorithms by artificially dropping performance on the test data. For example, in the SSS framework, since the representation r is over-parameterized and can memorize the entire train set, we can consider the trivial representation with r(x) = x if x is in the train set and r(x) = 0 otherwise. If we now train some simple classifier on r(x) then it can have non-trivial performance on the noisy train samples, while getting trivial accuracy on all samples outside the train set. In cases where Dtrain and Dtest are different (for example when Dtrain is an augmented version of Dtest), we can no longer claim that a large rationality gap corresponds to “leaving performance on the table”. For example, we do observe (mild) growth in the rationality gap as we add more augmented points to the training set. E.3 LARGE MEMORIZATION GAP It is not hard to find examples of networks with a large memorization gap. Indeed, as mentioned before, any standard interpolating supervised learning algorithm will get a memorization gap close to 1. F ROBUSTNESS OF LEAST SQUARES CLASSIFIERS One can prove robustness for classes of algorithms under varying assumptions. As a simple example, we record here a self-contained observation of how margin leads to robustness in least squares minimization. (We believe that this bound is folklore, but we weren’t able to find the right reference.) This is a very simple but also pessimistic bound, and much better ones often hold. Lemma F.1. Let x_1, ..., x_n ∈ R^d and y_1, ..., y_n ∈ [k], and consider a linear function f : R^d → R^k that minimizes the quantity Σ_{i∈[n], j∈[k]} |f(x_i)_j − 1_{y_i=j}|², and suppose that for a p fraction of the i’s, the maximum over j ∈ [k] of f(x_i) is γ larger than the second-largest value. Then in expectation, if we let ỹ be the η-noisy version of y and f̃ minimizes Σ_{i∈[n], j∈[k]} |f̃(x_i)_j − 1_{ỹ_i=j}|², we get that argmax_j f̃(x_i) = y_i for at least a p − 4η/γ² fraction of the i’s. Proof. We identify y with its “one hot” encoding as a vector in R^{nk}. Let V ⊆ R^{nk} be the subspace of all vectors of the form (g(x_1), ..., g(x_n)) for linear g : R^d → R^k. If f is the minimizer in the theorem statement, and p = (f(x_1), ..., f(x_n)), then p = Π_V y where Π_V is the orthogonal projection to the subspace V. If f̃ is the minimizer for the noisy labels and p̃ = (f̃(x_1), ..., f̃(x_n)), then p̃ = Π_V ỹ = Π_V (y + e) where e is the noise vector ỹ − y. Hence ‖p − p̃‖ = ‖Π_V e‖ ≤ ‖e‖. But in expectation ‖e‖² ≤ 2ηn (since we flip a label with probability ≤ η). For every point i for which the margin was at least γ in p, if p̃’s prediction is different in i, then the contribution of the i-th block to their squared norm difference is at least γ²/2 (by shifting the maximum coordinate by −γ/2 and the second-largest one by γ/2). Hence at most 4ηn/γ² of these points could have different predictions in p and p̃. G ROBUSTNESS OF EMPIRICAL RISK MINIMIZER The (potentially inefficient) algorithm that minimizes the classification errors is always robust. Lemma G.1. Let T(x, y) = argmin_{f∈F} Σ_{i=1}^n 1_{f(x_i)≠y_i}. Then for every η > 0, Robustness gap(T) ≤ 2η. Proof. Let x, y be any train set, let α = min_{g∈F} (1/n) Σ_{i=1}^n 1_{g(x_i)≠y_i}, and let f be the minimizer of this quantity. Let ỹ be the η-noisy version of y and let η̃ be the fraction of i on which y_i ≠ ỹ_i. Then, (1/n) Σ_{i=1}^n 1_{f(x_i)≠ỹ_i} ≤ α + η̃ . 
(7) Hence if f̃ is the minimizer of (7) then we know that f̃(xi) 6= ỹi for at most α + η̃ fraction of the i’s, and so f̃(xi) 6= yi for at most α + 2η̃ fraction of the i’s. Since the train accuracy of T is 1− α and in expectation of η̃ is η, we get that in expectation TrainT (η) ≥ TrainT − 2η H LARGE TABLES Table H.1 – Summary of all the methods, architectures and the corresponding results (gaps and accuracies) on CIFAR-10, sorted by generalization gap. While Figure 1 already plots this data, here we also provide the test performance of the corresponding models. Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc mocov2 resnet18 True -7.35 0.07 0.21 0.00 3.47 0.28 67.19 mocov2 wide resnet50 2 True -6.37 0.18 1.03 0.00 7.63 1.21 70.99 mocov2 resnet101 True -6.01 0.15 0.71 0.00 6.38 0.86 68.58 mocov2 resnet50 True -5.38 0.19 0.84 0.00 6.99 1.03 69.68 simclr resnet50 True -2.89 0.30 0.55 0.00 6.63 0.85 91.96 amdim resnet101 True -0.91 0.64 3.70 0.00 25.99 4.34 63.56 amdim resnet18 True 0.33 0.23 1.15 0.00 8.66 1.38 62.84 mocov2 resnet18 False 1.43 0.15 1.24 0.03 14.14 1.43 67.60 simclr resnet18 False 1.43 0.28 0.79 0.36 13.35 1.43 82.50 amdim wide resnet50 2 True 1.60 0.69 2.46 0.00 19.20 3.15 64.38 simclr resnet50 False 1.97 0.22 0.78 0.97 15.75 1.97 92.00 simclr resnet50 False 2.24 0.52 1.71 0.01 19.53 2.24 84.94 mocov2 resnet50 False 2.72 0.30 2.96 0.00 24.18 3.26 70.09 mocov2 resnet101 False 2.82 0.33 3.03 0.00 22.78 3.36 69.08 mocov2 wide resnet50 2 False 3.11 0.38 2.79 0.00 22.39 3.18 70.84 amdim resnet50 bn True 3.69 0.84 4.22 0.00 31.12 5.06 66.44 amdim resnet18 False 4.34 0.42 4.58 0.00 33.47 5.00 62.28 amdim amdim encoder True 4.43 0.68 0.36 3.39 10.32 4.43 87.33 amdim amdim encoder False 6.68 2.08 5.69 0.00 70.52 7.77 87.38 amdim resnet101 False 12.46 1.22 14.26 0.00 100.00 15.49 62.43 amdim wide resnet50 2 False 13.07 1.70 15.33 0.00 100.00 17.03 63.80 amdim resnet50 bn False 14.73 1.81 16.63 0.00 100.00 18.43 66.28 Table H.2 – Summary of all the methods, architectures their corresponding results (gaps and accuracies) on ImageNet, sorted by generalization gap. While Figure 4 already plots this data, here we also provide the test performance of the corresponding models. 
Method Backbone DataAug Generalization Gap Robustness Memorization Rationality Theorem II bound RRM bound Test Acc simclrv2 r50 1x sk0 True -2.34 0.26 0.68 0.00 46.93 0.94 70.96 simclrv2 r101 2x sk0 True 0.63 0.10 0.80 0.00 47.90 0.91 77.24 simclrv2 r152 2x sk0 True 1.00 0.13 0.77 0.10 NA 1.00 77.65 moco ResNet-50 True 1.32 0.57 0.93 0.00 NA 1.49 70.15 InfoMin ResNet-50 True 4.88 0.81 1.01 3.06 NA 4.88 72.29 PiRL ResNet-50 True 6.23 0.29 0.99 4.95 NA 6.23 60.56 InsDis ResNet-50 True 6.85 0.25 1.13 5.46 NA 6.85 58.30 simclrv2 r101 1x sk1 False 8.23 0.71 4.66 2.86 NA 8.23 76.07 InfoMin ResNet-50 False 10.21 2.34 8.96 0.00 NA 11.31 70.31 simclrv2 r152 1x sk0 False 10.32 1.12 6.93 2.26 NA 10.32 74.17 simclrv2 r101 1x sk0 False 10.53 1.11 6.99 2.42 NA 10.53 73.04 simclrv2 r50 1x sk0 False 10.62 0.99 7.31 2.31 NA 10.62 70.69 moco ResNet-50 False 10.72 1.82 7.86 1.04 NA 10.72 68.39 simclrv2 r152 2x sk0 False 10.92 0.75 7.45 2.72 NA 10.92 77.25 simclrv2 r101 2x sk0 False 11.02 0.74 7.51 2.78 NA 11.02 76.72 simclr ResNet50 1x False 11.07 1.22 7.73 2.13 NA 11.07 68.73 simclrv2 ResNet-50 False 11.16 0.64 7.67 2.85 NA 11.16 74.99 PiRL ResNet-50 False 11.43 1.49 8.26 1.68 NA 11.43 59.11 InsDis ResNet-50 False 12.02 1.40 8.52 2.10 NA 12.02 56.67 amdim ResNet-50 False 13.62 0.90 9.72 3.01 NA 13.62 67.69 CMC ResNet-50 False 14.73 2.30 12.30 0.13 NA 14.73 54.60 bigbigan ResNet-50 False 29.60 3.13 25.19 1.27 NA 29.60 50.24 Table H.3 – Summary of training methods with their hyper-parameters for CIFAR-10 Selfsupervised method Backbone Architectures Self-supervised Training Evaluation Simple Phase Optimization AMDIM AMDIM Encoder PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-18 ResNet-50 WideResNet-50 ResNet 101 MoCoV2 ResNet-18 PLB Default parameters Linear Adam β1 = 0.8 β2 = 0.999 Constant LR = 2e-4 Batchsize = 500 Weight decay = 1e-6 ResNet-50 WideResNet-50 ResNet 101 SimCLR ResNet-18 Batchsize = 128 Epochs 200 Linear SGD Momentum = 0.9 Constant LR = 0.1 Weight decay 1e-6 ResNet-50 ResNet-50 Batchsize = 512Epochs 600 Table H.4 – Summary of training methods with their hyper-parameters for ImageNet Self-supervised method Backbone Architecture Pre-trained Model Evaluation Optimization Weight Decay Epochs Instance Discrimination ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 MoCo ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 PiRL ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 CMC ResNet-50 PyContrast Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {30} by factor 0.2 0 40 AMDIM AMDIM Encoder Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-3 40 BigBiGAN ResNet-50 Official Linear SGD Momentum = 0.9 Initial LR = 30 LR drop at {15, 25} by factor 0.2 1e-5 40 SimCLRv1 ResNet-50 1x Official Linear SGDMomentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-50 4x SimCLRv2 ResNet-50 1x SK0 Official Linear SGD Momentum = 0.9 Constant LR = 0.1 1e-6 40ResNet-101 2x SK0ResNet-152 2x SK0 ResNet-152 3x SK0
1. What are the concerns regarding the paper's claims about bounding the generalization gap? 2. How does the reviewer assess the robustness gap and rationality gap discussed in the paper? 3. Is the proposed method effective in indicating the generalization of the algorithm? 4. What are the issues with the bound proposed by the authors? 5. Do the authors provide sufficient theoretical discussion and analysis of the three quantities they introduce? 6. How does the reviewer evaluate the overall quality and contribution of the paper?
Review
Review The authors propose to upper bound the generalization gap via three quantities, namely the robustness gap, rationality gap and memorization gap, show that the memorization gap can be bounded via standard learning theory arguments, and empirically show that all three terms are small. The authors also argue that if the rationality gap is large, then the performance can be improved. First of all, I think this paper is highly over-claimed. I don’t see how the proposed methods provably indicate generalization. In fact, there is no theoretical conclusion on bounding the rationality gap and little theoretical discussion of the robustness gap. Instead, the authors only show empirical estimates of the robustness gap and rationality gap. I would like to say that such inaccurate claims make me uncomfortable. I would like to argue that we cannot know the exact rationality gap, as we don’t have the data distribution at any time; thus we need a generalization bound to describe the performance of the algorithm on unseen data. How do the authors deal with the rationality gap? I don’t feel empirical estimation on the ‘test set’ is an acceptable choice, as the ‘test set’ is only a finite sample from the real data distribution. The bound proposed by the authors is not a generalization bound, thus it is meaningless to discuss whether the bound is vacuous or not. Moreover, as we don’t know the exact rationality gap, the claim of Theorem 3.1 is also not meaningful. In other words, even if we find the rationality gap is large when evaluating on test data, what we are really doing is tuning the model using the test data, not improving the performance of the model on the data distribution. The authors argue in the abstract that the bound is independent of the complexity of the representation. However, several properties of the representation, e.g. the dimension, will definitely influence the generalization bound. I don’t feel this argument is well-supported. Overall, the decomposition itself may motivate new ideas for improving the current algorithms. However, theoretically, I don’t think this is a rigorous paper concerning the generalization bound. If the authors want to argue that the decomposition gives some insight into improving algorithms, the authors should focus more on the intuition, algorithm design and empirical justification. If the authors want to argue that the decomposition indicates a tight generalization bound, then the authors should give rigorous proofs bounding all three terms and calculate the bound based on the theoretical prediction instead of empirical simulation. There may be some misunderstanding of some of the points in the paper, but overall, with the current presentation, I think this paper is not ready for acceptance.
ICLR
Title Spatio-temporal point processes with deep non-stationary kernels Abstract Point process data are becoming ubiquitous in modern applications, such as social networks, health care, and finance. Despite the powerful expressiveness of the popular recurrent neural network (RNN) models for point process data, they may not successfully capture sophisticated non-stationary dependencies in the data due to their recurrent structures. Another popular type of deep model for point process data is based on representing the influence kernel (rather than the intensity function) by neural networks. We take the latter approach and develop a new deep non-stationary influence kernel that can model non-stationary spatio-temporal point processes. The main idea is to approximate the influence kernel with a novel and general low-rank decomposition, enabling efficient representation through deep neural networks and computational efficiency and better performance. We also take a new approach to maintain the non-negativity constraint of the conditional intensity by introducing a log-barrier penalty. We demonstrate our proposed method’s good performance and computational efficiency compared with the state-of-the-art on simulated and real data. 1 INTRODUCTION Point process data, consisting of sequential events with timestamps and associated information such as location or category, are ubiquitous in modern scientific fields and real-world applications. The distribution of events is of great scientific and practical interest, both for predicting new events and understanding the events’ generative dynamics (Reinhart, 2018). To model such discrete events in continuous time and space, spatio-temporal point processes (STPPs) are widely used in a diverse range of domains, including modeling earthquakes (Ogata, 1988; 1998), the spread of infectious diseases (Schoenberg et al., 2019; Dong et al., 2021), and wildfire propagation (Hering et al., 2009). A modeling challenge is to accurately capture the underlying generative model of event occurrence in general spatio-temporal point processes (STPP) while maintaining the model efficiency. Specific parametric forms of conditional intensity are proposed in seminal works of Hawkes process (Hawkes, 1971; Ogata, 1988) to tackle the issue of computational complexity in STPPs, which requires evaluating the complex multivariate integral in the likelihood function. They use an exponentially decaying influence kernel to measure the influence of a past event over time and assume the influence of all past events is positive and linearly additive. Despite computational simplicity (since the integral of the likelihood function is avoided), such a parametric form limits the model’s practicality in modern applications. Recent models use neural networks in modeling point processes to capture complicated event occurrences. RNN (Du et al., 2016) and LSTM (Mei and Eisner, 2017) have been used by taking advantage of their representation power and capability in capturing event temporal dependencies. However, the recurrent structures of RNN-based models cannot capture long-range dependency (Bengio et al., 1994) and attention-based structure (Zhang et al., 2020; Zuo et al., 2020) is introduced to address such limitations of RNN. Despite much development, existing models still cannot sufficiently capture spatio-temporal non-stationarity, which are common in real-world data (Graham et al., 2013; Dong et al., 2021). 
Moreover, while RNN-type models may produce strong prediction performance, the models consist of general forms of network layers and the modeling power relies on the hidden states, so they are often not easily interpretable. A promising approach to overcome the above model restrictions is point process models that combine statistical models with neural network representation, such as Zhu et al. (2022) and Chen et al. (2020), to enjoy both the interpretability and expressive power of neural networks. In particular, the idea is to represent the (possibly non-stationary) influence kernel based on a spectral decomposition and represent the basis functions using neural networks. However, the prior work (Zhu et al., 2022) is not specifically designed for non-stationary kernels and the low-rank representation can be made significantly more efficient, which is the main focus of this paper. Contribution. In this paper, we develop a non-stationary kernel (referred to as DNSK) for (possibly non-stationary) spatio-temporal processes that enjoys an efficient low-rank representation, which leads to much improved computational efficiency and predictive performance. The construction is based on an interesting observation that by reparameterizing the influence kernel from the original form k(t′, t) (where t′ is the historical event time and t is the current time) to an equivalent form k(t′, t − t′) (which is thus parameterized by the displacement t − t′ instead), the rank can be reduced significantly, as shown in Figure 1. This observation inspired us to design a much more efficient representation of non-stationary point processes with far fewer basis functions representing the same kernel. In summary, the contributions of our paper include • We introduce an efficient low-rank representation of the influence kernel based on a novel “displacement” re-parameterization. Our representation can approximate a large class of general non-stationary influence kernels well and is generalizable to spatio-temporal kernels (also potentially to data with high-dimensional marks). The efficient representation leads to lower computational cost and better predictive power, as demonstrated in our experiments. • In model fitting, we introduce a log-barrier penalty term in the objective function to ensure a non-negative conditional intensity function, so that the model is statistically meaningful and the problem is numerically stable. This approach also enables the model to learn general influence functions (that can have negative values), which is a drastic improvement over existing influence kernel-based methods that require the kernel functions to be non-negative. • Using extensive synthetic and real data experiments, we show the competitive performance of our proposed methods in both model recovery and event prediction compared with the state-of-the-art, such as the RNN-based and transformer-based models. 1.1 RELATED WORKS The original work of A. Hawkes (Hawkes, 1971) provides classic self-exciting point processes for temporal events, which express the conditional intensity function with an influence kernel and a base rate. Ogata (1998) proposes a parametric form of spatio-temporal influence kernel which enjoys strong model interpretability and efficiency. However, such simple parametric forms have limited expressiveness in characterizing the complex dynamics of events in modern applications (Zhu et al., 2021; Liao et al., 2022). 
Neural networks have been widely adopted in point processes (Xiao et al., 2017; Chen et al., 2020; Zhu et al., 2021). Du et al. (2016) incorporates recurrent neural networks and Mei and Eisner (2017) use a continuous-time variant of LSTM to model event influence with exponential decay over time. These RNN-based models may be unable to capture complicated event dependencies due to the recurrent structure. Zhang et al. (2020); Zuo et al. (2020) introduce self-attentive structures into point processes for their capability to memorize long-term influence by dealing with an event sequence as a whole. The main limitation is that they assume a dot-product-based score function and assume a linearly decaying event influence. Omi et al. (2019) propose a fully-connected neural network to model the cumulative intensity function to go beyond parametric decaying influence. However, the event embeddings are still generated by RNN, and fitting the cumulative intensity function with neural networks lacks model interpretability. Note that all the above models tackle temporal events with categorical marks, and are therefore inapplicable in continuous time and location space. Recent works adopt neural networks in learning the influence kernel function. The kernel introduced in Okawa et al. (2021) uses neural networks to model the latent dynamics of time intervals but still assumes an exponentially decaying influence over time. Zhu et al. (2022) proposes a kernel representation using spectral decomposition and represents feature functions using deep neural networks to harvest powerful model expressiveness when dealing with marked event data. Our method considers an alternative novel kernel representation that allows the general kernel to be expressed with an even lower rank. 2 BACKGROUND Spatio-temporal point processes (STPPs) (Reinhart, 2018; Moller and Waagepetersen, 2003) have been widely used to model sequences of random events that happen in continuous time and space. Let H = {(t_i, s_i)}_{i=1}^n denote the event stream with time t_i ∈ [0, T] ⊂ R and location s_i ∈ S ⊂ R^{d_S} of the i-th event. The event number n is also random. Given the observed history H_t = {(t_i, s_i) ∈ H | t_i < t} before time t, an STPP is then fully characterized by the conditional intensity function λ(t, s | H_t) = lim_{∆t↓0, ∆s↓0} E[N([t, t+∆t] × B(s, ∆s)) | H_t] / (|B(s, ∆s)| ∆t), (1) where B(s, ∆s) is a ball centered at s ∈ R^{d_S} with radius ∆s, and the counting measure N is defined as the number of events occurring in [t, t+∆t] × B(s, ∆s) ⊂ R^{d_S+1}. Naturally λ(t, s | H_t) ≥ 0 for any arbitrary t and s. In the following, we omit the dependency on history H_t and use the common shorthand λ(t, s). The log-likelihood of observing H on [0, T] × S is given by (Daley et al., 2003) ℓ(H) = Σ_{i=1}^n log λ(t_i, s_i) − ∫_0^T ∫_S λ(t, s) ds dt. (2) Neural point processes parameterize the conditional intensity function by taking advantage of recurrent neural networks (RNNs). In Du et al. (2016), an input vector x_i which extracts the information of event t_i and the associated event attributes m_i (can be event mark or location) is fed into the RNN. A hidden state vector h_i is updated by h_i = ρ(h_{i−1}, x_i), where ρ(·) is a mapping fulfilled by recurrent neural network operations. The conditional intensity function on (t_i, t_{i+1}] is then defined as λ(t) = δ(t, h_i), where δ is an exponential transformation that guarantees a positive intensity. In Mei and Eisner (2017) the RNN is replaced by a continuous-time LSTM module with hidden states h(t) defined on [0, T] and a Softplus function δ. 
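For reference, the log-likelihood in equation 2 can be evaluated for any conditional intensity by direct Monte Carlo integration over [0, T] × S, as in the generic sketch below. The names and the uniform-sampling integrator are our own illustrative choices; as discussed later in Section 4.2, the proposed model is designed to avoid this kind of numerical integration.

```python
import numpy as np

def stpp_log_likelihood(intensity, events, T, space_bounds, num_mc=20000, seed=0):
    """Monte-Carlo evaluation of equation 2 for a generic conditional intensity.

    intensity(t, s, history): conditional intensity at time t and location s, given
        the array of events strictly before t (each row is (t_i, s_i)).
    events: array of shape (n, 1 + d_S); the first column is time.
    space_bounds: array of shape (d_S, 2) with [low, high] per spatial coordinate.
    """
    rng = np.random.default_rng(seed)
    events = np.asarray(events, dtype=float)
    times = events[:, 0]

    # First term of equation 2: sum of log-intensities at the observed events.
    log_sum = sum(
        np.log(intensity(t, s, events[times < t]))
        for t, s in zip(times, events[:, 1:])
    )

    # Second term: integral of the intensity over [0, T] x S by uniform sampling.
    lows, highs = space_bounds[:, 0], space_bounds[:, 1]
    volume = T * float(np.prod(highs - lows))
    t_mc = rng.uniform(0.0, T, size=num_mc)
    s_mc = rng.uniform(lows, highs, size=(num_mc, len(lows)))
    integral = volume * np.mean(
        [intensity(t, s, events[times < t]) for t, s in zip(t_mc, s_mc)]
    )
    return log_sum - integral
```

The cost of the sampled integral term is exactly the O(Kn) burden that motivates the efficient computation strategy of Section 4.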
Attention-based models are introduced in Zuo et al. (2020); Zhang et al. (2020) to overcome the inability of RNNs to capture sophisticated event dependencies due to their recurrent structures. The Hawkes process (Hawkes, 1971) is a well-known generalized point process model. Assuming the influences from past events are linearly additive, the conditional intensity function takes the form of λ(t, s) = µ + Σ_{(t′,s′)∈H_t} k(t′, t, s′, s), (3) where k is an influence kernel function that captures event interactions. Commonly the kernel function is assumed to be stationary, that is, k only depends on t − t′ and s − s′, which limits the model expressivity. In this work, we aim to capture complicated non-stationarity in spatio-temporal event dependencies by leveraging the strong approximation power of neural networks in kernel fitting. 3 LOW-RANK DEEP NON-STATIONARY KERNEL Due to the intricate dependencies between events, it is challenging to choose a form of the kernel function that achieves great model expressiveness while enjoying high model efficiency. In this section, we introduce a unified model with a low-rank deep non-stationary kernel to capture the complex heterogeneity in events’ influence over spatio-temporal space. 3.1 KERNEL WITH HISTORY AND SPATIO-TEMPORAL DISPLACEMENT For the influence kernel function k(t′, t, s′, s), by using the displacements in t and s as variables, we first re-parameterize the kernel as k(t′, t − t′, s′, s − s′), where the minus in s − s′ refers to the element-wise difference between s and s′ when d_S > 1. Then we achieve a finite-rank decomposed representation based on (truncated) singular value decomposition (SVD) for kernel functions (Mollenhauer et al., 2020) (which can be understood as the kernel version of matrix SVD, where the eigendecomposition is based on Mercer’s Theorem (Mercer, 1909)), using the fact that the decomposed spatial (and temporal) kernel functions can be approximated under shared basis functions (cf. Assumption A.2). The resulting approximate finite-rank representation is written as (details are in Appendix A.1) k(t′, t − t′, s′, s − s′) = Σ_{r=1}^R Σ_{l=1}^L α_{lr} ψ_l(t′) φ_l(t − t′) u_r(s′) v_r(s − s′). (4) Here {ψ_l, φ_l : [0, T] → R, l = 1, ..., L} are two sets of temporal basis functions that characterize the temporal influence of an event at t′ and the decaying effect brought by the elapsed time t − t′. Similarly, spatial basis functions {u_r, v_r : S → R, r = 1, ..., R} capture the spatial influence of an event at s′ and the decayed influence after spreading over the displacement s − s′. The corresponding weight α_{lr} at different spatio-temporal ranks combines each set of basis functions into a weighted summation, leading to the final expression of the influence kernel k. To further enhance the model expressiveness, we use a fully-connected neural network to represent each basis function. The history or displacement is taken as the input and fed through multiple hidden layers equipped with Softplus non-linear activation functions. To allow for inhibiting influence from past events (negative values of the influence kernel k), we use a linear output layer for each neural network. For an influence kernel with temporal rank L and spatial rank R, we need 2(L + R) independent neural networks for modeling. The benefits of our proposed kernel framework lie in the following: (i) The kernel parameterization with displacement significantly reduces the rank needed when representing the complicated kernels encountered in practice, as shown in Figure 1. 
(ii) The non-stationarity of the original influence of historical events over spatio-temporal space can be conveniently captured by the in-homogeneous {ψ_l}_{l=1}^L, {u_r}_{r=1}^R, making the model applicable to modeling general STPPs. (iii) The propagating patterns of influence are characterized by {φ_l}_{l=1}^L, {v_r}_{r=1}^R, which go beyond simple parametric forms. In particular, when the events’ influence has finite range, i.e., there exist τ_max and a_max such that the influence decays to zero if |t − t′| > τ_max or ||s − s′|| > a_max, we can restrict the parameterization of {φ_l}_{l=1}^L and {v_r}_{r=1}^R to a local domain [0, τ_max] × B(0, a_max) instead of [0, T] × S, which further reduces the model complexity. Details of choosing the kernel and neural network architectures are described in Appendix C. Remark 1 (the class of influence kernels expressed). The proposed deep kernel representation covers a large class of non-stationary kernels generally used in STPPs. In particular, the proposed form of the kernel does not need to be positive semi-definite or even symmetric (Reinhart, 2018). The low-rank decomposed formulation in equation 4 is of SVD-type (cf. Appendix A.1). While each φ_l (and v_r) can be viewed as stationary (i.e., shift-invariant), the combination with left modes in the summation enables modeling spatio-temporal non-stationarity. The technical assumptions A.1 and A.2 do not require more than the existence of a low-rank decomposition motivated by kernel SVD. As long as the 2(R + L) many functions ψ_l, φ_l, and u_r, v_r are sufficiently regular, they can be approximated and learned by a neural network. The universal approximation power of neural networks enables our framework to express a broad range of general kernel functions, and the low-rank decomposed form reduces the modeling of a spatio-temporal kernel to finitely many functions on the time and space domains (the right modes are on truncated domains), respectively. 4 EFFICIENT COMPUTATION OF MODEL We consider model optimization through maximum likelihood estimation (MLE) (Reinhart, 2018). The resulting conditional intensity function could now be negative since we allow inhibiting historical influence. A common approach to guarantee the non-negativity is to adopt a nonlinear positive activation function in the conditional intensity (Du et al., 2016; Zhu et al., 2022). However, the integral of such a nonlinear intensity over spatio-temporal space is computationally expensive. To tackle this, we first introduce a log-barrier to the MLE optimization problem to guarantee the non-negativity of the conditional intensity function λ and maintain its linearity. Then we provide a computationally efficient strategy that benefits from the linearity of the conditional intensity. The extension of the approach to point process data with marks is given in Appendix B. 4.1 MODEL OPTIMIZATION WITH LOG-BARRIER We re-denote ℓ(H) in equation 2 by ℓ(θ) in terms of the model parameter θ. The constrained MLE optimization problem for model parameter estimation can be formulated as: min_θ −ℓ(θ), s.t. −λ(t, s) ≤ 0, ∀t ∈ [0, T], ∀s ∈ S. We introduce a log-barrier method (Boyd et al., 2004) to ensure the non-negativity of λ, and penalize the values of λ on a dense enough grid U_bar,t × U_bar,s ⊂ [0, T] × S. The log-barrier is defined as p(θ, b) := −(1 / |U_bar,t × U_bar,s|) Σ_{c_t=1}^{|U_bar,t|} Σ_{c_s=1}^{|U_bar,s|} log(λ(t_{c_t}, s_{c_s}) − b), (5) where c_t, c_s indicate the indices of the grid, and b is a lower bound of the conditional intensity function on the grid that guarantees the feasibility of the logarithm operation. 
4 EFFICIENT COMPUTATION OF MODEL

We consider model optimization through maximum likelihood estimation (MLE) (Reinhart, 2018). Because we allow inhibiting historical influence, the resulting conditional intensity function could now be negative. A common approach to guarantee non-negativity is to adopt a nonlinear positive activation function in the conditional intensity (Du et al., 2016; Zhu et al., 2022). However, the integral of such a nonlinear intensity over the spatio-temporal space is computationally expensive. To tackle this, we first introduce a log-barrier into the MLE optimization problem to guarantee the non-negativity of the conditional intensity function $\lambda$ while maintaining its linearity. We then provide a computationally efficient strategy that benefits from the linearity of the conditional intensity. The extension of the approach to point process data with marks is given in Appendix B.

4.1 MODEL OPTIMIZATION WITH LOG-BARRIER

We re-denote $\ell(\mathcal{H})$ in equation 2 by $\ell(\theta)$ in terms of the model parameter $\theta$. The constrained MLE optimization problem for model parameter estimation can be formulated as
$$\min_{\theta} -\ell(\theta), \quad \text{s.t.}\ -\lambda(t, s) \le 0, \ \forall t \in [0, T],\ \forall s \in S.$$
We introduce a log-barrier method (Boyd et al., 2004) to ensure the non-negativity of $\lambda$, and penalize the values of $\lambda$ on a dense enough grid $U_{bar,t} \times U_{bar,s} \subset [0, T] \times S$. The log-barrier is defined as
$$p(\theta, b) := -\frac{1}{|U_{bar,t} \times U_{bar,s}|} \sum_{c_t=1}^{|U_{bar,t}|} \sum_{c_s=1}^{|U_{bar,s}|} \log\big(\lambda(t_{c_t}, s_{c_s}) - b\big), \qquad (5)$$
where $c_t, c_s$ indicate the indices of the grid points, and $b$ is a lower bound of the conditional intensity function on the grid that guarantees the feasibility of the logarithm.

The MLE optimization problem can then be written as
$$\min_{\theta} L(\theta) := -\ell(\theta) + \frac{1}{w}\, p(\theta, b) = -\left( \sum_{i=1}^{n} \log \lambda(t_i, s_i) - \int_0^T \int_S \lambda(t, s)\, ds\, dt \right) - \frac{1}{w\,|U_{bar,t} \times U_{bar,s}|} \sum_{c_t=1}^{|U_{bar,t}|} \sum_{c_s=1}^{|U_{bar,s}|} \log\big(\lambda(t_{c_t}, s_{c_s}) - b\big), \qquad (6)$$
where $w$ is a weight that controls the trade-off between the log-likelihood and the log-barrier; $w$ and $b$ can be set accordingly during the learning procedure. Details can be found in Appendix A.2. Note that previous works (Du et al., 2016; Mei and Eisner, 2017; Pan et al., 2021; Zuo et al., 2020; Zhu et al., 2022) use a scaled positive transformation to guarantee a non-negative conditional intensity function. Compared with them, the log-barrier method preserves the linearity of the conditional intensity function. As shown in Table 1, the log-barrier method enables efficient model computation (see more details in Section 4.2) and enhances the model recovery power.

4.2 MODEL COMPUTATION

The log-likelihood computation of general STPPs (especially those with a general influence function) is often difficult, requires numerical integration, and is thus time-consuming. Given a sequence of events $\{x_i = (t_i, s_i)\}_{i=1}^n$ of length $n$, the complexity of neural network evaluation is $O(n^2)$ for the log-summation term and $O(Kn)$ ($K \gg n$) when using numerical integration for the double integral term with $K$ sampled points in a multi-dimensional space. In the following, we circumvent this difficulty by proposing an efficient computation of $L(\theta)$ with complexity $O(n)$ in the number of neural network evaluations, through a domain discretization strategy.

Computation of log-summation. The first log-summation term in equation 2 can be written as
$$\sum_{i=1}^{n} \log \lambda(t_i, s_i) = \sum_{i=1}^{n} \log \left( \mu + \sum_{t_j < t_i} \sum_{r=1}^{R} \sum_{l=1}^{L} \alpha_{lr}\, \psi_l(t_j)\, \varphi_l(t_i - t_j)\, u_r(s_j)\, v_r(s_i - s_j) \right). \qquad (7)$$
Note that each $\psi_l$ only needs to be evaluated at the event times $\{t_i\}_{i=1}^n$ and each $u_r$ is evaluated at the event locations $\{s_i\}_{i=1}^n$. To avoid redundant evaluations of $\varphi_l$ over every pair $(t_i, t_j)$, we set up a uniform grid $U_t$ over the time horizon $[0, \tau_{\max}]$ and evaluate $\varphi_l$ on the grid. The value of $\varphi_l(t_i - t_j)$ can be obtained by linear interpolation between the values at the two grid points adjacent to $t_i - t_j$. By doing so, we only need to evaluate $\varphi_l$ at the $|U_t|$ grid points. Note that $\varphi_l$ can simply be set to 0 when $t_i - t_j > \tau_{\max}$ without any neural network evaluation. We directly evaluate $v_r(s_i - s_j)$, since numerical interpolation is less accurate in the location space. Note that one does not need to evaluate every pair of indices $(i, j)$. Instead, we have
$$I := \{(i, j) \mid v_r(s_i - s_j) \text{ will be computed}\} = \{(i, j) \mid t_j < t_i \le t_j + \tau_{\max}\} \cap \{(i, j) \mid \|s_i - s_j\| \le a_{\max}\},$$
and we set the corresponding terms to 0 for all other pairs $(i, j)$.

Computation of integral. A benefit of our approach is that we avoid numerical integration of the conditional intensity function (needed to evaluate the likelihood function), since the design of the kernel allows us to decompose the desired integral into integrals of the basis functions. Specifically, we have
$$\int_0^T \int_S \lambda(t, s)\, ds\, dt = \mu |S| T + \sum_{i=1}^{n} \int_0^T \int_S \mathbb{I}(t_i < t)\, k(t_i, t, s_i, s)\, ds\, dt = \mu |S| T + \sum_{i=1}^{n} \sum_{r=1}^{R} u_r(s_i) \int_S v_r(s - s_i)\, ds \sum_{l=1}^{L} \alpha_{lr}\, \psi_l(t_i) \int_0^{T - t_i} \varphi_l(t)\, dt. \qquad (8)$$
To compute the integral of $\varphi_l$, we take advantage of the pre-computed values of $\varphi_l$ on the grid $U_t$. Let $F_l(t) := \int_0^t \varphi_l(\tau)\, d\tau$. Then $F_l(T - t_i)$ can be computed by linear interpolation of the values of $F_l$ at the two grid points (in $U_t$) adjacent to $T - t_i$. In particular, $F_l$ evaluated on $U_t$ equals the cumulative sum of the $\varphi_l$ values multiplied by the grid width.
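The grid trick above is easy to realize with a handful of array operations. The following hedged sketch (NumPy, illustrative names) evaluates one temporal basis function on the grid, interpolates it at arbitrary displacements, and forms the cumulative integral $F_l$; `phi_net` stands for any scalar callable, e.g. a trained basis network.

```python
import numpy as np

def phi_on_grid(phi_net, tau_max, n_grid):
    """Evaluate a temporal basis function phi_l on a uniform grid over [0, tau_max]."""
    grid = np.linspace(0.0, tau_max, n_grid)
    vals = np.array([float(phi_net(t)) for t in grid])   # only n_grid network evaluations
    return grid, vals

def phi_interp(dt, grid, vals):
    """phi_l(t_i - t_j) via linear interpolation; zero beyond tau_max (no extra evaluations)."""
    dt = np.atleast_1d(np.asarray(dt, dtype=float))
    out = np.interp(dt, grid, vals)
    out[dt > grid[-1]] = 0.0
    return out

def cumulative_integral(grid, vals):
    """F_l on the grid: cumulative sum of the phi_l values multiplied by the grid width."""
    width = grid[1] - grid[0]
    return np.cumsum(vals) * width

# F_l(T - t_i) is then obtained by interpolating the cumulative values; beyond tau_max the
# interpolation returns F_l(tau_max), which is consistent with phi_l vanishing there:
# grid, vals = phi_on_grid(phi_net, tau_max, n_grid)
# F_grid = cumulative_integral(grid, vals)
# F_value = np.interp(T - t_i, grid, F_grid)
```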
The integral of $v_r$ can be estimated based on a grid $U_s$ in $B(0, a_{\max}) \subset \mathbb{R}^{d_S}$, since $v_r$ decays to zero outside the ball. For each $s_i$, $\int_S v_r(s - s_i)\, ds = \int_{B(0, a_{\max}) \cap \{S - s_i\}} v_r(s)\, ds$, where $S - s_i := \{s' \mid s' = s - s_i,\ s \in S\}$. Thus the integral is well estimated from the evaluations of $v_r$ on the grid set $U_s \cap \{S - s_i\}$. Note that in practice we only evaluate $v_r$ on $U_s$ once and use subsets of these evaluations for different $s_i$. More details about grid-based computation can be found in Appendix A.3.

Computation of log-barrier. The barrier term $p(\theta, b)$ is calculated in a similar way as equation 7 by replacing $(t_i, s_i, \mu)$ with $(t_{c_t}, s_{c_s}, \mu - b)$, i.e., we use interpolation to calculate $\varphi_l(t_{c_t} - t_j)$ and evaluate $v_r$ on a subset of $\{(s_{c_s}, s_j)\}$, $c_s = 1, \dots, |U_{bar,s}|$, $j = 1, \dots, n$.

4.3 COMPUTATIONAL COMPLEXITY

The evaluation of $\{u_r\}_{r=1}^{R}$ and $\{\psi_l\}_{l=1}^{L}$ over $n$ events costs $O((R + L)n)$. The evaluation of $\{\varphi_l\}_{l=1}^{L}$ is of $O(L|U_t|)$ complexity since it relies on the grid $U_t$. The evaluation of $\{v_r\}_{r=1}^{R}$ costs no more than $O(R C \tau_{\max} n) + O(R|U_s|)$. We note that $L, R, \tau_{\max}, |U_t|, |U_s|$ are all constants much smaller than the event number $n$; thus the overall computational complexity is $O(n)$. We compare the model training time per epoch for a baseline equipped with a Softplus activation function (NSMPP) and our model with the log-barrier method (DNSK+Barrier) on a 1D synthetic data set and a 3D synthetic data set. The quantitative results in Table 1 demonstrate the efficiency improvement of our model achieved by the log-barrier technique. More details about the computational complexity analysis can be found in Appendix A.4.

5 EXPERIMENT

We use large-scale synthetic and real data sets to demonstrate the superior performance of our model and present the results in this section. Experimental details and additional results can be found in Appendix C. Codes will be released upon publication.

Baselines. We compare our method (DNSK+Barrier) with: (i) Recurrent marked temporal point processes (RMTPP) (Du et al., 2016); (ii) Neural Hawkes (NH) (Mei and Eisner, 2017); (iii) Transformer Hawkes process (THP) (Zuo et al., 2020); (iv) Parametric Hawkes process (PHP+exp) with an exponentially decaying spatio-temporal kernel; (v) Neural spectral marked point processes (NSMPP) (Zhu et al., 2022); (vi) DNSK without the log-barrier but with a non-negative Softplus activation function (DNSK+Softplus). We note that RMTPP, NH, and THP directly model the conditional intensity function using neural networks, while the others learn the influence kernel in the framework of equation 3. In particular, NSMPP designs the kernel based on singular value decomposition but parameterizes it without displacement. The model parameters are estimated from the training data via the Adam optimization method (Kingma and Ba, 2014). Details of training can be found in Appendix A.2 and C.

5.1 SYNTHETIC DATA EXPERIMENTS

Synthetic data sets. To show the effectiveness of DNSK+Barrier, we evaluate all the models on three temporal data sets and three spatio-temporal data sets generated by the following true kernels: (i) 1D exponential kernel; (ii) 1D non-stationary kernel; (iii) 1D infinite-rank kernel; (iv) 2D exponential kernel; (v) 3D non-stationary inhibition kernel; (vi) 3D non-stationary mixture kernel. Data sets are generated using the thinning algorithm in Daley and Vere-Jones (2008) (Algorithm 2 in the appendix; a sketch is given below). Each data set is composed of 2000 sequences. Details of the kernel formulas and data generation can be found in Appendix C.
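For completeness, a minimal sketch of thinning-based simulation in the spirit of Algorithm 2 follows. It assumes the location space has been normalized to the unit square (so $|S| = 1$, as done before training in Appendix C) and that `lam_bar` upper-bounds the conditional intensity; the function names are illustrative.

```python
import numpy as np

def simulate_stpp(lam, lam_bar, T, rng=None):
    """Thinning simulation of an STPP on [0, T] x S, with S = [0, 1]^2 assumed.

    lam(t, s, history) : conditional intensity given the events generated so far.
    lam_bar            : upper bound of the conditional intensity.
    """
    rng = np.random.default_rng(rng)
    history, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_bar)          # next candidate time
        if t >= T:
            break
        s = rng.uniform(0.0, 1.0, size=2)            # candidate location, uniform over S
        if rng.uniform() * lam_bar <= lam(t, s, history):
            history.append((t, s))                   # accept with probability lam / lam_bar
    return history
```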
We consider two performance metrics for evaluation on testing data: the mean relative error (MRE) of the predicted intensity and the log-likelihood. The true and predicted intensities $\lambda^*(x), \hat{\lambda}(x)$ can be calculated using the kernel representation in equation 4 with the true and the learned kernel, respectively. The MRE for one test trajectory is defined as $\int_{\mathcal{X}} |\lambda^*(x) - \hat{\lambda}(x)| / \lambda^*(x)\, dx$, and the MRE averaged over all test trajectories is reported. The log-likelihood of observing each testing sequence can be computed according to equation 2, and the average predictive log-likelihood per event is reported. The log-likelihood measures the model's goodness-of-fit, and the intensity evaluation further reflects the model's ability to recover the underlying mechanism of event occurrence and to predict the future.

The heat maps in Figure 2 visualize the results of non-stationary kernel recovery for DNSK+Barrier and NSMPP on 1D Data sets 2 and 3 (the true kernel used in 1D Data set 3 is the one in Figure 1). DNSK+Barrier recovers the true kernel more accurately than NSMPP, indicating the strong representation power of the low-rank kernel parameterization with displacements. The line charts in Figure 2 present the recovered intensities together with the true ones (dark grey curves). They demonstrate that our method can accurately capture the temporal dynamics of events. In particular, the average conditional intensity $\lambda$ over multiple testing sequences shows the model's ability to recover data non-stationarity over time. While DNSK+Barrier successfully captures the non-stationarity in the data, both RMTPP and NH fail to do so, showing a flat curve of the averaged intensity. Note that THP with positional encoding recovers the data non-stationarity (as shown in the two figures in the last column). However, our method still outperforms THP, which suffers from limited model expressiveness when complicated propagation of event influence is involved (see the two figures in the penultimate column).

Table 2 summarizes the quantitative results of testing log-likelihood and MRE. It shows that DNSK+Barrier has superior predictive performance against the baselines in characterizing the dynamics of data generation over spatio-temporal space. Specifically, despite evident over-parameterization for 1D Data set 1, which is generated by a stationary exponentially decaying kernel, our model can still approximate the kernel and recover the true conditional intensity without overfitting, which shows the adaptiveness of our model. Moreover, DNSK+Barrier enjoys outstanding performance gains when learning a diverse variety of complicated non-stationary kernels. The comparison between DNSK+Softplus and DNSK+Barrier shows that the model with the log-barrier achieves better recovery performance by maintaining the linearity of the conditional intensity. THP outperforms RMTPP in the non-stationary cases but is still limited due to its pre-assumed parametric form of influence propagation. More results on kernel and intensity recovery can be found in Appendix C.

5.2 REAL DATA RESULTS

Real data sets. We provide a comprehensive evaluation of our approach on several real-world data sets. We first use two popular data sets containing time-stamped events with categorical marks to demonstrate the robustness of DNSK+Barrier for marked STPPs (refer to Appendix B for the detailed definition and kernel modeling): (i) Financial Transactions (Du et al., 2016). This data set contains transaction records of a stock in one day, with time measured in milliseconds and the action (mark) of each transaction. We partition the events into different sequences by time stamps.
(ii) StackOverflow (Leskovec and Krevl, 2014): The data is collected from the website StackOverflow over two years and contains reward records for users who promote engagement in the community. Each user's reward history is treated as a sequence. Next, we demonstrate the practical versatility of the model using the following spatio-temporal data sets: (i) Southern California earthquake data, provided by the Southern California Earthquake Data Center (SCEDC), contains time and location information of earthquakes in Southern California. We collect 19,414 records from 1999 to 2019 with magnitude larger than 2.5 and partition the data into multiple sequences by month, with an average length of 40.2. (ii) Atlanta robbery & burglary data. The Atlanta Police Department (APD) provides a proprietary data source for city crime. We extract 3420 reported robberies and 14958 burglaries with time and location from 2013 to 2019. The two crime types are preprocessed as separate data sets on a 10-day basis, with average lengths of 13.7 and 58.7. Finally, the model's ability to tackle high-dimensional marks is evaluated with Atlanta textual crime data. This proprietary data set, also provided by APD, records 4644 crime incidents from 2016 to 2017 with time, location, and comprehensive text descriptions. The text information is preprocessed by the TF-IDF technique, leading to a 5012-dimensional mark for each event.

Table 3 summarizes the results of the models dealing with categorical marks. Event time and type prediction are evaluated by root mean square error (RMSE) and accuracy, respectively. We can see that DNSK+Barrier outperforms the baselines in all prediction tasks by providing lower time RMSE and higher type accuracy. For the real-world spatio-temporal data, we report the average predictive log-likelihood per event on the testing set, since MRE is not applicable. Besides, we perform online prediction for the earthquake data to demonstrate the model's predictive ability. The probability density function $f(t, s)$, which represents the conditional probability that the next event will occur at $(t, s)$ given the history $\mathcal{H}_t$, can be written as
$$f(t, s) = \lambda(t, s) \exp\left( -\int_{t_n}^{t} \int_S \lambda(\tau, \nu)\, d\nu\, d\tau \right).$$
The predicted time and location of the next event can be computed as
$$\mathbb{E}[t_{n+1} \mid \mathcal{H}_t] = \int_{t_n}^{\infty} t \int_S f(t, s)\, ds\, dt, \qquad \mathbb{E}[s_{n+1} \mid \mathcal{H}_t] = \int_S s \int_{t_n}^{\infty} f(t, s)\, dt\, ds.$$
We predict the time and location of the last event in each sequence, and the mean absolute error (MAE) of the predictions is computed (a sketch of this numerical prediction is given below).
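In practice these expectations are evaluated numerically. The hedged snippet below illustrates the purely temporal simplification, assuming `lam_ground(t)` returns the conditional intensity with the spatial coordinate already integrated out and that the integration horizon is long enough for the survival probability to be negligible; the helper name is ours, not the paper's.

```python
import numpy as np

def predict_next_time(lam_ground, t_n, horizon, n_grid=2000):
    """E[t_{n+1} | H_t] by numerical integration, in a purely temporal simplification.

    lam_ground(t) : int_S lambda(t, s) ds, given the observed history up to t_n.
    """
    t = np.linspace(t_n, t_n + horizon, n_grid)
    lam = np.array([lam_ground(x) for x in t])
    dt = t[1] - t[0]
    # cumulative integral int_{t_n}^{t} lam(tau) dtau via the trapezoid rule
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * dt)])
    f = lam * np.exp(-cum)                        # density of the next event time
    return np.trapz(t * f, t)                     # expected next event time
```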
The quantitative results in Table 4 show that DNSK+Barrier provides more accurate predictions than the alternatives, with higher event log-likelihood. To demonstrate our model's interpretability and power to capture heterogeneous data characteristics, we visualize the learned influence kernels and the predicted conditional intensity for the two crime categories in Figure 3. The first column shows kernel evaluations at a fixed geolocation in downtown Atlanta, which intuitively reflect the spatial influence of crimes in that neighborhood. The influence of a robbery in the downtown area is more intensive but regional, while a burglary, which is harder for police to respond to in time, impacts a larger neighborhood along the major highways of Atlanta. We also provide the predicted conditional intensity over space for the two crime types. As we can observe, DNSK+Barrier captures the occurrence of events in regions with a higher crime rate, and crimes of the same category happening in different regions influence their neighborhoods differently. We note that this example emphasizes the ability of the proposed method to recover data non-stationarity with different sequence lengths, and to improve upon the limited model interpretability of other neural network-based methods (RMTPP, NH, and THP) in practice.

For the Atlanta textual crime data, we borrow the idea of Zhu and Xie (2022) and encode the highly sparse TF-IDF representation into a binary mark vector of dimension d = 50 using a Restricted Boltzmann Machine (RBM) (Fischer and Igel, 2012). The average testing log-likelihoods per event for each model are reported in Table 4. The results show that DNSK+Barrier outperforms PHP+exp in Zhu and Xie (2022) and NSMPP by achieving a higher testing log-likelihood. We visualize the basis functions of the influence kernel learned by DNSK+Barrier in Figure A.4 in the Appendix.

6 CONCLUSION

We propose a deep non-stationary kernel for spatio-temporal point processes using a low-rank parameterization based on displacement, which enables a further reduction of the rank needed when learning complicated influence kernels and significantly reduces model complexity. The non-negativity of the intensity is guaranteed by a log-barrier method that maintains the linearity of the conditional intensity function. Based on that, we propose a computationally efficient strategy for model estimation. The superior performance of our model is demonstrated using synthetic and real data sets.

ACKNOWLEDGEMENT

The work is partially supported by NSF DMS-2134037. Z.D. and Y.X. are partially supported by an NSF CAREER CCF-1650913, and NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF and the Alfred P. Sloan Foundation.

A ADDITIONAL METHODOLOGY DETAILS

A.1 DERIVATION OF EQUATION 4

We denote $\tau := t - t'$, $\nu := s - s'$, with variables $t' \in [0, T]$, $\tau \in [0, \tau_{\max}]$, $s' \in S$ and $\nu \in B(0, a_{\max})$, where the sets $S, B(0, a_{\max}) \subset \mathbb{R}^2$. Viewing the temporal and spatial variables, i.e., $(t', \tau)$ and $(s', \nu)$, as left and right mode variables, respectively, the kernel function SVD (Mollenhauer et al., 2020; Mercer, 1909) of $k$ gives
$$k(t', \tau, s', \nu) = \sum_{k=1}^{\infty} \sigma_k\, g_k(t', \tau)\, h_k(s', \nu). \qquad (A.1)$$
We assume that the SVD can be truncated at $k \le K$ with a residual of $\varepsilon$ for some small $\varepsilon > 0$, and this holds as long as the singular values $\sigma_k$ decay sufficiently fast. To fulfill the approximate finite-rank representation, it suffices to have scalars $\sigma_k$ and functions $g_k$ and $h_k$ such that the expansion approximates the kernel $k$, even if they are not the SVD of the kernel. This leads to the following assumption:

Assumption A.1. There exist coefficients $\sigma_k$ and functions $g_k(t', \tau)$, $h_k(s', \nu)$ such that
$$k(t', \tau, s', \nu) = \sum_{k=1}^{K} \sigma_k\, g_k(t', \tau)\, h_k(s', \nu) + O(\varepsilon). \qquad (A.2)$$

To proceed, one can apply kernel SVD again to $g_k$ and $h_k$ respectively, and obtain left and right singular functions that potentially differ for different $k$. Here, we impose that across $k = 1, \dots, K$, the singular functions of $g_k$ are the same set of basis functions (as shown below, being approximately the same suffices), that is,
$$g_k(t', \tau) = \sum_{l=1}^{\infty} \beta_{k,l}\, \psi_l(t')\, \varphi_l(\tau).$$
As we will truncate $l$ to a finite rank again (up to an $O(\varepsilon)$ residual), we require the (approximately) shared singular modes only up to $L$. Similarly as above, technically it suffices to have a finite-rank expansion to achieve the $O(\varepsilon)$ error without requiring them to be the SVD, which leads to the following assumption, where we assume the same condition for $h_k$: Assumption A.2.
For the $g_k$ and $h_k$ in equation A.2, up to an $O(\varepsilon)$ error, (i) the $K$ temporal kernel functions $g_k(t', \tau)$ can be approximated under the same set of left and right basis functions, i.e., there exist coefficients $\beta_{kl}$ and functions $\psi_l(t')$, $\varphi_l(\tau)$ for $l = 1, \dots, L$, such that
$$g_k(t', \tau) = \sum_{l=1}^{L} \beta_{kl}\, \psi_l(t')\, \varphi_l(\tau) + O(\varepsilon), \qquad k = 1, \dots, K; \qquad (A.3)$$
(ii) the $K$ spatial kernel functions $h_k(s', \nu)$ can be approximated under the same set of left and right basis functions, i.e., there exist coefficients $\gamma_{kr}$ and functions $u_r(s')$, $v_r(\nu)$ for $r = 1, \dots, R$, such that
$$h_k(s', \nu) = \sum_{r=1}^{R} \gamma_{kr}\, u_r(s')\, v_r(\nu) + O(\varepsilon), \qquad k = 1, \dots, K. \qquad (A.4)$$

Inserting equation A.3 and equation A.4 into equation A.2 gives the rank-truncated representation of the kernel function. Since $K$, $L$, $R$ are fixed numbers, assuming boundedness of all the coefficients and functions, we have the representation with a final residual of $O(\varepsilon)$, namely,
$$k(t', \tau, s', \nu) = \sum_{l=1}^{L} \sum_{r=1}^{R} \sum_{k=1}^{K} \sigma_k \beta_{kl} \gamma_{kr}\, \psi_l(t')\, \varphi_l(\tau)\, u_r(s')\, v_r(\nu) + O(\varepsilon).$$
Defining $\sum_{k=1}^{K} \sigma_k \beta_{kl} \gamma_{kr}$ as $\alpha_{lr}$ leads to equation 4.

A.2 ALGORITHMS

Algorithm 1 Model parameter estimation
Input: Training set $X$, batch size $M$, epoch number $E$, learning rate $\gamma$, constant $a > 1$ to update $w$ in equation 6.
Initialization: model parameter $\theta_0$, first epoch $e = 0$, $w = w_0$.
while $e < E$ do
  for each batch of size $M$ do
    1. For a 1D temporal point process, compute $\ell(\theta)$ and $\{\lambda(t_{c_t})\}$, $c_t = 1, \dots, |U_{bar,t}|$. For a spatio-temporal point process, compute $\ell(\theta)$ and $\{\lambda(t_{c_t}, s_{c_s})\}$, $c_t = 1, \dots, |U_{bar,t}|$, $c_s = 1, \dots, |U_{bar,s}|$.
    2. Set $b = \min\{\lambda(t_{c_t})\} - \epsilon$ (or $\min\{\lambda(t_{c_t}, s_{c_s})\} - \epsilon$), where $\epsilon$ is a small value to guarantee the feasibility of the logarithm.
    3. Compute $L(\theta) = -\ell(\theta) + \frac{1}{w} p(\theta, b)$.
    4. Update $\theta_{e+1} \leftarrow \theta_e - \gamma\, \partial L / \partial \theta_e$.
    5. $e \leftarrow e + 1$, $w \leftarrow w \cdot a$.
  end for
end while

Algorithm 2 Synthetic data generation
Input: Model $\lambda(\cdot)$, $T$, $S$, upper bound of conditional intensity $\bar{\lambda}$.
Initialization: $\mathcal{H}_T = \emptyset$, $t = 0$, $n = 0$.
while $t < T$ do
  1. Sample $u \sim \mathrm{Unif}(0, 1)$.
  2. $t \leftarrow t - \ln u / \bar{\lambda}$.
  3. Sample $s \sim \mathrm{Unif}(S)$, $D \sim \mathrm{Unif}(0, 1)$.
  4. $\lambda = \lambda(t, s \mid \mathcal{H}_T)$.
  if $D \bar{\lambda} \le \lambda$ then
    $n \leftarrow n + 1$; $t_n = t$, $s_n = s$; $\mathcal{H}_T \leftarrow \mathcal{H}_T \cup \{(t_n, s_n)\}$.
  end if
end while
if $t_n \ge T$ then return $\mathcal{H}_T \setminus \{(t_n, s_n)\}$ else return $\mathcal{H}_T$ end if
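With automatic differentiation, one iteration of Algorithm 1 is a few lines of code. The following hedged PyTorch-style sketch assumes the model exposes `log_likelihood(batch)` (ℓ(θ) computed as in Section 4.2) and `intensity_on_grid(batch)` (λ on the barrier grid); both helper names and the structure are illustrative, not the released implementation.

```python
import torch

def train_step(model, optimizer, batch, w, eps=1e-3):
    """One step of Algorithm 1: negative log-likelihood plus the scaled log-barrier."""
    loglik = model.log_likelihood(batch)            # l(theta) from equation 2 / Section 4.2
    lam_grid = model.intensity_on_grid(batch)       # lambda evaluated on the barrier grid
    b = lam_grid.detach().min() - eps               # lower bound b, kept strictly feasible
    barrier = -torch.log(lam_grid - b).mean()       # p(theta, b) in equation 5
    loss = -loglik + barrier / w                    # L(theta) in equation 6
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# after each pass, the barrier weight is relaxed as in Algorithm 1: w = w * a  (a > 1)
```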
A.3 GRID-BASED MODEL COMPUTATION

In this section, we elaborate on the details of the grid-based efficient model computation. In Figure A.1, we visualize the procedure of computing the integrals $\int_0^{T - t_i} \varphi_l(t)\, dt$ and $\int_S v_r(s - s_i)\, ds$ in equation 8. Panel (a) illustrates the calculation of $\int_0^{T - t_i} \varphi_l(t)\, dt$. As explained in Section 4.2, $\varphi_l$ is evaluated only on the grid $U_t$ over $[0, \tau_{\max}]$ (since $\varphi_l(t) = 0$ when $t > \tau_{\max}$). The value of $F_l(t) = \int_0^t \varphi_l(\tau)\, d\tau$ on the grid can be obtained through numerical integration. Then, given $t_i$, the value of $F_l(T - t_i) = \int_0^{T - t_i} \varphi_l(t)\, dt$ is calculated using linear interpolation of $F_l$ at the two grid points adjacent to $T - t_i$. Panel (b) shows the computation of $\int_S v_r(s - s_i)\, ds$. Given $s_i$, $\int_S v_r(s - s_i)\, ds = \int_{B(0, a_{\max}) \cap \{S - s_i\}} v_r(s)\, ds$, since $v_r(s) = 0$ when $\|s\| > a_{\max}$. Then $B(0, a_{\max})$ is discretized into the grid $U_s$, and $\int_S v_r(s - s_i)\, ds$ is calculated based on the values of $v_r$ at the grid points in $U_s \cap \{S - s_i\}$ (the deep red dots in Figure A.1(b)) using numerical integration.

To evaluate the sensitivity of our model to the chosen grids, we compare the performance of DNSK+Barrier on 3D Data set 2 using grids with different resolutions. The quantitative results for testing log-likelihood and intensity prediction error are reported in Table A.1. We use $|U_t| = 50$, $|U_s| = 1500$ for the experiments in the main paper. As we can see, the model shows similar performance when a higher grid resolution is used, and works slightly less accurately, but still better than the other baselines, with a smaller number of grid points. This reveals that our choice of grid resolution is accurate enough to capture the complex dynamics of event occurrences for this non-stationary data, and that the model performance is robust to different grid resolutions. In practice, the grids can be chosen flexibly to reach a balance of model accuracy and computational efficiency. For instance, the number of uniformly distributed grid points along one dimension can be chosen to be around $O(n_0)$, where $n_0$ is the average number of events in one observed sequence. Note that $|U_t|$ or $|U_s|$ is far less than the total number of observed events, because we use thousands of sequences (2000 in our synthetic experiments) for model learning. The grid size can be even smaller when it comes to non-Lebesgue-measured spaces.

A.4 DETAILS OF COMPUTATIONAL COMPLEXITY

We provide the detailed analysis of the $O(n)$ computational complexity of $L(\theta)$ in Section 4.3 as follows:
- Computation of log-summation. The evaluation of $\{u_r\}_{r=1}^{R}$ and $\{\psi_l\}_{l=1}^{L}$ over $n$ events costs $O((R + L)n)$. The evaluation of $\{\varphi_l\}_{l=1}^{L}$ is of $O(L|U_t|)$ complexity since it relies on the grid $U_t$. Under the assumption that the conditional intensity is bounded by a constant $C$ over a finite time horizon (Lewis and Shedler, 1979; Daley et al., 2003; Zhu et al., 2022), for each fixed $j$ the cardinality of the set $\{(i, j) \mid t_j < t_i \le t_j + \tau_{\max}\}$ is less than $C\tau_{\max}$, which leads to an $O(RC\tau_{\max} n)$ complexity for the evaluation of $\{v_r\}_{r=1}^{R}$.
- Computation of integral. The integration of $\{\varphi_l\}_{l=1}^{L}$ relies only on numerical operations on the values of $\{\varphi_l\}_{l=1}^{L}$ on the grid $U_t$, without extra neural network evaluations. The integration of $\{v_r\}_{r=1}^{R}$ depends on the evaluation on the grid $U_s$, of $O(R|U_s|)$ complexity.
- Computation of barrier. $\{\varphi_l\}_{l=1}^{L}$ on the grid $U_{bar,t}$ is estimated by numerical interpolation of the previously computed $\{\varphi_l\}_{l=1}^{L}$ on the grid $U_t$. The additional neural network evaluations of $\{v_r\}_{r=1}^{R}$ cost no more than $O(RC\tau_{\max} n)$.

B DEEP NON-STATIONARY KERNEL FOR MARKED STPPS

In marked STPPs (Reinhart, 2018), each observed event is associated with additional information describing the event attribute, denoted as $m \in M \subset \mathbb{R}^{d_M}$. Let $\mathcal{H} = \{(t_i, s_i, m_i)\}_{i=1}^{n}$ denote the event sequence. Given the observed history $\mathcal{H}_t = \{(t_i, s_i, m_i) \in \mathcal{H} \mid t_i < t\}$, the conditional intensity function of a marked STPP is similarly defined as
$$\lambda(t, s, m) = \lim_{\Delta t \downarrow 0,\ \Delta s \downarrow 0,\ \Delta m \downarrow 0} \frac{\mathbb{E}\left[N([t, t+\Delta t] \times B(s, \Delta s) \times B(m, \Delta m)) \mid \mathcal{H}_t\right]}{|B(s, \Delta s)|\, |B(m, \Delta m)|\, \Delta t},$$
where $B(m, \Delta m)$ is a ball centered at $m \in \mathbb{R}^{d_M}$ with radius $\Delta m$. The log-likelihood of observing $\mathcal{H}$ on $[0, T] \times S \times M$ is given by
$$\ell(\mathcal{H}) = \sum_{i=1}^{n} \log \lambda(t_i, s_i, m_i) - \int_0^T \int_S \int_M \lambda(t, s, m)\, dm\, ds\, dt.$$

B.1 KERNEL INCORPORATING MARKS

One of the salient features of our spatio-temporal kernel framework is that it can be conveniently adapted to modeling marked STPPs with additional sets of mark basis functions $\{g_q, h_q\}_{q=1}^{Q}$. We modify the influence kernel function $k$ accordingly as
$$k(t', t - t', s', s - s', m', m) = \sum_{q=1}^{Q} \sum_{r=1}^{R} \sum_{l=1}^{L} \alpha_{lrq}\, \psi_l(t')\, \varphi_l(t - t')\, u_r(s')\, v_r(s - s')\, g_q(m')\, h_q(m).$$
Here $m', m \in M \subset \mathbb{R}^{d_M}$, and $\{g_q, h_q : M \to \mathbb{R},\ q = 1, \dots, Q\}$, represented by independent neural networks, model the influence of the historical mark $m'$ and the current mark $m$, respectively. Since the mark space $M$ is typically categorical and the difference between $m'$ and $m$ has little practical meaning, we use $g_q$ and $h_q$ to model $m'$ and $m$ separately instead of modeling $m - m'$.
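Extending the kernel module sketched in Section 3.1 to marks only requires additional mark basis networks and a third index on the weight tensor. The hedged fragment below (same illustrative PyTorch style as before) indicates the change; all names are ours.

```python
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """Same small MLP used for the basis functions in the Section 3.1 sketch."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Softplus(),
                                 nn.Linear(hidden, hidden), nn.Softplus(),
                                 nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class MarkedDeepKernel(nn.Module):
    """Adds mark basis functions g_q(m'), h_q(m) and a weight tensor alpha_{lrq}."""
    def __init__(self, L, R, Q, spatial_dim=2, mark_dim=50, hidden=64):
        super().__init__()
        self.psi = nn.ModuleList([BasisNet(1, hidden) for _ in range(L)])
        self.phi = nn.ModuleList([BasisNet(1, hidden) for _ in range(L)])
        self.u = nn.ModuleList([BasisNet(spatial_dim, hidden) for _ in range(R)])
        self.v = nn.ModuleList([BasisNet(spatial_dim, hidden) for _ in range(R)])
        self.g = nn.ModuleList([BasisNet(mark_dim, hidden) for _ in range(Q)])
        self.h = nn.ModuleList([BasisNet(mark_dim, hidden) for _ in range(Q)])
        self.alpha = nn.Parameter(0.1 * torch.randn(L, R, Q))

    def forward(self, t_hist, dt, s_hist, ds, m_hist, m):
        temporal = torch.stack([p(t_hist) * q(dt) for p, q in zip(self.psi, self.phi)], dim=-1)  # (N, L)
        spatial = torch.stack([p(s_hist) * q(ds) for p, q in zip(self.u, self.v)], dim=-1)       # (N, R)
        mark = torch.stack([p(m_hist) * q(m) for p, q in zip(self.g, self.h)], dim=-1)           # (N, Q)
        return torch.einsum('nl,nr,nq,lrq->n', temporal, spatial, mark, self.alpha)
```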
B.2 LOG-BARRIER AND MODEL COMPUTATION

The conditional intensity of a marked spatio-temporal point process at $(t, s, m)$ can be written as
$$\lambda(t, s, m) = \mu + \sum_{l, r, q} \alpha_{lrq} \sum_{(t_i, s_i, m_i) \in \mathcal{H}_t} \psi_l(t_i)\, \varphi_l(t - t_i)\, u_r(s_i)\, v_r(s - s_i)\, g_q(m_i)\, h_q(m).$$
We need to guarantee the non-negativity of $\lambda$ over the space $[0, T] \times S \times M$. When the total number of unique categorical marks in $M$ is small, the log-barrier can be conveniently computed as the summation of $\lambda$ on the grid $U_{bar,t} \times U_{bar,s} \times M$. In the following we focus on the case where $M$ is high-dimensional with $O(n)$ unique marks. For model simplicity, we use non-negative $g_q$ and $h_q$ in this case (which can be done by adding a non-negative activation function to the linear output layer of the neural networks). We re-write $\lambda(t, s, m)$ as
$$\lambda(t, s, m) = \mu + \sum_{q} \left[ \sum_{l, r} \alpha_{lrq} \sum_{(t_i, s_i, m_i) \in \mathcal{H}_t} \psi_l(t_i)\, \varphi_l(t - t_i)\, u_r(s_i)\, v_r(s - s_i)\, g_q(m_i) \right] h_q(m).$$
Note that the function in the brackets depends only on $t, s$. We denote it by $\hat{F}_q(t, s)$ (it is the component at the $q$th mark rank). Since $h_q(m) \ge 0$, the non-negativity of $\lambda$ can be guaranteed by the non-negativity of $\hat{F}_q(t, s)$. Thus we apply the log-barrier method to $\hat{F}_q(t, s)$. The log-barrier term becomes
$$p(\theta, b) := -\frac{1}{Q\, |U_{bar,t} \times U_{bar,s}|} \sum_{c_t=1}^{|U_{bar,t}|} \sum_{c_s=1}^{|U_{bar,s}|} \sum_{q=1}^{Q} \log\big(\hat{F}_q(t_{c_t}, s_{c_s}) - b\big).$$
Since our model is low-rank, the value of $Q$ will not be large. For the model computation, the additional evaluations of $\{g_q\}_{q=1}^{Q}$ on the events are of $O(Qn)$ complexity, and the evaluations of $\{h_q\}_{q=1}^{Q}$ depend only on the number of unique marks, which is at most $O(n)$. The log-barrier method does not introduce extra evaluations in the mark space. Thus the overall computational complexity of DNSK for marked STPPs is still $O(n)$.

C ADDITIONAL EXPERIMENTAL RESULTS

In this section we provide details of the data sets and experimental setup, together with additional experimental results.

Synthetic data sets. To show the robustness of our model, we generate three temporal data sets and three spatio-temporal data sets using the following kernels:
(i) 1D Data set 1 with exponential kernel: $k(t', t) = 0.8\, e^{-(t - t')}$.
(ii) 1D Data set 2 with non-stationary kernel: $k(t', t) = 0.3\, (0.5 + 0.5 \cos(0.2 t'))\, e^{-2(t - t')}$.
(iii) 1D Data set 3 with infinite-rank kernel: $k(t', t) = 0.3 \sum_{j=1}^{\infty} 2^{-j} \left( 0.3 + \cos\big(2 + (t'/5)^{0.7} \cdot 1.3 (j + 1)\pi\big) \right) e^{-\frac{8(t - t')^2}{25} j^2}$.
(iv) 2D Data set 1 with exponential kernel: $k(t', t, s', s) = 0.5\, e^{-1.5(t - t')}\, e^{-0.8 s'}$.
(v) 3D Data set 1 with non-stationary inhibition kernel: $k(t', t, s', s) = 0.3 (1 - 0.01 t)\, e^{-2(t - t')}\, \frac{1}{2\pi \sigma_{s'}^2} e^{-\frac{\|s'\|^2}{2\sigma_{s'}^2}}\, \frac{\cos(10 \|s - s'\|)}{2\pi \sigma_s^2 \left(1 + e^{10(\|s - s'\| - 0.5)}\right)}\, e^{-\frac{\|s - s'\|^2}{2\sigma_s^2}}$, where $\sigma_{s'} = 0.5$, $\sigma_s = 0.15$.
(vi) 3D Data set 2 with non-stationary mixture kernel: $k(t', t, s', s) = \sum_{r=1}^{2} \sum_{l=1}^{2} \alpha_{rl}\, u_r(s')\, v_r(s - s')\, \psi_l(t')\, \varphi_l(t - t')$, where $u_1(s') = 1 - a_s(s'_2 + 1)$, $u_2(s') = 1 - b_s(s'_2 + 1)$, $v_1(s - s') = \frac{1}{2\pi\sigma_1^2} e^{-\frac{\|s - s'\|^2}{2\sigma_1^2}}$, $v_2(s - s') = \frac{1}{2\pi\sigma_2^2} e^{-\frac{\|s - s' - 0.8\|^2}{2\sigma_2^2}}$, $\psi_1(t') = 1 - a_t t'$, $\psi_2(t') = 1 - b_t t'$, $\varphi_1(t - t') = e^{-\beta(t - t')}$, $\varphi_2(t - t') = (t - t' - 1) \cdot \mathbb{I}(t - t' < 3)$, and $a_s = 0.3$, $b_s = 0.4$, $a_t = 0.02$, $b_t = 0.02$, $\sigma_1 = 0.2$, $\sigma_2 = 0.3$, $\beta = 2$, $(\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}) = (0.6, 0.15, 0.225, 0.525)$.

Note that kernel (iii) is the one illustrated in Figure 1, which is of infinite rank according to its formula. In Figure 1, the value matrices of $k(t', t)$ and $k(t', t - t')$ are the kernel evaluations on the same $300 \times 300$ uniform grid. As we can see, the rank of the value matrix of the same kernel $k$ is reduced from 298 to 7 after changing to the displacement-based kernel parameterization (a numerical sketch of this comparison is given below).
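The rank reduction brought by the displacement parameterization is easy to check numerically. The snippet below is a hedged illustration using a simple non-stationary kernel of our own choosing (not the exact kernel (iii) above): it compares the numerical rank of the matrix with entries k(t'_i, t_j) against the matrix with entries k(t'_i, t'_i + tau_j) on a uniform grid.

```python
import numpy as np

# an illustrative non-stationary, causal kernel (for demonstration only)
def k(t_prime, t):
    return 0.3 * (0.5 + 0.5 * np.cos(0.2 * t_prime)) * np.exp(-2.0 * (t - t_prime)) * (t >= t_prime)

n = 300
t_prime = np.linspace(0.0, 10.0, n)
t = np.linspace(0.0, 10.0, n)
tau = np.linspace(0.0, 10.0, n)

K_original = k(t_prime[:, None], t[None, :])                            # entries k(t'_i, t_j)
K_displacement = k(t_prime[:, None], t_prime[:, None] + tau[None, :])   # entries k(t'_i, t'_i + tau_j)

tol = 1e-6
print("numerical rank of k(t', t):      ", np.linalg.matrix_rank(K_original, tol=tol))
print("numerical rank of k(t', t - t'): ", np.linalg.matrix_rank(K_displacement, tol=tol))
```

For this separable example the displacement matrix has numerical rank 1, while the original parameterization has a much larger rank because of the causal cutoff, mirroring the effect shown in Figure 1.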
Details of experimental setup. For RMTPP and NH we test embedding sizes of {32, 64, 128} and choose 64 for the experiments. For THP we take the default experimental setting recommended by Zuo et al. (2020). For NSMPP we use the same model setting as in Zhu et al. (2022), with rank 5. Each experiment is implemented by the following procedure: given the data set, we split 90% of the sequences into a training set and 10% into a testing set. We use independent fully-connected neural networks with two hidden layers for each basis function; each layer contains 64 hidden nodes. The temporal rank of DNSK+Barrier is set to 1 for synthetic data (i), (ii), (iv), (v), 2 for (vi), and 3 for (iii). The spatial rank is 1 for synthetic data (iv), (v) and 2 for (vi). The temporal and spatial ranks for real data are both set to 2 through cross-validation. For each real data set, $\tau_{\max}$ is chosen to be around $T/4$ and $a_{\max}$ is 1, since the location space is normalized before training. The hyper-parameters of DNSK+Softplus are the same as those of DNSK+Barrier. For RMTPP, NH, and THP the batch size is 32 and the learning rate is $10^{-3}$. For the others, the batch size is 64 and the learning rate is $10^{-1}$. The quantitative results are collected by running each experiment 5 independent times. All experiments are implemented on Google Colaboratory (Pro version) with 25GB RAM and a Tesla T4 GPU.

C.1 SYNTHETIC RESULTS WITH 2D & 3D KERNEL

In this section we present additional experimental results for the synthetic data sets with the 2D exponential and 3D non-stationary mixture kernels. Our proposed model successfully recovers the kernel and the event conditional intensity in both cases. Note that the recovery of the 3D mixture kernel demonstrates the capability of our model to handle complex event dependency with mixture patterns by conveniently setting the temporal and spatial ranks to be larger than 1.

C.2 ATLANTA TEXTUAL CRIME DATA WITH HIGH-DIMENSIONAL MARKS

Figure A.4 visualizes the fitting and prediction results of DNSK+Barrier. Our model presents a decaying pattern in the temporal effect and captures two different patterns of spatial influence for incidents in the northeast. Besides, the in-sample and out-of-sample intensity predictions demonstrate the ability of DNSK to characterize the event occurrences by showing different conditional intensities.
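The mark preprocessing used for this data set (Section 5.2) encodes a sparse TF-IDF vector into a d = 50 binary mark with an RBM. A rough scikit-learn sketch, under the assumption that the raw crime descriptions are available as a list of strings, might look as follows; it is illustrative only and not the preprocessing script used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import BernoulliRBM

def encode_marks(descriptions, d=50, seed=0):
    """TF-IDF followed by an RBM encoding into d-dimensional binary marks."""
    tfidf = TfidfVectorizer()                        # sparse high-dimensional representation
    X = tfidf.fit_transform(descriptions).toarray()
    rbm = BernoulliRBM(n_components=d, random_state=seed)
    rbm.fit(X)
    H = rbm.transform(X)                             # P(h_q = 1 | x), values in [0, 1]
    return (H > 0.5).astype(np.float32)              # binarize to obtain the mark vectors
```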
1. What is the focus and contribution of the paper on temporal/spatio-temporal point processes? 2. What are the strengths of the proposed approach, particularly in terms of kernel functions and how positivity is ensured? 3. What are the weaknesses of the paper regarding additional empirical studies and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions raised by the reviewer regarding the background and applications of the proposed method?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a more general form of the kernel function typically used in temporal/spatio-temporal point processes, by considering an absolute time-dependent component in addition to the relative space-time inputs. In addition, the authors make another contribution by proposing a more efficient approach to ensure the positivity of the intensity, in the form of a log-barrier added to the optimization problem. Empirical results show both accuracy and efficiency gains over baselines.
Strengths And Weaknesses
Strengths:
- a new kernel method is proposed by considering absolute time
- complexity analysis is provided
- empirical evaluation performance is strong
Weaknesses:
- additional empirical study would provide more understanding of the method.
Other comments:
- A baseline RNN model with s and t as inputs to the RNN, which models a time-dependent change in the function and is similar to your base kernel, should be considered. It would highlight the necessity of using a kernel.
Background:
- does the spatial aspect of the STPP indicate that the change in location is always positive? what's an example of such an application?
- does the grid resolution have an impact on the learning of the kernel?
- would a large t value, corresponding to long-term dependency, pose any numerical issue?
- Eq 4: it would be helpful if there were some formal statement on the assumptions, a generalization result, or its expressivity relative to Eq 3.
- Table 3: how does DNSK+Softplus compare with the transformer results? Would using the barrier improve THP as well?
Clarity, Quality, Novelty And Reproducibility
quality: good; clarity: good; originality: good
ICLR
Title Spatio-temporal point processes with deep non-stationary kernels

Abstract Point process data are becoming ubiquitous in modern applications, such as social networks, health care, and finance. Despite the powerful expressiveness of the popular recurrent neural network (RNN) models for point process data, they may not successfully capture sophisticated non-stationary dependencies in the data due to their recurrent structures. Another popular type of deep model for point process data is based on representing the influence kernel (rather than the intensity function) by neural networks. We take the latter approach and develop a new deep non-stationary influence kernel that can model non-stationary spatio-temporal point processes. The main idea is to approximate the influence kernel with a novel and general low-rank decomposition, enabling efficient representation through deep neural networks as well as computational efficiency and better performance. We also take a new approach to maintain the non-negativity constraint of the conditional intensity by introducing a log-barrier penalty. We demonstrate our proposed method's good performance and computational efficiency compared with the state-of-the-art on simulated and real data.

1 INTRODUCTION

Point process data, consisting of sequential events with timestamps and associated information such as location or category, are ubiquitous in modern scientific fields and real-world applications. The distribution of events is of great scientific and practical interest, both for predicting new events and for understanding the events' generative dynamics (Reinhart, 2018). To model such discrete events in continuous time and space, spatio-temporal point processes (STPPs) are widely used in a diverse range of domains, including modeling earthquakes (Ogata, 1988; 1998), the spread of infectious diseases (Schoenberg et al., 2019; Dong et al., 2021), and wildfire propagation (Hering et al., 2009).

A modeling challenge is to accurately capture the underlying generative model of event occurrence in general spatio-temporal point processes (STPPs) while maintaining model efficiency. Specific parametric forms of the conditional intensity are proposed in the seminal works on the Hawkes process (Hawkes, 1971; Ogata, 1988) to tackle the issue of computational complexity in STPPs, which requires evaluating the complex multivariate integral in the likelihood function. They use an exponentially decaying influence kernel to measure the influence of a past event over time and assume the influence of all past events is positive and linearly additive. Despite computational simplicity (since the integral in the likelihood function is avoided), such a parametric form limits the model's practicality in modern applications.

Recent models use neural networks in modeling point processes to capture complicated event occurrences. RNNs (Du et al., 2016) and LSTMs (Mei and Eisner, 2017) have been used by taking advantage of their representation power and their capability in capturing temporal event dependencies. However, the recurrent structures of RNN-based models cannot capture long-range dependency (Bengio et al., 1994), and attention-based structures (Zhang et al., 2020; Zuo et al., 2020) have been introduced to address such limitations of RNNs. Despite much development, existing models still cannot sufficiently capture spatio-temporal non-stationarity, which is common in real-world data (Graham et al., 2013; Dong et al., 2021).
Moreover, while RNN-type models may produce strong prediction performance, they consist of general forms of network layers and their modeling power relies on the hidden states; thus they are often not easily interpretable. A promising approach to overcome the above model restrictions is point process models that combine statistical models with neural network representations, such as Zhu et al. (2022) and Chen et al. (2020), to enjoy both interpretability and the expressive power of neural networks. In particular, the idea is to represent the (possibly non-stationary) influence kernel based on a spectral decomposition and to represent the basis functions using neural networks. However, the prior work (Zhu et al., 2022) is not specifically designed for non-stationary kernels, and the low-rank representation can be made significantly more efficient, which is the main focus of this paper.

Contribution. In this paper, we develop a non-stationary kernel (referred to as DNSK) for (possibly non-stationary) spatio-temporal processes that enjoys an efficient low-rank representation, which leads to much improved computational efficiency and predictive performance. The construction is based on an interesting observation: by reparameterizing the influence kernel from the original form $k(t', t)$ (where $t'$ is the historical event time and $t$ is the current time) to an equivalent form $k(t', t - t')$ (which is thus parameterized by the displacement $t - t'$ instead), the rank can be reduced significantly, as shown in Figure 1. This observation inspired us to design a much more efficient representation of non-stationary point processes with far fewer basis functions for the same kernel. In summary, the contributions of our paper include:
- We introduce an efficient low-rank representation of the influence kernel based on a novel "displacement" re-parameterization. Our representation can well-approximate a large class of general non-stationary influence kernels and is generalizable to spatio-temporal kernels (and potentially to data with high-dimensional marks). The efficient representation leads to lower computational cost and better prediction power, as demonstrated in our experiments.
- In model fitting, we introduce a log-barrier penalty term in the objective function to ensure a non-negative conditional intensity function, so that the model is statistically meaningful and the problem is numerically stable. This approach also enables the model to learn general influence functions (that can take negative values), which is a drastic improvement over existing influence kernel-based methods that require the kernel functions to be non-negative.
- Using extensive synthetic and real data experiments, we show the competitive performance of our proposed method in both model recovery and event prediction compared with the state-of-the-art, such as RNN-based and transformer-based models.

1.1 RELATED WORKS

The original work of A. Hawkes (Hawkes, 1971) provides classic self-exciting point processes for temporal events, which express the conditional intensity function with an influence kernel and a base rate. Ogata (1998) proposes a parametric form of the spatio-temporal influence kernel which enjoys strong model interpretability and efficiency. However, such simple parametric forms have limited expressiveness in characterizing the complex event dynamics of modern applications (Zhu et al., 2021; Liao et al., 2022).
Neural networks have been widely adopted in point processes (Xiao et al., 2017; Chen et al., 2020; Zhu et al., 2021). Du et al. (2016) incorporates recurrent neural networks, and Mei and Eisner (2017) use a continuous-time variant of the LSTM to model event influence with exponential decay over time. These RNN-based models may be unable to capture complicated event dependencies due to their recurrent structure. Zhang et al. (2020); Zuo et al. (2020) introduce self-attentive structures into point processes for their capability to memorize long-term influence by treating an event sequence as a whole. The main limitation is that they assume a dot-product-based score function and a linearly decaying event influence. Omi et al. (2019) propose a fully-connected neural network to model the cumulative intensity function to go beyond parametric decaying influence. However, the event embeddings are still generated by an RNN, and fitting the cumulative intensity function with neural networks lacks model interpretability. Note that all the above models tackle temporal events with categorical marks and are therefore inapplicable in continuous time and location spaces. Recent works adopt neural networks for learning the influence kernel function. The kernel introduced in Okawa et al. (2021) uses neural networks to model the latent dynamics of the time interval but still assumes an exponentially decaying influence over time. Zhu et al. (2022) proposes a kernel representation using spectral decomposition and represents the feature functions using deep neural networks to harvest powerful model expressiveness when dealing with marked event data. Our method considers an alternative novel kernel representation that allows a general kernel to be expressed with an even lower rank.

2 BACKGROUND

Spatio-temporal point processes (STPPs) (Reinhart, 2018; Moller and Waagepetersen, 2003) have been widely used to model sequences of random events that happen in continuous time and space. Let $\mathcal{H} = \{(t_i, s_i)\}_{i=1}^{n}$ denote the event stream, with time $t_i \in [0, T] \subset \mathbb{R}$ and location $s_i \in S \subset \mathbb{R}^{d_S}$ of the $i$th event. The event number $n$ is also random. Given the observed history $\mathcal{H}_t = \{(t_i, s_i) \in \mathcal{H} \mid t_i < t\}$ before time $t$, an STPP is fully characterized by the conditional intensity function
$$\lambda(t, s \mid \mathcal{H}_t) = \lim_{\Delta t \downarrow 0,\ \Delta s \downarrow 0} \frac{\mathbb{E}\left[N([t, t + \Delta t] \times B(s, \Delta s)) \mid \mathcal{H}_t\right]}{|B(s, \Delta s)|\, \Delta t}, \qquad (1)$$
where $B(s, \Delta s)$ is a ball centered at $s \in \mathbb{R}^{d_S}$ with radius $\Delta s$, and the counting measure $N$ is defined as the number of events occurring in $[t, t + \Delta t] \times B(s, \Delta s) \subset \mathbb{R}^{d_S + 1}$. Naturally, $\lambda(t, s \mid \mathcal{H}_t) \ge 0$ for any $t$ and $s$. In the following, we omit the dependency on the history $\mathcal{H}_t$ and use the common shorthand $\lambda(t, s)$. The log-likelihood of observing $\mathcal{H}$ on $[0, T] \times S$ is given by (Daley et al., 2003)
$$\ell(\mathcal{H}) = \sum_{i=1}^{n} \log \lambda(t_i, s_i) - \int_0^T \int_S \lambda(t, s)\, ds\, dt. \qquad (2)$$
Neural point processes parameterize the conditional intensity function by taking advantage of recurrent neural networks (RNNs). In Du et al. (2016), an input vector $x_i$, which extracts the information of event $t_i$ and the associated event attributes $m_i$ (which can be an event mark or location), is fed into the RNN. A hidden state vector $h_i$ is updated by $h_i = \rho(h_{i-1}, x_i)$, where $\rho(\cdot)$ is a mapping fulfilled by recurrent neural network operations. The conditional intensity function on $(t_i, t_{i+1}]$ is then defined as $\lambda(t) = \delta(t, h_i)$, where $\delta$ is an exponential transformation that guarantees a positive intensity. In Mei and Eisner (2017), the RNN is replaced by a continuous-time LSTM module with hidden states $h(t)$ defined on $[0, T]$ and a Softplus function $\delta$.
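As a concrete illustration of equation 2, the following hedged snippet evaluates the log-likelihood of an observed sequence for a purely temporal process with a known intensity, approximating the integral term on a grid. The simple exponential Hawkes-type intensity used here is only an example, not a model from the paper.

```python
import numpy as np

def intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Example conditional intensity: exponential Hawkes kernel (illustrative only)."""
    past = np.array([ti for ti in events if ti < t])
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def log_likelihood(events, T, intensity_fn, n_grid=2000):
    """Equation 2, temporal case: sum of log-intensities minus the integral of the intensity."""
    log_sum = sum(np.log(intensity_fn(ti, events)) for ti in events)
    grid = np.linspace(0.0, T, n_grid)
    lam = np.array([intensity_fn(t, events) for t in grid])
    integral = np.trapz(lam, grid)
    return log_sum - integral

events = [0.5, 1.2, 1.3, 2.7]
print(log_likelihood(events, T=5.0, intensity_fn=intensity))
```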
Attention-based models are introduced in Zuo et al. (2020); Zhang et al. (2020) to overcome the inability of RNNs to capture sophisticated event dependencies due to their recurrent structures. Hawkes process (Hawkes, 1971) is a well-known generalized point process model. Assuming the influences from past events are linearly additive, the conditional intensity function takes the form of λ(t, s) = µ+ ∑ (t′,s′)∈Ht k(t′, t, s′, s), (3) where k is an influence kernel function that captures event interactions. Commonly the kernel function is assumed to be stationary, that is, k only depends on t − t′ and s − s′, which limits the model expressivity. In this work, we aim to capture complicated non-stationarity in spatio-temporal event dependencies by leveraging the strong approximation power of neural networks in kernel fitting. 3 LOW-RANK DEEP NON-STATIONARY KERNEL Due to the intricate dependencies between events, it is challenging to choose the form of kernel function that achieves great model expressiveness while enjoying high model efficiency. In this section, we introduce a unified model with a low-rank deep non-stationary kernel to capture the complex heterogeneity in events’ influence over spatio-temporal space. 3.1 KERNEL WITH HISTORY AND SPATIO-TEMPORAL DISPLACEMENT For the influence kernel function k(t′, t, s′, s), by using the displacements in t and s as variables, we first re-parameterize the kernel as k(t′, t−t′, s′, s−s′), where the minus in s−s′ refers to element-wise difference between s and s′ when dS > 1. Then we achieve a finite-rank decomposed representation based on (truncated) singular value decomposition (SVD) for kernel functions (Mollenhauer et al., 2020) (which can be understood as the kernel version of matrix SVD, where the eigendecomposition is based on Mercer’s Theorem (Mercer, 1909)), and that the decomposed spatial (and temporal) kernel functions can be approximated under shared basis functions (cf. Assumption A.2). The resulting approximate finite-rank representation is written as (details are in Appendix A.1) k(t′, t− t′, s′, s− s′) = R∑ r=1 L∑ l=1 αlrψl(t ′)φl(t− t′)ur(s′)vr(s− s′). (4) Here {ψl, φl : [0, T ]→ R, l = 1, . . . , L} are two sets of temporal basis functions that characterize the temporal influence of event at t′ and the decaying effect brought by elapsed time t− t′. Similarly, spatial basis functions {ur, vr : S → R, r = 1, . . . , R} capture the spatial influence of event at s′ and the decayed influence after spreading over the displacement of s − s′. The corresponding weight αlr at different spatio-temporal ranks combines each set of basis functions into a weighted summation, leading to the final expression of influence kernel k. To further enhance the model expressiveness, we use a fully-connected neural network to represent each basis function. The history or displacement is taken as the input and fed through multiple hidden layers equipped with Softplus non-linear activation function. To allow for inhibiting influence from past events (negative value of influence kernel k), we use a linear output layer for each neural network. For an influence kernel with temporal rank L and spatial rank R, we need 2(L + R) independent neural networks for modeling. The benefits of our proposed kernel framework lies in the following: (i) The kernel parameterization with displacement significantly reduces the rank needed when representing the complicated kernel encountered in practice as shown in Figure 1. 
(ii) The non-stationarity of original influence of historical events over spatio-temporal space can be conveniently captured by in-homogeneous {ψl}Ll=1, {ur}Rr=1, making the model applicable in modeling general STPPs. (iii) The propagating patterns of influence are characterized by {φl}Ll=1, {vr}Rr=1 which go beyond simple parametric forms. In particular, when the events’ influence has finite range, i.e. there exist τmax and amax such that the influence decays to zero if |t− t′| > τmax or ||s− s′|| > amax, we can restrict the parameterization of {φl}Ll=1 and {vr}Rr=1 on a local domain [0, τmax] × B(0, amax) instead of [0, T ] × S, which further reduce the model complexity. Details of choosing kernel and neural network architectures are described in Appendix C. Remark 1 (the class of influence kernel expressed). The proposed deep kernel representation covers a large class of non-stationary kernels generally used in STPPs. In particular, the proposed form of the kernel does not need to be positive semi-definite or even symmetric (Reinhart, 2018). The low-rank decomposed formulation equation 4 is of SVD-type (cf. Appendix A.1). While each φl (and vr) can be viewed as stationary (i.e., shift-invariant), the combination with left modes in the summation enables to model spatio-temporal non-stationarity. The technical assumptions A.1 and A.2 do not require more than the existence of a low-rank decomposition motivated by kernel SVD. As long as the 2(R+ L) many functions ψl, φl, and ur, vr are sufficiently regular, they can be approximated and learned by a neural network. The universal approximation power of neural networks enables our framework to express a broad range of general kernel functions, and the low-rank decomposed form reduces the modeling of a spatio-temporal kernel to finite many functions on time and space domains (the right modes are on truncated domains), respectively. 4 EFFICIENT COMPUTATION OF MODEL We consider model optimization through Maximum likelihood estimation (MLE) (Reinhart, 2018). The resulting conditional intensity function could now be negative by allowing inhibiting historical influence. A common approach to guarantee the non-negativity is to adopt a nonlinear positive activation function in the conditional intensity (Du et al., 2016; Zhu et al., 2022). However, the integral of such a nonlinear intensity over spatio-temporal space is computationally expensive. To tackle this, we first introduce a log-barrier to the MLE optimization problem to guarantee the non-negativity of conditional intensity function λ and maintain its linearity. Then we provide a computationally efficient strategy that benefits from the linearity of the conditional intensity. The extension of the approach to point process data with marks is given in Appendix B. 4.1 MODEL OPTIMIZATION WITH LOG-BARRIER We re-denote ℓ(H) in equation 2 by ℓ(θ) in terms of model parameter θ. The constrained MLE optimization problem for model parameter estimation can be formulated as: min θ −ℓ(θ), s.t.− λ(t, s) ≤ 0, ∀t ∈ [0, T ],∀s ∈ S. Introduce a log-barrier method (Boyd et al., 2004) to ensure the non-negativity of λ, and penalize the values of λ on a dense enough grid Ubar,t × Ubar,s ⊂ [0, T ]× S . The log-barrier is defined as p(θ, b) := − 1 |Ubar,t × Ubar,s| |Ubar,t|∑ ct=1 |Ubar,s|∑ cs=1 log(λ(tct , scs)− b), (5) where ct, cs indicate the index of the gird, and b is a lower bound of conditional intensity function on the grid to guarantee the feasibility of logarithm operation. 
The MLE optimization problem can be written as min θ L(θ) := −ℓ(θ) + 1 w p(θ, b) = − ( n∑ i=1 log λ(ti, si)− ∫ T 0 ∫ S λ(t, s)dsdt ) − 1 w|Ubar,t × Ubar,s| |Ubar,t|∑ ct=1 |Ubar,s|∑ cs=1 log(λ(tct , scs)− b), (6) where w is a weight to control the trade-off between log-likelihood and log-barrier; w and b can be set accordingly during the learning procedure. Details can be found in Appendix A.2. Note that previous works (Du et al., 2016; Mei and Eisner, 2017; Pan et al., 2021; Zuo et al., 2020; Zhu et al., 2022) use a scaled positive transformation to guarantee non-negativity conditional intensity function. Compared with them, the log-barrier method preserves the linearity of the conditional intensity function. As shown in Table 1, such a log-barrier method enables efficient model computation (See more details in Section 4.2) and enhance the model recovery power. 4.2 MODEL COMPUTATION The log-likelihood computation of general STPPs (especially those with general influence function) is often difficult and requires numerical integral and thus time-consuming. Given a sequence of events {xi = (ti, si)}ni=1 of number n, the complexity of neural network evaluation is of O(n2) for the term of log-summation and of O(Kn) (K ≫ n) when using numerical integration for the double integral term with K sampled points in a multi-dimensional space. In the following, we circumvent the calculation difficulty by proposing an efficient computation for L(θ) with complexity O(n) of neural network evaluation through a domain discretization strategy. Computation of log-summation. The first log-summation term in equation 2 can be written as: n∑ i=1 log λ(ti, si) = n∑ i=1 log µ+ ∑ tj<ti R∑ r=1 L∑ l=1 αlrψl(tj)φl(ti − tj)ur(sj)vr(si − sj) . (7) Note that each ψl only needs to be evaluated at event time {ti}ni=1 and each ur is evaluated at all the event location {si}ni=1. To avoid the redundant evaluations of φl over every pair of (ti, tj), we set up a uniform grid Ut over time horizon [0, τmax] and evaluate φl on the grid. The value of φl(tj − ti) can be obtained by linear interpolation with values on two adjacent grid points of tj − ti. By doing so, we only need to evaluate φl for |Ut| times on the grids. Note that φl can be simply feed with 0 when tj − ti > τmax without any neural network evaluation. Here we directly evaluate vr(si − sj) since numerical interpolation is less accurate in location space. Note that one does not need to evaluate every pair of index (i, j). Instead, we have I := {(i, j) | vr(si − sj) will be computed} = {(i, j) | tj < ti ≤ tj + τmax} ∩ {(i, j) | ∥si − sj∥ ≤ amax}. We use 0 to other pairs of (i, j). Computation of integral. A benefit of our approach is that we avoid numerical integration for the conditional intensity function (needed to evaluate the likelihood function), since the design of the kernel allows us to decompose the desired integral to integrating basis functions. Specifically, we have ∫ T 0 ∫ S λ(t, s)dsdt = µ|S|T + n∑ i=1 ∫ T 0 ∫ S I(ti < t)k(ti, t, si, s)dsdt = µ|S|T + n∑ i=1 R∑ r=1 ur(si) ∫ S vr(s− si)ds L∑ l=1 αrlψl(ti) ∫ T−ti 0 φl(t)dt. (8) To compute the integral of φl, we take the advantage of the pre-computed φl on the grid Ut. Let Fl(t) := ∫ t 0 φl(τ)dτ . Then Fl(T − ti) can be computed by the linear interpolation of values of Fl at two adjacent grid points (in Ut) of T − ti. In particular, Fl evaluated on Ut equals to the cumulative sum of φl divided by the grid width. 
The integral of vr can be estimated based on a grid Us in B(0, amax) ⊂ RdS since it decays outside the ball. For each si, ∫ S vr(s − si)ds = ∫ B(0,amax)∩{S−si} vr(s)ds, where S − si := {s ′ | s′ = s− si, s ∈ S}. Thus the integral is well estimated with the evaluations of vr on grid set Us ∩ S − si. Note that in practice we only evaluate vr on Us once and use subsets of the evaluations for different si. More details about grid-based computation can be found in Appendix A.3. Computation of log-barrier. The barrier term p(θ, b) is calculated in a similar way as equation 7 by replacing (ti, si, µ) with (tct , scs , µ− b), i.e. we use interpolation to calculate φl(tct − tj) and evaluate vr on a subset of {(scs , sj)}, cs = 1, . . . , |Ubar,s|, j = 1, . . . , n. 4.3 COMPUTATIONAL COMPLEXITY The evaluation of {ur}Rr=1 and {ψl}Ll=1 over n events costs O((R+ L)n) complexity. The evaluation of {φl}Ll=1 is of O(L|Ut|) complexity since it relies on the grid Ut. The evaluation of {vr}Rr=1 costs no more thanO(RCτmaxn)+O(R|Us|) complexity. We note that L,R, τmax, |Ut|, |Us| are all constant that much less than event number n, thus the overall computation complexity will beO(n). We compare the model training time per epoch for a baseline equipped with a softplus activation function (NSMPP) and our model with log-barrier method (DNSK+Barrier) on a 1D synthetic data set and a 3D synthetic data set. The quantitative results in Table 1 demonstrates the efficiency improvement of our model by using log-barrier technique. More details about the computation complexity analysis can be found in Appendix A.4. 5 EXPERIMENT We use large-scale synthetic and real data sets to demonstrate the superior performance of our model and present the results in this section. Experimental details and results can be found in Appendix C. Codes will be released upon publication. Baselines. We compare our method (DNSK+Barrier) with: (i) RMTPP (RMTPP) (Du et al., 2016); (ii) Neural Hawkes (NH) (Mei and Eisner, 2017); (iii) Transformer Hawkes process (THP) (Zuo et al., 2020); (iv) Parametric Hawkes process (PHP+exp) with exponentially decaying spatiotemporal kernel; (v) Neual spectral marked point processes (NSMPP) (Zhu et al., 2022); (vi) DNSK without log-barrier but with a non-negative Softplus activation function (DNSK+Softplus). We note that RMTPP, NH and THP directly model conditional intensity function using neural networks while others learn the influence kernel in the framework of equation 3. In particular, NSMPP designs the kernel based on singular value decomposition but parameterizes it without displacement. The model parameters are estimated using the training data via Adam optimization method (Kingma and Ba, 2014). Details of training can be found in Appendix A.2 and C. 5.1 SYNTHETIC DATA EXPERIMENTS Synthetic data sets. To show the effectiveness of DNSK+Barrier, we conduct all the models on three temporal data sets and three spatio-temporal data sets generated by following true kernels: (i) 1D exponential kernel (ii) 1D non-stationary kernel; (iii) 1D infinite rank kernel; (iv) 2D exponential kernel; (v) 3D non-stationary inhibition kernel; (vi) 3D non-stationary mixture kernel. Data sets are generated using thinning algorithm in Daley and Vere-Jones (2008). Each data set is composed of 2000 sequences. Details of kernel formulas and data generation can be found in Appendix C. We consider two performance metrics for testing data evaluation: Mean relative error (MRE) of the predicted intensity and log-likelihood. 
We consider two performance metrics for evaluation on the testing data: the mean relative error (MRE) of the predicted intensity and the log-likelihood. The true and predicted intensities λ*(x) and λ̂(x) can be calculated via equation 3, using the true and the learned kernel (equation 4), respectively. The MRE for one test trajectory is defined as $\int_{\mathcal{X}} |\lambda^*(x) - \hat{\lambda}(x)| / \lambda^*(x)\, dx$, and the MRE averaged over all test trajectories is reported. The log-likelihood of each testing sequence can be computed according to equation 2, and the average predictive log-likelihood per event is reported. The log-likelihood shows the model's goodness-of-fit, and the intensity evaluation further reflects the model's ability to recover the underlying mechanism of event occurrence and to predict the future.

The heat maps in Figure 2 visualize the results of non-stationary kernel recovery for DNSK+Barrier and NSMPP on 1D Data sets 2 and 3 (the true kernel used in 1D Data set 3 is the one in Figure 1). DNSK+Barrier recovers the true kernel more accurately than NSMPP, indicating the strong representation power of the low-rank kernel parameterization with displacements. The line charts in Figure 2 present the recovered intensities together with the true ones (dark grey curves), demonstrating that our method can accurately capture the temporal dynamics of events. In particular, the average conditional intensity λ over multiple testing sequences shows the model's ability to recover data non-stationarity over time. While DNSK+Barrier successfully captures the non-stationarity in the data, both RMTPP and NH fail to do so, showing a flat curve of the averaged intensity. Note that THP with positional encoding recovers the data non-stationarity (as shown in the two figures in the last column). However, our method still outperforms THP, which suffers from limited model expressiveness when complicated propagation of event influence is involved (see the two figures in the penultimate column).

Table 2 summarizes the quantitative results of testing log-likelihood and MRE. It shows that DNSK+Barrier has superior predictive performance over the baselines in characterizing the dynamics of data generation in spatio-temporal space. Specifically, even though the model is evidently over-parameterized for 1D Data set 1, which is generated by a stationary exponentially decaying kernel, it can still approximate the kernel and recover the true conditional intensity without overfitting, which shows the adaptiveness of our model. Moreover, DNSK+Barrier enjoys an outstanding performance gain when learning a diverse variety of complicated non-stationary kernels. The comparison between DNSK+Softplus and DNSK+Barrier shows that the model with the log-barrier achieves better recovery performance by maintaining the linearity of the conditional intensity. THP outperforms RMTPP in the non-stationary cases but is still limited due to its pre-assumed parametric form of influence propagation. More results on kernel and intensity recovery can be found in Appendix C.
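For reference, a minimal sketch of how the two metrics reported above can be computed, assuming both intensities have already been evaluated on a common grid; all numbers below are toy values, not results from the paper.

```python
import numpy as np

def mean_relative_error(lam_true, lam_pred, cell_volume):
    """MRE for one test trajectory: int_X |lam*(x) - lam_hat(x)| / lam*(x) dx, by a Riemann sum."""
    return np.sum(np.abs(lam_true - lam_pred) / lam_true) * cell_volume

def avg_loglik_per_event(log_lam_at_events, integral_of_lam):
    """Average predictive log-likelihood per event for one sequence (equation 2 divided by n)."""
    return (np.sum(log_lam_at_events) - integral_of_lam) / len(log_lam_at_events)

# toy usage on a 1D grid
grid = np.linspace(0.0, 10.0, 1001)
dx = grid[1] - grid[0]
lam_true = 1.0 + 0.5 * np.sin(grid)
lam_pred = 1.0 + 0.45 * np.sin(grid)
print(mean_relative_error(lam_true, lam_pred, dx))
```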
5.2 REAL DATA RESULTS

Real data sets. We provide a comprehensive evaluation of our approach on several real-world data sets. We first use two popular data sets containing time-stamped events with categorical marks to demonstrate the robustness of DNSK+Barrier for marked STPPs (refer to Appendix B for the detailed definition and kernel modeling): (i) Financial Transactions (Du et al., 2016). This data set contains one day of transaction records of a stock, with time in milliseconds and the action (mark) of each transaction. We partition the events into different sequences by time stamps. (ii) StackOverflow (Leskovec and Krevl, 2014). The data are collected from the website StackOverflow over two years and contain reward records for users who promote engagement in the community. Each user's reward history is treated as a sequence.

Next, we demonstrate the practical versatility of the model using the following spatio-temporal data sets: (i) Southern California earthquake data, provided by the Southern California Earthquake Data Center (SCEDC), contains time and location information of earthquakes in Southern California. We collect 19,414 records from 1999 to 2019 with magnitude larger than 2.5 and partition the data into multiple sequences by month, with an average length of 40.2. (ii) Atlanta robbery & burglary data. The Atlanta Police Department (APD) provides a proprietary data source for city crime. We extract 3420 reported robberies and 14958 burglaries with time and location from 2013 to 2019. The two crime types are preprocessed as separate data sets on a 10-day basis, with average sequence lengths of 13.7 and 58.7. Finally, the model's ability to handle high-dimensional marks is evaluated with the Atlanta textual crime data. This proprietary data set provided by APD records 4644 crime incidents from 2016 to 2017 with time, location, and comprehensive text descriptions. The text information is preprocessed by the TF-IDF technique, leading to a 5012-dimensional mark for each event.

Table 3 summarizes the results of the models dealing with categorical marks. Event time and type prediction are evaluated by root mean square error (RMSE) and accuracy, respectively. We can see that DNSK+Barrier outperforms the baselines in all prediction tasks, providing a lower time RMSE and higher type accuracy. For the real-world spatio-temporal data, we report the average predictive log-likelihood per event on the testing set, since MRE is not applicable. Besides, we perform online prediction for the earthquake data to demonstrate the model's predictive ability. The probability density function f(t, s), which represents the conditional probability that the next event will occur at (t, s) given history H_t, can be written as

$$
f(t, s) = \lambda(t, s) \exp\!\left( -\int_{t_n}^{t} \!\int_{\mathcal{S}} \lambda(\tau, \nu)\, d\nu\, d\tau \right).
$$

The predicted time and location of the next event can be computed as

$$
\mathbb{E}\left[t_{n+1} \mid \mathcal{H}_t\right] = \int_{t_n}^{\infty} t \int_{\mathcal{S}} f(t, s)\, ds\, dt, \qquad \mathbb{E}\left[s_{n+1} \mid \mathcal{H}_t\right] = \int_{\mathcal{S}} s \int_{t_n}^{\infty} f(t, s)\, dt\, ds.
$$

We predict the time and location of the last event in each sequence, and the mean absolute error (MAE) of the predictions is computed. The quantitative results in Table 4 show that DNSK+Barrier provides more accurate predictions than the alternatives, with a higher event log-likelihood.
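As a sketch of how these prediction integrals can be evaluated numerically, the snippet below computes E[t_{n+1} | H_t] for a purely temporal conditional intensity by truncating the integral at a finite horizon; the constant toy intensity and the horizon are assumptions for illustration (the spatio-temporal case adds an integral over S in the same way).

```python
import numpy as np

def predict_next_time(lam, t_n, horizon=50.0, n_grid=5000):
    """E[t_{n+1} | H_t] = int_{t_n}^inf t * f(t) dt with f(t) = lam(t) exp(-int_{t_n}^t lam),
    truncated at t_n + horizon and evaluated with simple quadrature."""
    ts = np.linspace(t_n, t_n + horizon, n_grid)
    lam_vals = lam(ts)
    dt = ts[1] - ts[0]
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (lam_vals[1:] + lam_vals[:-1]) * dt)])
    f = lam_vals * np.exp(-cum)            # conditional density of the next event time
    return np.sum(ts * f) * dt

# toy check: a constant intensity of 2 gives an expected waiting time of 1/2
print(predict_next_time(lambda t: 2.0 * np.ones_like(t), t_n=3.0))  # approximately 3.5
```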
To demonstrate our model's interpretability and its power to capture heterogeneous data characteristics, we visualize the learned influence kernels and the predicted conditional intensity for the two crime categories in Figure 3. The first column shows kernel evaluations at a fixed geolocation in downtown Atlanta, which intuitively reflect the spatial influence of crimes in that neighborhood. The influence of a robbery in the downtown area is more intense but regional, while a burglary, which is harder for police to respond to in time, impacts a larger neighborhood along the major highways of Atlanta. We also provide the predicted conditional intensity over space for the two crime types. As we can observe, DNSK+Barrier captures the occurrence of events in regions with a higher crime rate, and crimes of the same category happening in different regions influence their neighborhoods differently. We note that this example emphasizes the ability of the proposed method to recover data non-stationarity with different sequence lengths, and to improve on the limited model interpretability of other neural network-based methods (RMTPP, NH, and THP) in practice.

For the Atlanta textual crime data, we borrow the idea in Zhu and Xie (2022) and encode the highly sparse TF-IDF representation into a binary mark vector of dimension d = 50 using a Restricted Boltzmann Machine (RBM) (Fischer and Igel, 2012). The average testing log-likelihoods per event for each model are reported in Table 4. The results show that DNSK+Barrier outperforms PHP+exp in Zhu and Xie (2022) and NSMPP by achieving a higher testing log-likelihood. We visualize the basis functions of the influence kernel learned by DNSK+Barrier in Figure A.4 in the Appendix.

6 CONCLUSION

We propose a deep non-stationary kernel for spatio-temporal point processes using a low-rank parameterization based on displacement, which enables the model to be further low-rank when learning complicated influence kernels and significantly reduces the model complexity. The non-negativity of the intensity is guaranteed by a log-barrier method that maintains the linearity of the conditional intensity function. Based on that, we propose a computationally efficient strategy for model estimation. The superior performance of our model is demonstrated using synthetic and real data sets.

ACKNOWLEDGEMENT

The work is partially supported by NSF DMS-2134037. Z.D. and Y.X. are partially supported by an NSF CAREER CCF-1650913, and NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF and the Alfred P. Sloan Foundation.

A ADDITIONAL METHODOLOGY DETAILS

A.1 DERIVATION OF EQUATION 4

We denote τ := t − t′ and ν := s − s′, with variables t′ ∈ [0, T], τ ∈ [0, τ_max], s′ ∈ S, and ν ∈ B(0, a_max), where the sets S, B(0, a_max) ⊂ R². Viewing the temporal and spatial variables, i.e., (t′, τ) and (s′, ν), as left and right mode variables, respectively, the kernel-function SVD (Mollenhauer et al., 2020; Mercer, 1909) of k gives

$$
k(t', \tau, s', \nu) = \sum_{k=1}^{\infty} \sigma_k\, g_k(t', \tau)\, h_k(s', \nu). \tag{A.1}
$$

We assume that the SVD can be truncated at k ≤ K with a residual of ε for some small ε > 0, and this holds as long as the singular values σ_k decay sufficiently fast. To fulfill the approximate finite-rank representation, it suffices to have scalars σ_k and functions g_k and h_k such that the expansion approximates the kernel k, even if they are not the SVD of the kernel. This leads to the following assumption:

Assumption A.1. There exist coefficients σ_k and functions g_k(t′, τ), h_k(s′, ν) such that

$$
k(t', \tau, s', \nu) = \sum_{k=1}^{K} \sigma_k\, g_k(t', \tau)\, h_k(s', \nu) + O(\varepsilon). \tag{A.2}
$$

To proceed, one can apply the kernel SVD again to g_k and h_k respectively, and obtain left and right singular functions that potentially differ across k. Here, we impose that across k = 1, ..., K, the singular functions of g_k are the same (as shown below, being approximately the same suffices) set of basis functions, that is,

$$
g_k(t', \tau) = \sum_{l=1}^{\infty} \beta_{k,l}\, \psi_l(t')\, \varphi_l(\tau).
$$

As we will truncate l to a finite rank again (up to an O(ε) residual), we require the (approximately) shared singular modes only up to L. Similarly as above, technically it suffices to have a finite-rank expansion to achieve the O(ε) error without requiring them to be the SVD, which leads to the following assumption, where we assume the same condition for h_k:

Assumption A.2.
For the g_k and h_k in equation A.2, up to an O(ε) error:

(i) The K temporal kernel functions g_k(t′, τ) can be approximated under a same set of left and right basis functions, i.e., there exist coefficients β_{kl} and functions ψ_l(t′), φ_l(τ) for l = 1, ..., L, such that

$$
g_k(t', \tau) = \sum_{l=1}^{L} \beta_{kl}\, \psi_l(t')\, \varphi_l(\tau) + O(\varepsilon), \quad k = 1, \cdots, K. \tag{A.3}
$$

(ii) The K spatial kernel functions h_k(s′, ν) can be approximated under a same set of left and right basis functions, i.e., there exist coefficients γ_{kr} and functions u_r(s′), v_r(ν) for r = 1, ..., R, such that

$$
h_k(s', \nu) = \sum_{r=1}^{R} \gamma_{kr}\, u_r(s')\, v_r(\nu) + O(\varepsilon), \quad k = 1, \cdots, K. \tag{A.4}
$$

Inserting equation A.3 and equation A.4 into equation A.2 gives the rank-truncated representation of the kernel function. Since K, L, R are fixed numbers, assuming boundedness of all the coefficients and functions, we have the representation with a final residual of O(ε), namely,

$$
k(t', \tau, s', \nu) = \sum_{l=1}^{L} \sum_{r=1}^{R} \sum_{k=1}^{K} \sigma_k \beta_{kl} \gamma_{kr}\, \psi_l(t')\, \varphi_l(\tau)\, u_r(s')\, v_r(\nu) + O(\varepsilon).
$$

Defining $\alpha_{lr} := \sum_{k=1}^{K} \sigma_k \beta_{kl} \gamma_{kr}$ leads to equation 4.

A.2 ALGORITHMS

Algorithm 1 Model parameter estimation
Input: Training set X, batch size M, epoch number E, learning rate γ, constant a > 1 to update the barrier weight w in equation 6.
Initialization: model parameter θ_0, first epoch e = 0, w = w_0.
while e < E do
  for each batch of size M do
    1. For a 1D temporal point process, compute ℓ(θ) and {λ(t_{c_t})}, c_t = 1, ..., |U_bar,t|. For a spatio-temporal point process, compute ℓ(θ) and {λ(t_{c_t}, s_{c_s})}, c_t = 1, ..., |U_bar,t|, c_s = 1, ..., |U_bar,s|.
    2. Set b = min{λ(t_{c_t})} − ϵ (or b = min{λ(t_{c_t}, s_{c_s})} − ϵ), where ϵ is a small value that guarantees the feasibility of the logarithm.
    3. Compute L(θ) = −ℓ(θ) + (1/w) p(θ, b).
    4. Update θ_{e+1} ← θ_e − γ ∂L/∂θ_e.
    5. e ← e + 1, w ← w · a.
  end for
end while

Algorithm 2 Synthetic data generation
Input: Model λ(·), T, S, upper bound of the conditional intensity λ̄.
Initialization: H_T = ∅, t = 0, n = 0.
while t < T do
  1. Sample u ∼ Unif(0, 1).
  2. t ← t − ln u / λ̄.
  3. Sample s ∼ Unif(S), D ∼ Unif(0, 1).
  4. λ = λ(t, s | H_T).
  if D λ̄ ≤ λ then
    n ← n + 1; t_n = t, s_n = s.
    H_T ← H_T ∪ {(t_n, s_n)}.
  end if
end while
if t_n ≥ T then return H_T − {(t_n, s_n)} else return H_T end if

A.3 GRID-BASED MODEL COMPUTATION

In this section, we elaborate on the details of the grid-based efficient model computation. In Figure A.1, we visualize the procedures for computing the integrals $\int_0^{T-t_i} \varphi_l(t)\, dt$ and $\int_{\mathcal{S}} v_r(s - s_i)\, ds$ in equation 8, respectively. Panel (a) illustrates the calculation of $\int_0^{T-t_i} \varphi_l(t)\, dt$. As explained in Section 4.2, the evaluations of φ_l happen only on the grid U_t over [0, τ_max] (since φ_l(t) = 0 when t > τ_max). The value of F_l(t) = ∫_0^t φ_l(τ)dτ on the grid can be obtained through numerical integration. Then, given t_i, the value of F_l(T − t_i) = ∫_0^{T−t_i} φ_l(t)dt is calculated using linear interpolation of F_l at the two grid points adjacent to T − t_i. Panel (b) shows the computation of $\int_{\mathcal{S}} v_r(s - s_i)\, ds$. Given s_i, $\int_{\mathcal{S}} v_r(s - s_i)\, ds = \int_{B(0, a_{\max}) \cap (\mathcal{S} - s_i)} v_r(s)\, ds$ since v_r(s) = 0 when ∥s∥ > a_max. Then B(0, a_max) is discretized into the grid U_s, and $\int_{\mathcal{S}} v_r(s - s_i)\, ds$ can be calculated based on the values of v_r at the grid points in U_s ∩ (S − s_i) (the deep red dots in Figure A.1(b)) using numerical integration.
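A minimal numerical sketch of the Panel (b) computation is given below: v_r is evaluated once on a grid U_s over B(0, a_max), and for each s_i only the grid points falling inside S − s_i are summed. The bump function `v_example`, the unit-square domain S, and the grid resolution are illustrative assumptions; in the model, v_r is a neural network.

```python
import numpy as np

# Hypothetical stand-in for the neural spatial basis function v_r on R^2.
def v_example(nu):                                 # nu has shape (..., 2)
    return np.exp(-np.sum(nu**2, axis=-1) / (2 * 0.2**2)) / (2 * np.pi * 0.2**2)

a_max = 1.0
S_low, S_high = np.array([0.0, 0.0]), np.array([1.0, 1.0])     # assume S = [0, 1]^2

# Grid U_s over the ball B(0, a_max): a square grid restricted to the ball.
m = 41
axis = np.linspace(-a_max, a_max, m)
U_s = np.stack(np.meshgrid(axis, axis, indexing="ij"), axis=-1).reshape(-1, 2)
U_s = U_s[np.linalg.norm(U_s, axis=1) <= a_max]
cell_area = (2 * a_max / (m - 1)) ** 2
v_on_grid = v_example(U_s)                         # evaluated once, reused for every s_i

def integral_v(s_i):
    """int_S v_r(s - s_i) ds = int_{B(0, a_max) ∩ (S - s_i)} v_r(nu) dnu, by a Riemann sum on U_s."""
    shifted = U_s + s_i                            # grid point nu corresponds to s = nu + s_i in S
    inside_S = np.all((shifted >= S_low) & (shifted <= S_high), axis=1)
    return np.sum(v_on_grid[inside_S]) * cell_area

print(integral_v(np.array([0.5, 0.5])))    # close to 1: the bump lies well inside S
print(integral_v(np.array([0.02, 0.5])))   # smaller: part of the mass falls outside S
```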
To evaluate the sensitivity of our model to the chosen grids, we compare the performance of DNSK+Barrier on 3D Data set 2 using grids of different resolutions. The quantitative results on testing log-likelihood and intensity prediction error are reported in Table A.1. We use |U_t| = 50, |U_s| = 1500 for the experiments in the main paper. As we can see, the model shows similar performance when a higher grid resolution is used, and it works slightly less accurately, but still better than the other baselines, with a smaller number of grid points. This reveals that our choice of grid resolution is accurate enough to capture the complex dynamics of event occurrences for this non-stationary data, and that the model performance is robust to different grid resolutions. In practice, the grids can be chosen flexibly to strike a balance between model accuracy and computational efficiency. For instance, the number of uniformly distributed grid points along one dimension can be chosen to be of order n_0, where n_0 is the average number of events in one observed sequence. Note that |U_t| or |U_s| is far smaller than the total number of observed events, because we use thousands of sequences (2000 in our synthetic experiments) for model learning. The grid size can be even smaller when it comes to non-Lebesgue-measured spaces.

A.4 DETAILS OF COMPUTATIONAL COMPLEXITY

We provide a detailed analysis of the O(n) computational complexity of L(θ) stated in Section 4.3, as follows:

• Computation of log-summation. The evaluation of {u_r}_{r=1}^R and {ψ_l}_{l=1}^L over n events costs O((R + L)n). The evaluation of {φ_l}_{l=1}^L is of O(L|U_t|) complexity since it relies on the grid U_t. Under the assumption that the conditional intensity is bounded by a constant C over a finite time horizon (Lewis and Shedler, 1979; Daley et al., 2003; Zhu et al., 2022), for each fixed j, the cardinality of the set {(i, j) | t_j < t_i ≤ t_j + τ_max} is less than Cτ_max, which leads to an O(RCτ_max n) complexity for the evaluation of {v_r}_{r=1}^R.

• Computation of integral. The integration of {φ_l}_{l=1}^L relies only on numerical operations on {φ_l}_{l=1}^L over the grid U_t, without extra neural network evaluations. The integration of {v_r}_{r=1}^R depends on the evaluations on the grid U_s, of O(R|U_s|) complexity.

• Computation of barrier. {φ_l}_{l=1}^L on the grid U_bar,t is estimated by numerical interpolation of the previously computed {φ_l}_{l=1}^L on the grid U_t. The additional neural network evaluations of {v_r}_{r=1}^R cost no more than O(RCτ_max n).

B DEEP NON-STATIONARY KERNEL FOR MARKED STPPS

In marked STPPs (Reinhart, 2018), each observed event is associated with additional information describing the event attribute, denoted as m ∈ M ⊂ R^{d_M}. Let H = {(t_i, s_i, m_i)}_{i=1}^n denote the event sequence. Given the observed history H_t = {(t_i, s_i, m_i) ∈ H | t_i < t}, the conditional intensity function of a marked STPP is similarly defined as

$$
\lambda(t, s, m) = \lim_{\Delta t \downarrow 0,\, \Delta s \downarrow 0,\, \Delta m \downarrow 0} \frac{\mathbb{E}\left[ N([t, t + \Delta t] \times B(s, \Delta s) \times B(m, \Delta m)) \mid \mathcal{H}_t \right]}{|B(s, \Delta s)|\, |B(m, \Delta m)|\, \Delta t},
$$

where B(m, Δm) is a ball centered at m ∈ R^{d_M} with radius Δm. The log-likelihood of observing H on [0, T] × S × M is given by

$$
\ell(\mathcal{H}) = \sum_{i=1}^{n} \log \lambda(t_i, s_i, m_i) - \int_0^T \!\!\int_{\mathcal{S}} \!\int_{\mathcal{M}} \lambda(t, s, m)\, dm\, ds\, dt.
$$

B.1 KERNEL INCORPORATING MARKS

One of the salient features of our spatio-temporal kernel framework is that it can be conveniently adopted for modeling marked STPPs with additional sets of mark basis functions {g_q, h_q}_{q=1}^Q. We modify the influence kernel function k accordingly as follows:

$$
k(t', t - t', s', s - s', m', m) = \sum_{q=1}^{Q} \sum_{r=1}^{R} \sum_{l=1}^{L} \alpha_{lrq}\, \psi_l(t')\, \varphi_l(t - t')\, u_r(s')\, v_r(s - s')\, g_q(m')\, h_q(m).
$$

Here m′, m ∈ M ⊂ R^{d_M}, and {g_q, h_q : M → R, q = 1, ..., Q}, represented by independent neural networks, model the influence of the historical mark m′ and the current mark m, respectively. Since the mark space M is always categorical and the difference between m′ and m is of little practical meaning, we use g_q and h_q to model m′ and m separately instead of modeling m − m′.
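A minimal sketch of how such a marked kernel is assembled from the basis functions is given below; the closed-form basis functions, the ranks, and the weights α_{lrq} are illustrative stand-ins for the neural networks and learned parameters.

```python
import numpy as np

L, R, Q = 2, 2, 2

# Hypothetical stand-in basis functions (small neural networks in the actual model).
psi = [lambda t: np.ones_like(t),              lambda t: np.cos(0.2 * t)]
phi = [lambda d: np.exp(-d),                   lambda d: np.exp(-2.0 * d)]
u   = [lambda s: np.ones(s.shape[:-1]),        lambda s: s[..., 0]]
v   = [lambda d: np.exp(-np.sum(d**2, -1)),    lambda d: np.exp(-4.0 * np.sum(d**2, -1))]
g   = [lambda m: np.ones(m.shape[:-1]),        lambda m: m[..., 0]]
h   = [lambda m: np.ones(m.shape[:-1]),        lambda m: m[..., 1]]
alpha = np.full((L, R, Q), 0.1)                # weights alpha_{lrq}

def marked_kernel(tp, dt, sp, ds, mp, m):
    """k(t', t - t', s', s - s', m', m) as the triple low-rank sum over (l, r, q)."""
    total = 0.0
    for l in range(L):
        for r in range(R):
            for q in range(Q):
                total += (alpha[l, r, q] * psi[l](tp) * phi[l](dt) *
                          u[r](sp) * v[r](ds) * g[q](mp) * h[q](m))
    return total

print(marked_kernel(1.0, 0.5,
                    np.array([0.3, 0.3]), np.array([0.1, -0.1]),
                    np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```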
B.2 LOG-BARRIER AND MODEL COMPUTATION

The conditional intensity of a marked spatio-temporal point process at (t, s, m) can be written as

$$
\lambda(t, s, m) = \mu + \sum_{l, r, q} \alpha_{lrq} \sum_{(t_i, s_i, m_i) \in \mathcal{H}_t} \psi_l(t_i)\, \varphi_l(t - t_i)\, u_r(s_i)\, v_r(s - s_i)\, g_q(m_i)\, h_q(m).
$$

We need to guarantee the non-negativity of λ over the space [0, T] × S × M. When the total number of unique categorical marks in M is small, the log-barrier can be conveniently computed as the summation of λ on the grid U_bar,t × U_bar,s × M. In the following, we focus on the case where M is high-dimensional with O(n) unique marks. For model simplicity we use non-negative g_q and h_q in this case (which can be done by adding a non-negative activation function to the linear output layer of the neural networks). We rewrite λ(t, s, m) as

$$
\lambda(t, s, m) = \mu + \sum_{q} \underbrace{\sum_{l, r} \alpha_{lrq} \sum_{(t_i, s_i, m_i) \in \mathcal{H}_t} \psi_l(t_i)\, \varphi_l(t - t_i)\, u_r(s_i)\, v_r(s - s_i)\, g_q(m_i)}_{\hat{F}_q(t, s)}\; h_q(m).
$$

Note that the function in the brackets depends only on t and s; we denote it by F̂_q(t, s) (it corresponds to the qth rank of the mark). Since h_q(m) ≥ 0, the non-negativity of λ can be guaranteed by the non-negativity of F̂_q(t, s). Thus we apply the log-barrier method to F̂_q(t, s). The log-barrier term becomes

$$
p(\theta, b) := -\frac{1}{Q\,|U_{\text{bar},t} \times U_{\text{bar},s}|} \sum_{c_t=1}^{|U_{\text{bar},t}|} \sum_{c_s=1}^{|U_{\text{bar},s}|} \sum_{q=1}^{Q} \log\!\big(\hat{F}_q(t_{c_t}, s_{c_s}) - b\big).
$$

Since our model is low-rank, the value of Q will not be large. For the model computation, the additional evaluations of {g_q}_{q=1}^Q at the events are of O(Qn) complexity, and the evaluations of {h_q}_{q=1}^Q depend only on the number of unique marks, which is at most O(n). The log-barrier method does not introduce extra evaluations in the mark space. Thus the overall computational complexity of DNSK for marked STPPs is still O(n).

C ADDITIONAL EXPERIMENTAL RESULTS

In this section we provide details of the data sets and experimental setup, together with additional experimental results.

Synthetic data sets. To show the robustness of our model, we generate three temporal data sets and three spatio-temporal data sets using the following kernels:

(i) 1D Data set 1 with exponential kernel: $k(t', t) = 0.8\, e^{-(t - t')}$.

(ii) 1D Data set 2 with non-stationary kernel: $k(t', t) = 0.3\,(0.5 + 0.5\cos(0.2 t'))\, e^{-2(t - t')}$.

(iii) 1D Data set 3 with infinite-rank kernel:
$$
k(t', t) = 0.3 \sum_{j=1}^{\infty} 2^{-j} \left( 0.3 + \cos\!\left(2 + (t'/5)^{0.7}\, 1.3 (j + 1)\pi\right) \right) e^{-\frac{8 (t - t')^2}{25} j^2}.
$$

(iv) 2D Data set 1 with exponential kernel: $k(t', t, s', s) = 0.5\, e^{-1.5(t - t')}\, e^{-0.8 s'}$.

(v) 3D Data set 1 with non-stationary inhibition kernel:
$$
k(t', t, s', s) = 0.3\,(1 - 0.01 t)\, e^{-2(t - t')}\, \frac{1}{2\pi\sigma_{s'}^2}\, e^{-\frac{\|s'\|^2}{2\sigma_{s'}^2}}\, \frac{\cos(10 \|s - s'\|)}{2\pi\sigma_s^2 \left(1 + e^{10(\|s - s'\| - 0.5)}\right)}\, e^{-\frac{\|s - s'\|^2}{2\sigma_s^2}},
$$
where σ_{s'} = 0.5, σ_s = 0.15.

(vi) 3D Data set 2 with non-stationary mixture kernel:
$$
k(t', t, s', s) = \sum_{r=1}^{2} \sum_{l=1}^{2} \alpha_{rl}\, u_r(s')\, v_r(s - s')\, \psi_l(t')\, \varphi_l(t - t'),
$$
where $u_1(s') = 1 - a_s (s'_2 + 1)$, $u_2(s') = 1 - b_s (s'_2 + 1)$, $v_1(s - s') = \frac{1}{2\pi\sigma_1^2} e^{-\frac{\|s - s'\|^2}{2\sigma_1^2}}$, $v_2(s - s') = \frac{1}{2\pi\sigma_2^2} e^{-\frac{\|s - s' - 0.8\|^2}{2\sigma_2^2}}$, $\psi_1(t') = 1 - a_t t'$, $\psi_2(t') = 1 - b_t t'$, $\varphi_1(t - t') = e^{-\beta(t - t')}$, $\varphi_2(t - t') = (t - t' - 1)\cdot \mathbb{I}(t - t' < 3)$, and $a_s = 0.3$, $b_s = 0.4$, $a_t = 0.02$, $b_t = 0.02$, $\sigma_1 = 0.2$, $\sigma_2 = 0.3$, $\beta = 2$, $(\alpha_{11}, \alpha_{12}, \alpha_{21}, \alpha_{22}) = (0.6, 0.15, 0.225, 0.525)$.

Note that kernel (iii) is the one illustrated in Figure 1, which is of infinite rank according to its formula. In Figure 1, the value matrices of k(t′, t) and k(t′, t − t′) are the kernel evaluations on the same 300 × 300 uniform grid. As we can see, the rank of the value matrix of the same kernel k is reduced from 298 to 7 after changing to the displacement-based kernel parameterization.
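This rank reduction can be reproduced numerically in a few lines: evaluate a non-stationary kernel once as k(t′, t) and once as k(t′, t − t′), and compare the numerical ranks of the two value matrices. The sketch below uses a truncated version of the infinite-rank kernel (iii) as reconstructed above; the truncation level, grids, and the singular-value threshold are illustrative choices, so the printed ranks are indicative rather than an exact reproduction of Figure 1.

```python
import numpy as np

def k_disp(tp, tau, J=10):
    """Truncated (J terms) version of kernel (iii), written in displacement form k(t', tau)."""
    out = 0.0
    for j in range(1, J + 1):
        out += (2.0 ** (-j)
                * (0.3 + np.cos(2 + (tp / 5) ** 0.7 * 1.3 * (j + 1) * np.pi))
                * np.exp(-8.0 * tau ** 2 / 25 * j ** 2))
    return 0.3 * out

def numerical_rank(M, tol=1e-6):
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

t = np.linspace(0.0, 30.0, 300)
tau = np.linspace(0.0, 5.0, 300)
K_orig = k_disp(t[None, :], t[:, None] - t[None, :])   # the same kernel evaluated as k(t', t)
K_disp = k_disp(t[None, :], tau[:, None])               # the kernel evaluated as k(t', t - t')

print(numerical_rank(K_orig), numerical_rank(K_disp))   # the displacement form has much lower rank
```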
Details of experimental setup. For RMTPP and NH, we test embedding sizes of {32, 64, 128} and choose 64 for the experiments. For THP, we take the default experimental setting recommended by Zuo et al. (2020). For NSMPP, we use the same model setting as in Zhu et al. (2022) with rank 5. Each experiment is implemented by the following procedure: given the data set, we split 90% of the sequences into a training set and the remaining 10% into a testing set. We use independent fully-connected neural networks with two hidden layers for each basis function; each layer contains 64 hidden nodes. The temporal rank of DNSK+Barrier is set to 1 for synthetic data sets (i), (ii), (iv), (v), 2 for (vi), and 3 for (iii). The spatial rank is 1 for synthetic data sets (iv), (v) and 2 for (vi). The temporal and spatial ranks for the real data are both set to 2 through cross-validation. For each real data set, τ_max is chosen to be around T/4 and a_max is 1, since the location space is normalized before training. The hyper-parameters of DNSK+Softplus are the same as those of DNSK+Barrier. For RMTPP, NH, and THP, the batch size is 32 and the learning rate is 10^{-3}. For the others, the batch size is 64 and the learning rate is 10^{-1}. The quantitative results are collected from five independent runs of each experiment. All experiments are implemented on Google Colaboratory (Pro version) with 25GB RAM and a Tesla T4 GPU.

C.1 SYNTHETIC RESULTS WITH 2D & 3D KERNEL

In this section we present additional experimental results for the synthetic data sets with the 2D exponential and 3D non-stationary mixture kernels. Our proposed model successfully recovers the kernel and the event conditional intensity in both cases. Note that the recovery of the 3D mixture kernel demonstrates the capability of our model to handle complex event dependencies with mixture patterns by conveniently setting the temporal and spatial ranks to be larger than 1.

C.2 ATLANTA TEXTUAL CRIME DATA WITH HIGH-DIMENSIONAL MARKS

Figure A.4 visualizes the fitting and prediction results of DNSK+Barrier. Our model presents a decaying pattern in the temporal effect and captures two different patterns of spatial influence for incidents in the northeast. Besides, the in-sample and out-of-sample intensity predictions demonstrate the ability of DNSK to characterize the event occurrences by showing different conditional intensities.
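For completeness, here is a minimal PyTorch sketch of the basis-function architecture described in the experimental setup above: a fully-connected network with two hidden layers of 64 units per basis function, Softplus hidden activations, and a linear output layer so the learned kernel can take negative (inhibiting) values. The rank-L temporal kernel assembly, the tensor shapes, and the initialization below are illustrative assumptions, not the exact training code.

```python
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """One basis function (e.g. psi_l, phi_l, u_r, or v_r): a fully-connected network with
    two hidden layers of 64 units, Softplus activations, and a linear output layer."""
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),      # linear output: the kernel may take negative values
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# A rank-L temporal kernel k(t', t - t') = sum_l alpha_l psi_l(t') phi_l(t - t'),
# assembled from 2L independent basis networks (illustrative assembly).
L = 2
psi = nn.ModuleList(BasisNet(1) for _ in range(L))   # history basis psi_l(t')
phi = nn.ModuleList(BasisNet(1) for _ in range(L))   # displacement basis phi_l(t - t')
alpha = torch.nn.Parameter(0.1 * torch.randn(L))     # weights alpha_l

def kernel(t_hist, dt):
    return sum(alpha[l] * psi[l](t_hist) * phi[l](dt) for l in range(L))

print(kernel(torch.rand(5, 1), torch.rand(5, 1)))    # kernel values for 5 (t', t - t') pairs
```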
1. What is the focus and contribution of the paper regarding spatio-temporal point processes?
2. What are the strengths of the proposed approach, particularly in terms of reducing model complexity and computational efficiency?
3. What are the weaknesses of the paper, such as the need for more explanations, clarifications, and discussions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a deep non-stationary kernel for spatio-temporal point processes using a different parameterization scheme, which reduces the model complexity. The non-negativity of the solution is guaranteed by a log-barrier method which maintains the linearity of the conditional intensity function. In addition, a computationally efficient strategy for model estimation is introduced. Both synthetic and real data sets are used to validate the superiority of the proposed model.

Strengths And Weaknesses
Strengths:
- The proposed method significantly reduces the model complexity without sacrificing performance.
- The motivation is clear and the contribution is satisfactory.
- Comprehensive and well-designed experiments are conducted.
- This paper is easy to follow, with good presentation.

Weaknesses:
- Some preliminary knowledge needs to be included to make the paper self-contained. For example, it would be better to briefly describe the "marked point process".
- Some notations need more clarification. For example, what "B" refers to in Equation 2, and what the dimension of "s" is.
- In Equation 4, there is an operation "s − s'", while "s" refers to a location, which should be a 2-D or 3-D value. How is the "−" operation conducted on these values?
- More discussion of the motivation for specific techniques could be provided. For example, compared to directly using a neural network, what is the advantage of the kernel in Equation 4?
- There is no clear problem formulation. For a better understanding of the problem and methodology, it would be great to have a formal problem formulation.
- The meaning of "testing l" in Table 4 is not explained.

Clarity, Quality, Novelty And Reproducibility
This paper has a clear presentation of the methodology. The authors do not provide the source code, and there are no detailed hyper-parameters for the architecture of the model. The novelty of this paper is satisfactory: it proposes a deep non-stationary kernel for spatio-temporal point processes using a different parameterization scheme, with an additional optimization strategy accommodated by the proposed scheme.
The MLE optimization problem can be written as min θ L(θ) := −ℓ(θ) + 1 w p(θ, b) = − ( n∑ i=1 log λ(ti, si)− ∫ T 0 ∫ S λ(t, s)dsdt ) − 1 w|Ubar,t × Ubar,s| |Ubar,t|∑ ct=1 |Ubar,s|∑ cs=1 log(λ(tct , scs)− b), (6) where w is a weight to control the trade-off between log-likelihood and log-barrier; w and b can be set accordingly during the learning procedure. Details can be found in Appendix A.2. Note that previous works (Du et al., 2016; Mei and Eisner, 2017; Pan et al., 2021; Zuo et al., 2020; Zhu et al., 2022) use a scaled positive transformation to guarantee non-negativity conditional intensity function. Compared with them, the log-barrier method preserves the linearity of the conditional intensity function. As shown in Table 1, such a log-barrier method enables efficient model computation (See more details in Section 4.2) and enhance the model recovery power. 4.2 MODEL COMPUTATION The log-likelihood computation of general STPPs (especially those with general influence function) is often difficult and requires numerical integral and thus time-consuming. Given a sequence of events {xi = (ti, si)}ni=1 of number n, the complexity of neural network evaluation is of O(n2) for the term of log-summation and of O(Kn) (K ≫ n) when using numerical integration for the double integral term with K sampled points in a multi-dimensional space. In the following, we circumvent the calculation difficulty by proposing an efficient computation for L(θ) with complexity O(n) of neural network evaluation through a domain discretization strategy. Computation of log-summation. The first log-summation term in equation 2 can be written as: n∑ i=1 log λ(ti, si) = n∑ i=1 log µ+ ∑ tj<ti R∑ r=1 L∑ l=1 αlrψl(tj)φl(ti − tj)ur(sj)vr(si − sj) . (7) Note that each ψl only needs to be evaluated at event time {ti}ni=1 and each ur is evaluated at all the event location {si}ni=1. To avoid the redundant evaluations of φl over every pair of (ti, tj), we set up a uniform grid Ut over time horizon [0, τmax] and evaluate φl on the grid. The value of φl(tj − ti) can be obtained by linear interpolation with values on two adjacent grid points of tj − ti. By doing so, we only need to evaluate φl for |Ut| times on the grids. Note that φl can be simply feed with 0 when tj − ti > τmax without any neural network evaluation. Here we directly evaluate vr(si − sj) since numerical interpolation is less accurate in location space. Note that one does not need to evaluate every pair of index (i, j). Instead, we have I := {(i, j) | vr(si − sj) will be computed} = {(i, j) | tj < ti ≤ tj + τmax} ∩ {(i, j) | ∥si − sj∥ ≤ amax}. We use 0 to other pairs of (i, j). Computation of integral. A benefit of our approach is that we avoid numerical integration for the conditional intensity function (needed to evaluate the likelihood function), since the design of the kernel allows us to decompose the desired integral to integrating basis functions. Specifically, we have ∫ T 0 ∫ S λ(t, s)dsdt = µ|S|T + n∑ i=1 ∫ T 0 ∫ S I(ti < t)k(ti, t, si, s)dsdt = µ|S|T + n∑ i=1 R∑ r=1 ur(si) ∫ S vr(s− si)ds L∑ l=1 αrlψl(ti) ∫ T−ti 0 φl(t)dt. (8) To compute the integral of φl, we take the advantage of the pre-computed φl on the grid Ut. Let Fl(t) := ∫ t 0 φl(τ)dτ . Then Fl(T − ti) can be computed by the linear interpolation of values of Fl at two adjacent grid points (in Ut) of T − ti. In particular, Fl evaluated on Ut equals to the cumulative sum of φl divided by the grid width. 
The integral of vr can be estimated based on a grid Us in B(0, amax) ⊂ RdS since it decays outside the ball. For each si, ∫ S vr(s − si)ds = ∫ B(0,amax)∩{S−si} vr(s)ds, where S − si := {s ′ | s′ = s− si, s ∈ S}. Thus the integral is well estimated with the evaluations of vr on grid set Us ∩ S − si. Note that in practice we only evaluate vr on Us once and use subsets of the evaluations for different si. More details about grid-based computation can be found in Appendix A.3. Computation of log-barrier. The barrier term p(θ, b) is calculated in a similar way as equation 7 by replacing (ti, si, µ) with (tct , scs , µ− b), i.e. we use interpolation to calculate φl(tct − tj) and evaluate vr on a subset of {(scs , sj)}, cs = 1, . . . , |Ubar,s|, j = 1, . . . , n. 4.3 COMPUTATIONAL COMPLEXITY The evaluation of {ur}Rr=1 and {ψl}Ll=1 over n events costs O((R+ L)n) complexity. The evaluation of {φl}Ll=1 is of O(L|Ut|) complexity since it relies on the grid Ut. The evaluation of {vr}Rr=1 costs no more thanO(RCτmaxn)+O(R|Us|) complexity. We note that L,R, τmax, |Ut|, |Us| are all constant that much less than event number n, thus the overall computation complexity will beO(n). We compare the model training time per epoch for a baseline equipped with a softplus activation function (NSMPP) and our model with log-barrier method (DNSK+Barrier) on a 1D synthetic data set and a 3D synthetic data set. The quantitative results in Table 1 demonstrates the efficiency improvement of our model by using log-barrier technique. More details about the computation complexity analysis can be found in Appendix A.4. 5 EXPERIMENT We use large-scale synthetic and real data sets to demonstrate the superior performance of our model and present the results in this section. Experimental details and results can be found in Appendix C. Codes will be released upon publication. Baselines. We compare our method (DNSK+Barrier) with: (i) RMTPP (RMTPP) (Du et al., 2016); (ii) Neural Hawkes (NH) (Mei and Eisner, 2017); (iii) Transformer Hawkes process (THP) (Zuo et al., 2020); (iv) Parametric Hawkes process (PHP+exp) with exponentially decaying spatiotemporal kernel; (v) Neual spectral marked point processes (NSMPP) (Zhu et al., 2022); (vi) DNSK without log-barrier but with a non-negative Softplus activation function (DNSK+Softplus). We note that RMTPP, NH and THP directly model conditional intensity function using neural networks while others learn the influence kernel in the framework of equation 3. In particular, NSMPP designs the kernel based on singular value decomposition but parameterizes it without displacement. The model parameters are estimated using the training data via Adam optimization method (Kingma and Ba, 2014). Details of training can be found in Appendix A.2 and C. 5.1 SYNTHETIC DATA EXPERIMENTS Synthetic data sets. To show the effectiveness of DNSK+Barrier, we conduct all the models on three temporal data sets and three spatio-temporal data sets generated by following true kernels: (i) 1D exponential kernel (ii) 1D non-stationary kernel; (iii) 1D infinite rank kernel; (iv) 2D exponential kernel; (v) 3D non-stationary inhibition kernel; (vi) 3D non-stationary mixture kernel. Data sets are generated using thinning algorithm in Daley and Vere-Jones (2008). Each data set is composed of 2000 sequences. Details of kernel formulas and data generation can be found in Appendix C. We consider two performance metrics for testing data evaluation: Mean relative error (MRE) of the predicted intensity and log-likelihood. 
The true and predicted λ∗(x), λ̂(x) can be calculated using equation 4 with true and learned kernel. The MRE for one test trajectory is defined as∫ X |λ ∗(x)− λ̂(x)|/λ∗(x)dx and the averaged MRE over all test trajectories is reported. The loglikelihood for observing each testing sequence can be computed according to equation 2, and the average predictive log-likelihood per event is reported. The log-likelihood shows the model’s goodness-of-fit, and the intensity evaluation further reflects the model’s ability to recover the underlying mechanism of event occurrence and predict the future. The heat maps in Figure 2 visualize the results of non-stationary kernel recovery for DNSK+Barrier and NSMPP on 1D Data set 2 and 3 (The true kernel used in 1D Data set 3 is the one in Figure 1). DNSK+Barrier recovers the true kernel more accurately than NSMPP, indicating the strong representation power of the low-rank kernel parameterization with displacements. Line charts in Figure 2 present the recovered intensities with the true ones (dark grey curves). It demonstrates that our method can accurately capture the temporal dynamics of events. In particular, the average conditional intensity λ over multiple testing sequences shows the model’s ability to recover data non-stationarity over time. While DNSK+Barrier successfully captures the non-stationarity among data, both RMTPP and NH fail to do so by showing a flat curve of the averaged intensity. Note that THP with positional encoding recovers the data non-stationarity (as shown in two figures in the last column). However, our method still outperforms THP which suffers from limited model expressiveness when complicated propagation of event influence is involved (see two figures in the penultimate column). Tabel 2 summarized the quantitative results of testing log-likelihood and MRE. It shows that DNSK+Barrier has superior predictive performance against baselines in characterizing the dynamics of data generation in spatio-temporal space. Specifically, with evidently over-parameterization for 1D Data set 1 generated by a stationary exponentially decaying kernel, our model can still approximate the kernel and recover the true conditional intensity without overfitting, which shows the adaptiveness of our model. Moreover, DNSK+Barrier enjoys outstanding performance gain when learning a diverse variety of complicated non-stationary kernels. The comparison between DNSK+Softplus and DNSK+Barrier proves that the model with log-barrier achieves a better recovery performance by maintaining the linearity of the conditional intensity. THP outperforms RMTPP in non-stationary cases but is still limited due to its pre-assumed parametric form of influence propagation. More results about kernel and intensity recovery can be found in Appendix C. 5.2 REAL DATA RESULTS Real data sets. We provide a comprehensive evaluation of our approach on several real-world data sets: we first use two popular data sets containing time-stamped events with categorical marks to demonstrate the robustness of DNSK+Barrier in marked STPPs (refer to Appendix B for detailed definition and kernel modeling): (i) Financial Transactions. (Du et al., 2016). This data set contains transaction records of a stock in one day with time unit milliseconds and the action (mark) of each transaction. We partition the events into different sequences by time stamps. 
(ii) StackOverflow (Leskovec and Krevl, 2014): The data is collected from the website StackOverflow over two years, containing reward records for users who promote engagement in the community. Each user’s reward history is treated as a sequence. Next, we demonstrate the practical versatility of the model using the following spatio-temporal data sets: (i) Southern California earthquake data provided by Southern California Earthquake Data Center (SCEDC) contains time and location information of earthquakes in Southern California. We collect 19,414 records from 1999 to 2019 with magnitude larger than 2.5 and partition the data into multiple sequences by month with average length of 40.2. (ii) Atlanta robbery & burglary data. Atlanta Police Department (APD) provides a proprietary data source for city crime. We extract 3420 reported robberies and 14958 burglaries with time and location from 2013 to 2019. Two crime types are preprocessed as separate data sets on a 10-day basis with average lengths of 13.7 and 58.7. Finally, the model’s ability to tackle high-dimensional marks is evaluated with Atlanta textual crime data. The proprietary data set provided by APD records 4644 crime incidents from 2016 to 2017 with time, location, and comprehensive text descriptions. The text information is preprocessed by TF-IDF technique, leading to a 5012-dimensional mark for each event. Table 3 summarizes the results of models dealing with categorical marks. Event time and type prediction are evaluated by Root Mean Square Error (RMSE) and accuracy, respectively. We can see that DNSK+Barrier outperforms the baselines in all prediction tasks by providing less time RMSE and higher type accuracy. For real-world spatio-temporal data, we report average predictive log-likelihood per event for the testing set since MRE is not applicable. Besides, we perform online prediction for earthquake data to demonstrate the model predicting ability. The probability density function f(t, s) which represents the conditional probability that the next event will occur at (t, s) given history Ht can be written as f(t, s) = λ(t, s) exp ( − ∫ S ∫ t tn λ(τ, ν)dτdν ) . The predicted time and location of the next event can be computed as E [tn+1|Ht] = ∫∞ tn t ∫ S f(t, s)dsdt, E [sn+1|Ht] = ∫ S s ∫∞ tn f(t, s)dtds. We predict the the time and location of the last event in each sequence. The mean absolute error (MAE) of the predictions is computed. The quantitative results in Table 4 show that DNSK+Barrier provides more accurate predictions than other alternatives with higher event log-likelihood. To demonstrate our model’s interpretability and power to capture heterogeneous data characteristics, we visualize the learned influence kernels and predicted conditional intensity for two crime categories in Figure 3. The first column shows kernel evaluations at fixed geolocation in downtown Atlanta which intuitively reflect the spatial influence of crimes in that neighborhood. The influence of a robbery in the downtown area is more intensive but regional, while a burglary which is hard to be responded to by police in time would impact a larger neighborhood along major highways of Atlanta. We also provide the predicted conditional intensity over space for two crimes. As we can observe, DNSK+Barrier captures the occurrence of events in regions with a higher crime rate, and crimes of the same category happening in different regions would influence their neighborhoods differently. 
Returning to the crime example in Figure 3, we note that it emphasizes the ability of the proposed method to recover data non-stationarity across sequences of different lengths, and to improve on the limited model interpretability of other neural-network-based methods (RMTPP, NH, and THP) in practice. For the Atlanta textual crime data, we follow the idea in Zhu and Xie (2022) and encode the highly sparse TF-IDF representation into a binary mark vector of dimension d = 50 using a Restricted Boltzmann Machine (RBM) (Fischer and Igel, 2012). The average testing log-likelihood per event for each model is reported in Table 4. The results show that DNSK+Barrier outperforms PHP+exp in Zhu and Xie (2022) and NSMPP by achieving a higher testing log-likelihood. We visualize the basis functions of the influence kernel learned by DNSK+Barrier in Figure A.4 in the Appendix. 6 CONCLUSION We propose a deep non-stationary kernel for spatio-temporal point processes using a low-rank parameterization based on displacement, which enables the model to be further low-rank when learning complicated influence kernels and significantly reduces the model complexity. The non-negativity of the intensity is guaranteed by a log-barrier method that maintains the linearity of the conditional intensity function. Based on that, we propose a computationally efficient strategy for model estimation. The superior performance of our model is demonstrated using synthetic and real data sets. ACKNOWLEDGEMENT The work is partially supported by NSF DMS-2134037. Z.D. and Y.X. are partially supported by an NSF CAREER CCF-1650913, and NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF and the Alfred P. Sloan Foundation. A ADDITIONAL METHODOLOGY DETAILS A.1 DERIVATION OF EQUATION 4 We denote τ := t − t′ and ν := s − s′, with variables t′ ∈ [0, T], τ ∈ [0, τmax], s′ ∈ S, and ν ∈ B(0, amax), where the sets S, B(0, amax) ⊂ R². Viewing the temporal and spatial variables, i.e., (t′, τ) and (s′, ν), as the left and right mode variables, respectively, the kernel function SVD (Mollenhauer et al., 2020; Mercer, 1909) of k gives k(t′, τ, s′, ν) = Σ_{k=1}^{∞} σ_k g_k(t′, τ) h_k(s′, ν). (A.1) We assume that the SVD can be truncated at k ≤ K with a residual of ε for some small ε > 0, which holds as long as the singular values σ_k decay sufficiently fast. To fulfill the approximate finite-rank representation, it suffices to have scalars σ_k and functions g_k, h_k such that the expansion approximates the kernel k, even if they are not the SVD of the kernel. This leads to the following assumption: Assumption A.1. There exist coefficients σ_k and functions g_k(t′, τ), h_k(s′, ν) s.t. k(t′, τ, s′, ν) = Σ_{k=1}^{K} σ_k g_k(t′, τ) h_k(s′, ν) + O(ε). (A.2) To proceed, one can apply the kernel SVD again to g_k and h_k respectively, and obtain left and right singular functions that potentially differ across k. Here, we impose that across k = 1, · · · , K, the singular functions of the g_k are the same set of basis functions (as shown below, being approximately the same suffices), that is, g_k(t′, τ) = Σ_{l=1}^{∞} β_{k,l} ψ_l(t′) φ_l(τ). As we will truncate l to a finite rank again (up to an O(ε) residual), we require the (approximately) shared singular modes only up to L. As above, it technically suffices to have a finite-rank expansion that achieves the O(ε) error without requiring it to be an SVD, which leads to the following assumption, where we assume the same condition for h_k: Assumption A.2.
For the g_k and h_k in equation A.2, up to an O(ε) error, (i) the K temporal kernel functions g_k(t′, τ) can be approximated under the same set of left and right basis functions, i.e., there exist coefficients β_{kl} and functions ψ_l(t′), φ_l(τ) for l = 1, · · · , L, s.t. g_k(t′, τ) = Σ_{l=1}^{L} β_{kl} ψ_l(t′) φ_l(τ) + O(ε), k = 1, · · · , K; (A.3) (ii) the K spatial kernel functions h_k(s′, ν) can be approximated under the same set of left and right basis functions, i.e., there exist coefficients γ_{kr} and functions u_r(s′), v_r(ν) for r = 1, · · · , R, s.t. h_k(s′, ν) = Σ_{r=1}^{R} γ_{kr} u_r(s′) v_r(ν) + O(ε), k = 1, · · · , K. (A.4) Inserting equation A.3 and equation A.4 into equation A.2 gives the rank-truncated representation of the kernel function. Since K, L, R are fixed numbers, and assuming boundedness of all the coefficients and functions, we obtain the representation with a final residual of O(ε), namely, k(t′, τ, s′, ν) = Σ_{l=1}^{L} Σ_{r=1}^{R} Σ_{k=1}^{K} σ_k β_{kl} γ_{kr} ψ_l(t′) φ_l(τ) u_r(s′) v_r(ν) + O(ε). Defining α_{lr} := Σ_{k=1}^{K} σ_k β_{kl} γ_{kr} leads to equation 4.

A.2 ALGORITHMS

Algorithm 1 Model parameter estimation
Input: training set X, batch size M, epoch number E, learning rate γ, constant a > 1 to update w in equation 6.
Initialization: model parameter θ_0, first epoch e = 0, w = w_0.
while e < E do
  for each batch with size M do
    1. For a 1D temporal point process, compute ℓ(θ) and {λ(t_ct)}, ct = 1, . . . , |Ubar,t|. For a spatio-temporal point process, compute ℓ(θ) and {λ(t_ct, s_cs)}, ct = 1, . . . , |Ubar,t|, cs = 1, . . . , |Ubar,s|.
    2. Set b = min_ct {λ(t_ct)} − ϵ (or b = min_{ct,cs} {λ(t_ct, s_cs)} − ϵ), where ϵ is a small value that guarantees the feasibility of the logarithm.
    3. Compute L(θ) = −ℓ(θ) + (1/w) p(θ, b).
    4. Update θ_{e+1} ← θ_e − γ ∂L/∂θ_e.
    5. e ← e + 1, w ← w · a.
  end for
end while

Algorithm 2 Synthetic data generation
Input: model λ(·), T, S, upper bound λ̄ of the conditional intensity.
Initialization: H_T = ∅, t = 0, n = 0.
while t < T do
  1. Sample u ∼ Unif(0, 1).
  2. t ← t − ln u / λ̄.
  3. Sample s ∼ Unif(S), D ∼ Unif(0, 1).
  4. λ = λ(t, s | H_T).
  if D λ̄ ≤ λ then
    n ← n + 1; t_n = t, s_n = s; H_T ← H_T ∪ {(t_n, s_n)}.
  end if
end while
if t_n ≥ T then return H_T − {(t_n, s_n)} else return H_T end if

A.3 GRID-BASED MODEL COMPUTATION
In this section, we elaborate on the details of the grid-based efficient model computation. In Figure A.1, we visualize the procedures for computing the integrals ∫_0^{T−t_i} φ_l(t) dt and ∫_S v_r(s − s_i) ds in equation 8. Panel (a) illustrates the calculation of ∫_0^{T−t_i} φ_l(t) dt. As explained in Section 4.2, the evaluations of φ_l happen only on the grid U_t over [0, τmax] (since φ_l(t) = 0 when t > τmax). The value of F(t) = ∫_0^t φ_l(τ) dτ on the grid can be obtained through numerical integration. Then, given t_i, the value of F(T − t_i) = ∫_0^{T−t_i} φ_l(t) dt is calculated using linear interpolation of F on the two grid points adjacent to T − t_i. Panel (b) shows the computation of ∫_S v_r(s − s_i) ds. Given s_i, ∫_S v_r(s − s_i) ds = ∫_{B(0,amax) ∩ {S − s_i}} v_r(s) ds since v_r(s) = 0 when ∥s∥ > amax. Then B(0, amax) is discretized into the grid U_s, and ∫_S v_r(s − s_i) ds can be calculated based on the values of v_r on the grid points in U_s ∩ {S − s_i} (the deep red dots in Figure A.1(b)) using numerical integration. To evaluate the sensitivity of our model to the chosen grids, we compare the performance of DNSK+Barrier on 3D Data set 2 using grids with different resolutions. The quantitative results of testing log-likelihood and intensity prediction error are reported in Table A.1. We use |Ut| = 50, |Us| = 1500 for the experiments in the main paper.
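The grid-based computation in Panel (a) can be summarized in a few lines of code. The sketch below is illustrative rather than the authors' implementation: `phi_l` is a toy stand-in for a learned temporal basis function, and the trapezoidal accumulation and the interpolation call realize the cumulative integral F and its evaluation at T − t_i.

```python
# Hedged sketch (not the authors' code) of the Panel (a) computation: evaluate a
# temporal basis function phi_l only on a uniform grid over [0, tau_max],
# accumulate F(t) = int_0^t phi_l(tau) dtau, and read off F(T - t_i) for every
# event time t_i by linear interpolation.
import numpy as np

def phi_l(t):
    return np.exp(-2.0 * t)                      # placeholder basis function

tau_max, T = 5.0, 50.0
grid = np.linspace(0.0, tau_max, 50)             # U_t with |U_t| = 50
phi_vals = phi_l(grid)                           # the only basis-function evaluations needed
dt = grid[1] - grid[0]
F_vals = np.concatenate([[0.0], np.cumsum(0.5 * (phi_vals[1:] + phi_vals[:-1]) * dt)])

def F(t):
    # np.interp clamps to the boundary values, so F(t) = F(tau_max) for t >= tau_max,
    # which is correct because phi_l is treated as zero beyond tau_max
    return np.interp(t, grid, F_vals)

event_times = np.array([1.3, 12.7, 49.2])        # example event times t_i
integrals = F(T - event_times)                   # int_0^{T - t_i} phi_l(t) dt for each i
print(integrals)
```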
As shown in Table A.1, the model shows similar performance when a higher grid resolution is used, and works slightly less accurately, but still better than the other baselines, with fewer grid points. This indicates that our choice of grid resolution is accurate enough to capture the complex dynamics of event occurrences for this non-stationary data, and that the model performance is robust to different grid resolutions. In practice, the grids can be chosen flexibly to balance model accuracy and computational efficiency. For instance, the number of uniformly distributed grid points along one dimension can be chosen to be around O(n_0), where n_0 is the average number of events in one observed sequence. Note that |Ut| or |Us| is far smaller than the total number of observed events because we use thousands of sequences (2000 in our synthetic experiments) for model learning. The grid size can be even smaller when it comes to non-Lebesgue-measured spaces.

A.4 DETAILS OF COMPUTATIONAL COMPLEXITY
We provide the detailed analysis of the O(n) computational complexity of L(θ) in Section 4.3 as follows:
• Computation of the log-summation. The evaluation of {u_r}_{r=1}^{R} and {ψ_l}_{l=1}^{L} over n events costs O((R + L)n). The evaluation of {φ_l}_{l=1}^{L} is of O(L|Ut|) complexity since it relies on the grid U_t. Under the assumption that the conditional intensity is bounded by a constant C on a finite time horizon (Lewis and Shedler, 1979; Daley et al., 2003; Zhu et al., 2022), for each fixed j the cardinality of the set {(i, j) | t_j < t_i ≤ t_j + τmax} is less than Cτmax, which leads to an O(RCτmax n) complexity for the evaluation of {v_r}_{r=1}^{R}.
• Computation of the integral. The integration of {φ_l}_{l=1}^{L} relies only on numerical operations on the values of {φ_l}_{l=1}^{L} on the grid U_t, without extra evaluations of neural networks. The integration of {v_r}_{r=1}^{R} depends on the evaluation on the grid U_s, of O(R|Us|) complexity.
• Computation of the barrier. {φ_l}_{l=1}^{L} on the grid Ubar,t is estimated by numerical interpolation of the previously computed {φ_l}_{l=1}^{L} on the grid U_t. The additional neural network evaluations of {v_r}_{r=1}^{R} cost no more than O(RCτmax n).

B DEEP NON-STATIONARY KERNEL FOR MARKED STPPS
In marked STPPs (Reinhart, 2018), each observed event is associated with additional information describing event attributes, denoted as m ∈ M ⊂ R^{dM}. Let H = {(t_i, s_i, m_i)}_{i=1}^{n} denote the event sequence. Given the observed history Ht = {(t_i, s_i, m_i) ∈ H | t_i < t}, the conditional intensity function of a marked STPP is similarly defined as λ(t, s, m) = lim_{∆t↓0, ∆s↓0, ∆m↓0} E[N([t, t + ∆t] × B(s, ∆s) × B(m, ∆m)) | Ht] / (|B(s, ∆s)| |B(m, ∆m)| ∆t), where B(m, ∆m) is a ball centered at m ∈ R^{dM} with radius ∆m. The log-likelihood of observing H on [0, T] × S × M is given by ℓ(H) = Σ_{i=1}^{n} log λ(t_i, s_i, m_i) − ∫_0^T ∫_S ∫_M λ(t, s, m) dm ds dt.

B.1 KERNEL INCORPORATING MARKS
One of the salient features of our spatio-temporal kernel framework is that it can be conveniently adopted for modeling marked STPPs with an additional set of mark basis functions {g_q, h_q}_{q=1}^{Q}. We modify the influence kernel function k accordingly as follows: k(t′, t − t′, s′, s − s′, m′, m) = Σ_{q=1}^{Q} Σ_{r=1}^{R} Σ_{l=1}^{L} α_{lrq} ψ_l(t′) φ_l(t − t′) u_r(s′) v_r(s − s′) g_q(m′) h_q(m). Here m′, m ∈ M ⊂ R^{dM}, and {g_q, h_q : M → R, q = 1, . . . , Q}, represented by independent neural networks, model the influence of the historical mark m′ and the current mark m, respectively. Since the mark space M is typically categorical and the difference between m′ and m carries little practical meaning, we use g_q and h_q to model m′ and m separately instead of modeling m − m′.
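To make the marked kernel above concrete, the following hedged sketch evaluates k(t′, t − t′, s′, s − s′, m′, m) as the triple sum of products of basis functions. All basis functions and the weights α_{lrq} here are toy stand-ins for the independent neural networks and learned parameters used in the paper.

```python
# Hedged sketch (not the authors' code): evaluating the low-rank marked influence
# kernel as a weighted sum of products of basis functions, following the display above.
import numpy as np

L_rank, R_rank, Q_rank = 2, 2, 2
rng = np.random.default_rng(0)
alpha = rng.normal(size=(L_rank, R_rank, Q_rank))            # weights alpha_{lrq}

# toy basis functions; in the paper each is a small fully-connected network
psi = [lambda t: 1.0 - 0.02 * t, lambda t: np.cos(0.2 * t)]           # psi_l(t')
phi = [lambda d: np.exp(-2.0 * d), lambda d: np.exp(-0.5 * d)]        # phi_l(t - t')
u   = [lambda s: 1.0 - 0.3 * (s @ s), lambda s: np.exp(-(s @ s))]     # u_r(s')
v   = [lambda d: np.exp(-5.0 * (d @ d)), lambda d: np.exp(-(d @ d))]  # v_r(s - s')
g   = [lambda m: float(m == 0), lambda m: float(m == 1)]              # g_q(m')
h   = [lambda m: 0.5 + 0.5 * float(m == 0), lambda m: 1.0]            # h_q(m)

def kernel(t_hist, t, s_hist, s, m_hist, m):
    val = 0.0
    for l in range(L_rank):
        for r in range(R_rank):
            for q in range(Q_rank):
                val += (alpha[l, r, q] * psi[l](t_hist) * phi[l](t - t_hist)
                        * u[r](s_hist) * v[r](s - s_hist) * g[q](m_hist) * h[q](m))
    return val

print(kernel(1.0, 1.5, np.array([0.1, 0.2]), np.array([0.2, 0.1]), 0, 1))
```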
B.2 LOG-BARRIER AND MODEL COMPUTATION
The conditional intensity for marked spatio-temporal point processes at (t, s, m) can be written as λ(t, s, m) = µ + Σ_{l,r,q} α_{lrq} Σ_{(t_i,s_i,m_i)∈Ht} ψ_l(t_i) φ_l(t − t_i) u_r(s_i) v_r(s − s_i) g_q(m_i) h_q(m). We need to guarantee the non-negativity of λ over the space [0, T] × S × M. When the total number of unique categorical marks in M is small, the log-barrier can be conveniently computed as the summation of λ on the grids Ubar,t × Ubar,s × M. In the following we focus on the case in which M is high-dimensional with O(n) unique marks. For model simplicity we use non-negative g_q and h_q in this case (which can be done by adding a non-negative activation function to the linear output layer of the neural networks). We rewrite λ(t, s, m) as λ(t, s, m) = µ + Σ_q F̂_q(t, s) h_q(m), where F̂_q(t, s) := Σ_{l,r} α_{lrq} Σ_{(t_i,s_i,m_i)∈Ht} ψ_l(t_i) φ_l(t − t_i) u_r(s_i) v_r(s − s_i) g_q(m_i). Note that F̂_q(t, s) depends only on t and s (it collects the terms in the qth mark rank). Since h_q(m) ≥ 0, the non-negativity of λ can be guaranteed by the non-negativity of each F̂_q(t, s). Thus we apply the log-barrier method to F̂_q(t, s). The log-barrier term becomes p(θ, b) := − (1 / (Q|Ubar,t × Ubar,s|)) Σ_{ct=1}^{|Ubar,t|} Σ_{cs=1}^{|Ubar,s|} Σ_{q=1}^{Q} log(F̂_q(t_ct, s_cs) − b). Since our model is low-rank, the value of Q will not be large. For the model computation, the additional evaluations of {g_q}_{q=1}^{Q} on the events are of O(Qn) complexity, and the evaluations of {h_q}_{q=1}^{Q} depend only on the number of unique marks, which is at most O(n). The log-barrier method does not introduce extra evaluations in the mark space. Thus the overall computational complexity of DNSK for marked STPPs is still O(n).

C ADDITIONAL EXPERIMENTAL RESULTS
In this section we provide details of the data sets and experimental setup, together with additional experimental results. Synthetic data sets. To show the robustness of our model, we generate three temporal data sets and three spatio-temporal data sets using the following kernels:
(i) 1D Data set 1 with exponential kernel: k(t′, t) = 0.8 e^{−(t−t′)}.
(ii) 1D Data set 2 with non-stationary kernel: k(t′, t) = 0.3 (0.5 + 0.5 cos(0.2 t′)) e^{−2(t−t′)}.
(iii) 1D Data set 3 with infinite-rank kernel: k(t′, t) = 0.3 Σ_{j=1}^{∞} 2^{−j} (0.3 + cos(2 + (t′/5)^{0.7} · 1.3 (j + 1) π)) e^{−8 (t−t′)² j² / 25}.
(iv) 2D Data set 1 with exponential kernel: k(t′, t, s′, s) = 0.5 e^{−1.5(t−t′)} e^{−0.8 s′}.
(v) 3D Data set 1 with non-stationary inhibition kernel: k(t′, t, s′, s) = 0.3 (1 − 0.01 t) e^{−2(t−t′)} · (1 / (2π σ_{s′}²)) e^{−∥s′∥² / (2σ_{s′}²)} · [cos(10 ∥s − s′∥) / (2π σ_s² (1 + e^{10(∥s−s′∥−0.5)}))] · e^{−∥s−s′∥² / (2σ_s²)}, where σ_{s′} = 0.5, σ_s = 0.15.
(vi) 3D Data set 2 with non-stationary mixture kernel: k(t′, t, s′, s) = Σ_{r=1}^{2} Σ_{l=1}^{2} α_{rl} u_r(s′) v_r(s − s′) ψ_l(t′) φ_l(t − t′), where u_1(s′) = 1 − a_s(s′² + 1), u_2(s′) = 1 − b_s(s′² + 1), v_1(s − s′) = (1 / (2π σ_1²)) e^{−∥s−s′∥² / (2σ_1²)}, v_2(s − s′) = (1 / (2π σ_2²)) e^{−∥s−s′−0.8∥² / (2σ_2²)}, ψ_1(t′) = 1 − a_t t′, ψ_2(t′) = 1 − b_t t′, φ_1(t − t′) = e^{−β(t−t′)}, φ_2(t − t′) = (t − t′ − 1) · I(t − t′ < 3), and a_s = 0.3, b_s = 0.4, a_t = 0.02, b_t = 0.02, σ_1 = 0.2, σ_2 = 0.3, β = 2, (α_{11}, α_{12}, α_{21}, α_{22}) = (0.6, 0.15, 0.225, 0.525).
Note that kernel (iii) is the one illustrated in Figure 1, which is of infinite rank according to its formula. In Figure 1, the value matrices of k(t′, t) and k(t′, t − t′) are the kernel evaluations on the same 300 × 300 uniform grid. As we can see, the rank of the value matrix of the same kernel k is reduced from 298 to 7 after changing to the displacement-based kernel parameterization; a numerical sketch of this comparison is given below.
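The rank reduction reported for Figure 1 can be checked numerically. The sketch below is an illustration under assumptions (the truncation J of the infinite sum, the grid domain, and the rank tolerance are ours, so the exact rank values will differ from the 298 and 7 reported above): it evaluates kernel (iii) on a 300 × 300 grid once in the (t′, t) parameterization and once in the displacement parameterization, and compares numerical ranks.

```python
# Hedged sketch (not the authors' code): comparing the numerical rank of the same
# kernel evaluated as k(t', t) and as k(t', t - t') on a 300 x 300 grid.
import numpy as np

def k3(tp, t):
    # kernel (iii), truncated at J terms of the infinite sum (assumption of this sketch)
    J = 50
    val = 0.0
    for j in range(1, J + 1):
        val += 2.0 ** (-j) * (0.3 + np.cos(2 + (tp / 5) ** 0.7 * 1.3 * (j + 1) * np.pi)) \
               * np.exp(-8 * (t - tp) ** 2 * j ** 2 / 25)
    return 0.3 * val

n = 300
tp = np.linspace(0, 10, n)                 # historical time t'
tau = np.linspace(0, 10, n)                # current time t, or displacement t - t'
TP, TAU = np.meshgrid(tp, tau, indexing="ij")

K_original = k3(TP, TAU)                   # entries k(t'_i, t_j)
K_displaced = k3(TP, TP + TAU)             # entries k(t'_i, t'_i + tau_j), i.e. displacement form

tol = 1e-6                                 # rank tolerance (assumption)
print(np.linalg.matrix_rank(K_original, tol=tol),
      np.linalg.matrix_rank(K_displaced, tol=tol))
```

The displaced matrix factorizes term by term into functions of t′ times functions of the displacement, which is why its numerical rank collapses while the original parameterization stays close to full rank.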
Details of Experimental setup. For RMTPP and NH we test embedding sizes of {32, 64, 128} and choose 64 for the experiments. For THP we take the default experimental setting recommended by Zuo et al. (2020). For NSMPP we use the same model setting as in Zhu et al. (2022) with rank 5. Each experiment is implemented by the following procedure: given the data set, we split 90% of the sequences as the training set and 10% as the testing set. We use independent fully-connected neural networks with two hidden layers for each basis function. Each layer contains 64 hidden nodes. The temporal rank of DNSK+Barrier is set to 1 for synthetic data sets (i), (ii), (iv), (v), 2 for (vi), and 3 for (iii). The spatial rank is 1 for synthetic data sets (iv), (v) and 2 for (vi). The temporal and spatial ranks for the real data are both set to 2 through cross-validation. For each real data set, τmax is chosen to be around T/4, and smax is 1 for each data set since the location space is normalized before training. The hyper-parameters of DNSK+Softplus are the same as those of DNSK+Barrier. For RMTPP, NH, and THP the batch size is 32 and the learning rate is 10^{-3}. For the others, the batch size is 64 and the learning rate is 10^{-1}. The quantitative results are collected by running each experiment 5 independent times. All experiments are implemented on Google Colaboratory (Pro version) with 25GB RAM and a Tesla T4 GPU.

C.1 SYNTHETIC RESULTS WITH 2D & 3D KERNEL
In this section we present additional experimental results for the synthetic data sets with the 2D exponential and 3D non-stationary mixture kernels. Our proposed model successfully recovers the kernel and the event conditional intensity in both cases. Note that the recovery of the 3D mixture kernel demonstrates the capability of our model to handle complex event dependency with mixture patterns by conveniently setting the temporal and spatial ranks to be more than 1.

C.2 ATLANTA TEXTUAL CRIME DATA WITH HIGH-DIMENSIONAL MARKS
Figure A.4 visualizes the fitting and prediction results of DNSK+Barrier. Our model presents a decaying pattern in the temporal effect and captures two different patterns of spatial influence for incidents in the northeast. In addition, the in-sample and out-of-sample intensity predictions demonstrate the ability of DNSK to characterize the event occurrences by showing different conditional intensities.
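For reference, the mark preprocessing described in C.2 (TF-IDF features compressed into a d = 50 binary mark with an RBM) can be sketched with standard tools. The corpus, the hyperparameters, and the 0.5 binarization threshold below are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch (not the authors' code): TF-IDF featurization of crime text followed
# by a Bernoulli RBM that encodes the sparse representation into a 50-dimensional
# binary mark per event.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import BernoulliRBM

corpus = [
    "vehicle broken into near downtown parking deck",
    "residential burglary rear door forced entry",
    "armed robbery at gas station two suspects fled",
]  # stand-in for the 4644 incident descriptions

tfidf = TfidfVectorizer()                        # in the paper this yields 5012 features
X = tfidf.fit_transform(corpus).toarray()        # rows lie in [0, 1] after l2 normalization

rbm = BernoulliRBM(n_components=50, learning_rate=0.05, n_iter=200, random_state=0)
rbm.fit(X)
hidden_probs = rbm.transform(X)                  # P(h = 1 | v) for each document
binary_marks = (hidden_probs > 0.5).astype(int)  # 50-dimensional binary mark per event
print(binary_marks.shape)
```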
1. What is the focus of the paper regarding modeling non-stationary spatio-temporal events? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and computational complexity? 3. What are the weaknesses of the paper, especially regarding the design of the kernel function and its properties? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions about the comparison with other works, such as THP, and the advantages and disadvantages of the proposed method compared to them?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This manuscript proposes a method to model non-stationary spatio-temporal events in the framework of the Hawkes process. The specific method is to construct a more refined kernel function.
Strengths And Weaknesses
Strengths: The idea of using a deep non-stationary kernel in a point process to model spatio-temporal data is interesting and somewhat novel. The method is clearly described and a computational complexity analysis is provided. The experiments are sufficient to demonstrate the effectiveness of the proposed method.
Weaknesses: The kernel k is designed with heuristics; can it be guaranteed that k has good properties, such as being positive definite? It is said that "(ii) The non-stationarity of events’ influence over spatial-temporal space can be conveniently captured by non-constant psi_l and u_r". I wonder if this is like a kind of positional encoding of the time index. If so, the transformer also has the ability to model non-stationary temporal data, and the experimental results show that THP performs comparably to the proposed method. The authors did not provide a detailed analysis of THP, nor did they compare the differences and relative advantages of THP and the proposed method.
Clarity, Quality, Novelty And Reproducibility
This manuscript is clear and somewhat novel.
ICLR
Title Spatio-temporal point processes with deep non-stationary kernels Abstract Point process data are becoming ubiquitous in modern applications, such as social networks, health care, and finance. Despite the powerful expressiveness of the popular recurrent neural network (RNN) models for point process data, they may not successfully capture sophisticated non-stationary dependencies in the data due to their recurrent structures. Another popular type of deep model for point process data is based on representing the influence kernel (rather than the intensity function) by neural networks. We take the latter approach and develop a new deep non-stationary influence kernel that can model non-stationary spatio-temporal point processes. The main idea is to approximate the influence kernel with a novel and general low-rank decomposition, which enables an efficient representation through deep neural networks as well as computational efficiency and better performance. We also take a new approach to maintaining the non-negativity constraint of the conditional intensity by introducing a log-barrier penalty. We demonstrate our proposed method's good performance and computational efficiency compared with the state-of-the-art on simulated and real data. 1 INTRODUCTION Point process data, consisting of sequential events with timestamps and associated information such as location or category, are ubiquitous in modern scientific fields and real-world applications. The distribution of events is of great scientific and practical interest, both for predicting new events and for understanding the events' generative dynamics (Reinhart, 2018). To model such discrete events in continuous time and space, spatio-temporal point processes (STPPs) are widely used in a diverse range of domains, including modeling earthquakes (Ogata, 1988; 1998), the spread of infectious diseases (Schoenberg et al., 2019; Dong et al., 2021), and wildfire propagation (Hering et al., 2009). A modeling challenge is to accurately capture the underlying generative model of event occurrence in general spatio-temporal point processes (STPPs) while maintaining model efficiency. Specific parametric forms of the conditional intensity were proposed in the seminal works on the Hawkes process (Hawkes, 1971; Ogata, 1988) to tackle the computational complexity of STPPs, which requires evaluating a complex multivariate integral in the likelihood function. They use an exponentially decaying influence kernel to measure the influence of a past event over time and assume the influence of all past events is positive and linearly additive. Despite their computational simplicity (since the integral in the likelihood function is avoided), such parametric forms limit the model's practicality in modern applications. Recent models use neural networks in modeling point processes to capture complicated event occurrences. RNNs (Du et al., 2016) and LSTMs (Mei and Eisner, 2017) have been used by taking advantage of their representation power and capability in capturing temporal dependencies between events. However, the recurrent structures of RNN-based models cannot capture long-range dependencies (Bengio et al., 1994), and attention-based structures (Zhang et al., 2020; Zuo et al., 2020) have been introduced to address such limitations of RNNs. Despite much development, existing models still cannot sufficiently capture spatio-temporal non-stationarity, which is common in real-world data (Graham et al., 2013; Dong et al., 2021).
Moreover, while RNN-type models may produce strong prediction performance, these models consist of general forms of network layers, and the modeling power relies on the hidden states; they are therefore often not easily interpretable. A promising approach to overcoming the above model restrictions is point process models that combine statistical models with neural network representations, such as Zhu et al. (2022) and Chen et al. (2020), to enjoy both interpretability and the expressive power of neural networks. In particular, the idea is to represent the (possibly non-stationary) influence kernel based on a spectral decomposition and to represent the basis functions using neural networks. However, the prior work (Zhu et al., 2022) is not specifically designed for non-stationary kernels, and the low-rank representation can be made significantly more efficient, which is the main focus of this paper. Contribution. In this paper, we develop a non-stationary kernel (referred to as DNSK) for (possibly non-stationary) spatio-temporal processes that enjoys an efficient low-rank representation, which leads to much improved computational efficiency and predictive performance. The construction is based on the interesting observation that by reparameterizing the influence kernel from the original form k(t′, t) (where t′ is the historical event time and t is the current time) to the equivalent form k(t′, t − t′) (which is thus parameterized by the displacement t − t′ instead), the rank can be reduced significantly, as shown in Figure 1. This observation inspired us to design a much more efficient representation of non-stationary point processes with far fewer basis functions representing the same kernel. In summary, the contributions of our paper include: • We introduce an efficient low-rank representation of the influence kernel based on a novel "displacement" re-parameterization. Our representation can well-approximate a large class of general non-stationary influence kernels and is generalizable to spatio-temporal kernels (also potentially to data with high-dimensional marks). The efficient representation leads to lower computational cost and better prediction power, as demonstrated in our experiments. • In model fitting, we introduce a log-barrier penalty term in the objective function to ensure a non-negative conditional intensity function, so that the model is statistically meaningful and the problem is numerically stable. This approach also enables the model to learn general influence functions (that can take negative values), which is a drastic improvement over existing influence-kernel-based methods that require the kernel functions to be non-negative. • Using extensive synthetic and real data experiments, we show the competitive performance of our proposed method in both model recovery and event prediction compared with the state-of-the-art, such as RNN-based and transformer-based models. 1.1 RELATED WORKS The original work of A. Hawkes (Hawkes, 1971) provides classic self-exciting point processes for temporal events, which express the conditional intensity function with an influence kernel and a base rate. Ogata (1998) proposes a parametric form of the spatio-temporal influence kernel which enjoys strong model interpretability and efficiency. However, such simple parametric forms have limited expressiveness in characterizing the complex dynamics of events in modern applications (Zhu et al., 2021; Liao et al., 2022).
Neural networks have been widely adopted in point processes (Xiao et al., 2017; Chen et al., 2020; Zhu et al., 2021). Du et al. (2016) incorporate recurrent neural networks, and Mei and Eisner (2017) use a continuous-time variant of the LSTM to model event influence with exponential decay over time. These RNN-based models may be unable to capture complicated event dependencies due to their recurrent structure. Zhang et al. (2020); Zuo et al. (2020) introduce self-attentive structures into point processes for their capability to memorize long-term influence by dealing with an event sequence as a whole. The main limitation is that they assume a dot-product-based score function and a linearly decaying event influence. Omi et al. (2019) propose a fully-connected neural network to model the cumulative intensity function to go beyond parametric decaying influence. However, the event embeddings are still generated by an RNN, and fitting the cumulative intensity function with neural networks lacks model interpretability. Note that all the above models tackle temporal events with categorical marks, which makes them inapplicable in continuous time and location spaces. Recent works adopt neural networks in learning the influence kernel function. The kernel introduced in Okawa et al. (2021) uses neural networks to model the latent dynamics of time intervals but still assumes an exponentially decaying influence over time. Zhu et al. (2022) propose a kernel representation using spectral decomposition and represent the feature functions using deep neural networks to harness powerful model expressiveness when dealing with marked event data. Our method considers an alternative novel kernel representation that allows a general kernel to be expressed with an even lower rank. 2 BACKGROUND Spatio-temporal point processes (STPPs) (Reinhart, 2018; Moller and Waagepetersen, 2003) have been widely used to model sequences of random events that happen in continuous time and space. Let H = {(t_i, s_i)}_{i=1}^{n} denote the event stream with time t_i ∈ [0, T] ⊂ R and location s_i ∈ S ⊂ R^{dS} of the ith event. The event number n is also random. Given the observed history Ht = {(t_i, s_i) ∈ H | t_i < t} before time t, an STPP is then fully characterized by the conditional intensity function λ(t, s | Ht) = lim_{∆t↓0, ∆s↓0} E[N([t, t + ∆t] × B(s, ∆s)) | Ht] / (|B(s, ∆s)| ∆t), (1) where B(s, ∆s) is a ball centered at s ∈ R^{dS} with radius ∆s, and the counting measure N is defined as the number of events occurring in [t, t + ∆t] × B(s, ∆s) ⊂ R^{dS+1}. Naturally λ(t, s | Ht) ≥ 0 for any arbitrary t and s. In the following, we omit the dependency on the history Ht and use the common shorthand λ(t, s). The log-likelihood of observing H on [0, T] × S is given by (Daley et al., 2003) ℓ(H) = Σ_{i=1}^{n} log λ(t_i, s_i) − ∫_0^T ∫_S λ(t, s) ds dt. (2) Neural point processes parameterize the conditional intensity function by taking advantage of recurrent neural networks (RNNs). In Du et al. (2016), an input vector x_i, which extracts the information of event t_i and the associated event attributes m_i (which can be an event mark or location), is fed into the RNN. A hidden state vector h_i is updated by h_i = ρ(h_{i−1}, x_i), where ρ(·) is a mapping fulfilled by recurrent neural network operations. The conditional intensity function on (t_i, t_{i+1}] is then defined as λ(t) = δ(t, h_i), where δ is an exponential transformation that guarantees a positive intensity. In Mei and Eisner (2017) the RNN is replaced by a continuous-time LSTM module with hidden states h(t) defined on [0, T] and a Softplus function δ.
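To make equation 2 concrete, the following hedged sketch evaluates the log-likelihood of a short spatio-temporal sequence for a given conditional intensity, approximating the double integral on a grid. The toy Hawkes-style intensity `lam`, the rectangular location space, and the grid sizes are assumptions of the sketch and are not tied to any particular model in this section.

```python
# Hedged sketch (not from the paper): evaluating ell(H) = sum_i log lambda(t_i, s_i)
# - int_0^T int_S lambda(t, s) ds dt for a toy intensity on a grid.
import numpy as np

def lam(t, s, history):
    # toy Hawkes-style intensity: baseline plus exponentially decaying influence
    val = 0.2
    for (ti, si) in history:
        if ti < t:
            val += 0.5 * np.exp(-(t - ti)) * np.exp(-5.0 * np.sum((s - si) ** 2))
    return val

events = [(0.5, np.array([0.1, 0.2])), (1.1, np.array([-0.3, 0.0])), (2.4, np.array([0.2, 0.2]))]
T = 5.0
xs = np.linspace(-1, 1, 25)                      # S = [-1, 1]^2
ts = np.linspace(0, T, 100)
dA, dt = (xs[1] - xs[0]) ** 2, ts[1] - ts[0]

log_sum = sum(np.log(lam(ti, si, events)) for ti, si in events)
integral = sum(lam(t, np.array([x, y]), events) * dA * dt
               for t in ts for x in xs for y in xs)
log_lik = log_sum - integral
print(log_lik)
```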
Attention-based models are introduced in Zuo et al. (2020); Zhang et al. (2020) to overcome the inability of RNNs to capture sophisticated event dependencies due to their recurrent structures. The Hawkes process (Hawkes, 1971) is a well-known generalized point process model. Assuming the influences from past events are linearly additive, the conditional intensity function takes the form λ(t, s) = µ + Σ_{(t′,s′)∈Ht} k(t′, t, s′, s), (3) where k is an influence kernel function that captures event interactions. Commonly the kernel function is assumed to be stationary, that is, k only depends on t − t′ and s − s′, which limits the model expressivity. In this work, we aim to capture complicated non-stationarity in spatio-temporal event dependencies by leveraging the strong approximation power of neural networks in kernel fitting. 3 LOW-RANK DEEP NON-STATIONARY KERNEL Due to the intricate dependencies between events, it is challenging to choose a kernel function form that achieves great model expressiveness while enjoying high model efficiency. In this section, we introduce a unified model with a low-rank deep non-stationary kernel to capture the complex heterogeneity in events' influence over the spatio-temporal space. 3.1 KERNEL WITH HISTORY AND SPATIO-TEMPORAL DISPLACEMENT For the influence kernel function k(t′, t, s′, s), by using the displacements in t and s as variables, we first re-parameterize the kernel as k(t′, t − t′, s′, s − s′), where the minus in s − s′ refers to the element-wise difference between s and s′ when dS > 1. Then we achieve a finite-rank decomposed representation based on (truncated) singular value decomposition (SVD) for kernel functions (Mollenhauer et al., 2020) (which can be understood as the kernel version of matrix SVD, where the eigendecomposition is based on Mercer's Theorem (Mercer, 1909)), and on the fact that the decomposed spatial (and temporal) kernel functions can be approximated under shared basis functions (cf. Assumption A.2). The resulting approximate finite-rank representation is written as (details are in Appendix A.1) k(t′, t − t′, s′, s − s′) = Σ_{r=1}^{R} Σ_{l=1}^{L} α_{lr} ψ_l(t′) φ_l(t − t′) u_r(s′) v_r(s − s′). (4) Here {ψ_l, φ_l : [0, T] → R, l = 1, . . . , L} are two sets of temporal basis functions that characterize the temporal influence of an event at t′ and the decaying effect brought by the elapsed time t − t′. Similarly, the spatial basis functions {u_r, v_r : S → R, r = 1, . . . , R} capture the spatial influence of an event at s′ and the decayed influence after spreading over the displacement s − s′. The corresponding weight α_{lr} at each spatio-temporal rank combines the sets of basis functions into a weighted summation, leading to the final expression of the influence kernel k. To further enhance the model expressiveness, we use a fully-connected neural network to represent each basis function. The history or displacement is taken as the input and fed through multiple hidden layers equipped with the Softplus non-linear activation function. To allow for inhibiting influence from past events (negative values of the influence kernel k), we use a linear output layer for each neural network. For an influence kernel with temporal rank L and spatial rank R, we need 2(L + R) independent neural networks for modeling. The benefits of our proposed kernel framework lie in the following: (i) The kernel parameterization with displacement significantly reduces the rank needed to represent the complicated kernels encountered in practice, as shown in Figure 1.
(ii) The non-stationarity of the influence of historical events over the spatio-temporal space can be conveniently captured by inhomogeneous {ψ_l}_{l=1}^{L}, {u_r}_{r=1}^{R}, making the model applicable to general STPPs. (iii) The propagating patterns of influence are characterized by {φ_l}_{l=1}^{L}, {v_r}_{r=1}^{R}, which go beyond simple parametric forms. In particular, when the events' influence has finite range, i.e., there exist τmax and amax such that the influence decays to zero if |t − t′| > τmax or ∥s − s′∥ > amax, we can restrict the parameterization of {φ_l}_{l=1}^{L} and {v_r}_{r=1}^{R} to a local domain [0, τmax] × B(0, amax) instead of [0, T] × S, which further reduces the model complexity. Details of choosing the kernel and neural network architectures are described in Appendix C. Remark 1 (the class of influence kernels expressed). The proposed deep kernel representation covers a large class of non-stationary kernels generally used in STPPs. In particular, the proposed form of the kernel does not need to be positive semi-definite or even symmetric (Reinhart, 2018). The low-rank decomposed formulation in equation 4 is of SVD type (cf. Appendix A.1). While each φ_l (and v_r) can be viewed as stationary (i.e., shift-invariant), the combination with the left modes in the summation makes it possible to model spatio-temporal non-stationarity. The technical assumptions A.1 and A.2 do not require more than the existence of a low-rank decomposition motivated by the kernel SVD. As long as the 2(R + L) functions ψ_l, φ_l, u_r, v_r are sufficiently regular, they can be approximated and learned by a neural network. The universal approximation power of neural networks enables our framework to express a broad range of general kernel functions, and the low-rank decomposed form reduces the modeling of a spatio-temporal kernel to finitely many functions on the time and space domains, respectively (the right modes are on truncated domains). 4 EFFICIENT COMPUTATION OF MODEL We consider model optimization through maximum likelihood estimation (MLE) (Reinhart, 2018). The resulting conditional intensity function could now become negative since we allow inhibiting historical influence. A common approach to guaranteeing non-negativity is to adopt a nonlinear positive activation function in the conditional intensity (Du et al., 2016; Zhu et al., 2022). However, the integral of such a nonlinear intensity over the spatio-temporal space is computationally expensive. To tackle this, we first introduce a log-barrier to the MLE optimization problem to guarantee the non-negativity of the conditional intensity function λ and maintain its linearity. Then we provide a computationally efficient strategy that benefits from the linearity of the conditional intensity. The extension of the approach to point process data with marks is given in Appendix B. 4.1 MODEL OPTIMIZATION WITH LOG-BARRIER We re-denote ℓ(H) in equation 2 by ℓ(θ) in terms of the model parameters θ. The constrained MLE optimization problem for model parameter estimation can be formulated as min_θ −ℓ(θ), s.t. −λ(t, s) ≤ 0, ∀t ∈ [0, T], ∀s ∈ S. We introduce a log-barrier method (Boyd et al., 2004) to ensure the non-negativity of λ and penalize the values of λ on a dense enough grid Ubar,t × Ubar,s ⊂ [0, T] × S. The log-barrier is defined as p(θ, b) := − (1 / |Ubar,t × Ubar,s|) Σ_{ct=1}^{|Ubar,t|} Σ_{cs=1}^{|Ubar,s|} log(λ(t_ct, s_cs) − b), (5) where ct, cs index the grid, and b is a lower bound of the conditional intensity function on the grid that guarantees the feasibility of the logarithm.
The MLE optimization problem can be written as min_θ L(θ) := −ℓ(θ) + (1/w) p(θ, b) = −(Σ_{i=1}^{n} log λ(t_i, s_i) − ∫_0^T ∫_S λ(t, s) ds dt) − (1 / (w|Ubar,t × Ubar,s|)) Σ_{ct=1}^{|Ubar,t|} Σ_{cs=1}^{|Ubar,s|} log(λ(t_ct, s_cs) − b), (6) where w is a weight that controls the trade-off between the log-likelihood and the log-barrier; w and b can be set accordingly during the learning procedure. Details can be found in Appendix A.2. Note that previous works (Du et al., 2016; Mei and Eisner, 2017; Pan et al., 2021; Zuo et al., 2020; Zhu et al., 2022) use a scaled positive transformation to guarantee a non-negative conditional intensity function. Compared with them, the log-barrier method preserves the linearity of the conditional intensity function. As shown in Table 1, such a log-barrier method enables efficient model computation (see Section 4.2 for more details) and enhances the model recovery power. 4.2 MODEL COMPUTATION The log-likelihood computation of general STPPs (especially those with a general influence function) is often difficult, requires numerical integration, and is thus time-consuming. Given a sequence of events {x_i = (t_i, s_i)}_{i=1}^{n} of length n, the complexity of neural network evaluation is O(n²) for the log-summation term and O(Kn) (K ≫ n) when using numerical integration for the double-integral term with K sampled points in a multi-dimensional space. In the following, we circumvent this difficulty by proposing an efficient computation of L(θ) with O(n) neural network evaluations through a domain discretization strategy. Computation of the log-summation. The first log-summation term in equation 2 can be written as Σ_{i=1}^{n} log λ(t_i, s_i) = Σ_{i=1}^{n} log(µ + Σ_{t_j<t_i} Σ_{r=1}^{R} Σ_{l=1}^{L} α_{lr} ψ_l(t_j) φ_l(t_i − t_j) u_r(s_j) v_r(s_i − s_j)). (7) Note that each ψ_l only needs to be evaluated at the event times {t_i}_{i=1}^{n}, and each u_r is evaluated at all the event locations {s_i}_{i=1}^{n}. To avoid redundant evaluations of φ_l over every pair (t_i, t_j), we set up a uniform grid U_t over the time horizon [0, τmax] and evaluate φ_l on the grid. The value of φ_l(t_i − t_j) can then be obtained by linear interpolation from the values on the two grid points adjacent to t_i − t_j. By doing so, we only need to evaluate φ_l |U_t| times on the grid. Note that φ_l can simply be set to 0 when t_i − t_j > τmax without any neural network evaluation. Here we directly evaluate v_r(s_i − s_j), since numerical interpolation is less accurate in the location space. Note that one does not need to evaluate every index pair (i, j). Instead, we have I := {(i, j) | v_r(s_i − s_j) will be computed} = {(i, j) | t_j < t_i ≤ t_j + τmax} ∩ {(i, j) | ∥s_i − s_j∥ ≤ amax}. We use 0 for the other pairs (i, j). Computation of the integral. A benefit of our approach is that we avoid numerical integration of the conditional intensity function (needed to evaluate the likelihood function), since the design of the kernel allows us to decompose the desired integral into integrals of the basis functions. Specifically, we have ∫_0^T ∫_S λ(t, s) ds dt = µ|S|T + Σ_{i=1}^{n} ∫_0^T ∫_S I(t_i < t) k(t_i, t, s_i, s) ds dt = µ|S|T + Σ_{i=1}^{n} Σ_{r=1}^{R} u_r(s_i) ∫_S v_r(s − s_i) ds Σ_{l=1}^{L} α_{lr} ψ_l(t_i) ∫_0^{T−t_i} φ_l(t) dt. (8) To compute the integral of φ_l, we take advantage of the pre-computed φ_l on the grid U_t. Let F_l(t) := ∫_0^t φ_l(τ) dτ. Then F_l(T − t_i) can be computed by linear interpolation of the values of F_l at the two grid points (in U_t) adjacent to T − t_i. In particular, F_l evaluated on U_t equals the cumulative sum of φ_l multiplied by the grid width.
The integral of v_r can be estimated based on a grid U_s in B(0, amax) ⊂ R^{dS}, since v_r decays outside the ball. For each s_i, ∫_S v_r(s − s_i) ds = ∫_{B(0,amax) ∩ {S − s_i}} v_r(s) ds, where S − s_i := {s′ | s′ = s − s_i, s ∈ S}. Thus the integral is well estimated from the evaluations of v_r on the grid set U_s ∩ {S − s_i}. Note that in practice we only evaluate v_r on U_s once and use subsets of these evaluations for different s_i. More details about the grid-based computation can be found in Appendix A.3. Computation of the log-barrier. The barrier term p(θ, b) is calculated in a way similar to equation 7 by replacing (t_i, s_i, µ) with (t_ct, s_cs, µ − b), i.e., we use interpolation to calculate φ_l(t_ct − t_j) and evaluate v_r on a subset of {(s_cs, s_j)}, cs = 1, . . . , |Ubar,s|, j = 1, . . . , n. 4.3 COMPUTATIONAL COMPLEXITY The evaluation of {u_r}_{r=1}^{R} and {ψ_l}_{l=1}^{L} over n events costs O((R + L)n). The evaluation of {φ_l}_{l=1}^{L} is of O(L|Ut|) complexity since it relies on the grid U_t. The evaluation of {v_r}_{r=1}^{R} costs no more than O(RCτmax n) + O(R|Us|). We note that L, R, τmax, |Ut|, |Us| are all constants much smaller than the event number n, so the overall computational complexity is O(n). We compare the model training time per epoch for a baseline equipped with a softplus activation function (NSMPP) and our model with the log-barrier method (DNSK+Barrier) on a 1D synthetic data set and a 3D synthetic data set. The quantitative results in Table 1 demonstrate the efficiency improvement of our model from using the log-barrier technique. More details about the computational complexity analysis can be found in Appendix A.4. 5 EXPERIMENT We use large-scale synthetic and real data sets to demonstrate the superior performance of our model and present the results in this section. Experimental details and results can be found in Appendix C. Code will be released upon publication. Baselines. We compare our method (DNSK+Barrier) with: (i) Recurrent marked temporal point processes (RMTPP) (Du et al., 2016); (ii) Neural Hawkes (NH) (Mei and Eisner, 2017); (iii) Transformer Hawkes process (THP) (Zuo et al., 2020); (iv) Parametric Hawkes process (PHP+exp) with an exponentially decaying spatio-temporal kernel; (v) Neural spectral marked point processes (NSMPP) (Zhu et al., 2022); (vi) DNSK without the log-barrier but with a non-negative Softplus activation function (DNSK+Softplus). We note that RMTPP, NH, and THP directly model the conditional intensity function using neural networks, while the others learn the influence kernel in the framework of equation 3. In particular, NSMPP designs the kernel based on singular value decomposition but parameterizes it without displacement. The model parameters are estimated using the training data via the Adam optimization method (Kingma and Ba, 2014). Details of training can be found in Appendices A.2 and C. 5.1 SYNTHETIC DATA EXPERIMENTS Synthetic data sets. To show the effectiveness of DNSK+Barrier, we run all the models on three temporal data sets and three spatio-temporal data sets generated by the following true kernels: (i) 1D exponential kernel; (ii) 1D non-stationary kernel; (iii) 1D infinite-rank kernel; (iv) 2D exponential kernel; (v) 3D non-stationary inhibition kernel; (vi) 3D non-stationary mixture kernel. The data sets are generated using the thinning algorithm in Daley and Vere-Jones (2008); a sketch is given below. Each data set is composed of 2000 sequences. Details of the kernel formulas and data generation can be found in Appendix C. We consider two performance metrics for testing data evaluation: the mean relative error (MRE) of the predicted intensity and the log-likelihood.
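The thinning-based generation mentioned above (see also Algorithm 2 in Appendix A.2) can be sketched as follows. This is a hedged illustration rather than the authors' generator: the toy intensity `lam`, the bound `lam_bar`, and the unit-square location space are assumptions.

```python
# Hedged sketch (not the authors' code): thinning-based generation of one sequence.
# Candidate events are drawn at rate lam_bar and accepted with probability
# lam(t, s | H) / lam_bar, mirroring Algorithm 2.
import numpy as np

rng = np.random.default_rng(0)
T, lam_bar = 10.0, 3.0                             # horizon and intensity upper bound

def lam(t, s, history):
    val = 0.5
    for (ti, si) in history:
        val += 0.8 * np.exp(-(t - ti)) * np.exp(-8.0 * np.sum((s - si) ** 2))
    return val

history, t = [], 0.0
while True:
    t -= np.log(rng.uniform()) / lam_bar           # next candidate time
    if t >= T:
        break
    s = rng.uniform(-1.0, 1.0, size=2)             # candidate location on S = [-1, 1]^2
    if rng.uniform() * lam_bar <= lam(t, s, history):
        history.append((t, s))                     # accept the candidate event
print(len(history), "events generated")
```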
The true and predicted λ∗(x), λ̂(x) can be calculated using equation 4 with true and learned kernel. The MRE for one test trajectory is defined as∫ X |λ ∗(x)− λ̂(x)|/λ∗(x)dx and the averaged MRE over all test trajectories is reported. The loglikelihood for observing each testing sequence can be computed according to equation 2, and the average predictive log-likelihood per event is reported. The log-likelihood shows the model’s goodness-of-fit, and the intensity evaluation further reflects the model’s ability to recover the underlying mechanism of event occurrence and predict the future. The heat maps in Figure 2 visualize the results of non-stationary kernel recovery for DNSK+Barrier and NSMPP on 1D Data set 2 and 3 (The true kernel used in 1D Data set 3 is the one in Figure 1). DNSK+Barrier recovers the true kernel more accurately than NSMPP, indicating the strong representation power of the low-rank kernel parameterization with displacements. Line charts in Figure 2 present the recovered intensities with the true ones (dark grey curves). It demonstrates that our method can accurately capture the temporal dynamics of events. In particular, the average conditional intensity λ over multiple testing sequences shows the model’s ability to recover data non-stationarity over time. While DNSK+Barrier successfully captures the non-stationarity among data, both RMTPP and NH fail to do so by showing a flat curve of the averaged intensity. Note that THP with positional encoding recovers the data non-stationarity (as shown in two figures in the last column). However, our method still outperforms THP which suffers from limited model expressiveness when complicated propagation of event influence is involved (see two figures in the penultimate column). Tabel 2 summarized the quantitative results of testing log-likelihood and MRE. It shows that DNSK+Barrier has superior predictive performance against baselines in characterizing the dynamics of data generation in spatio-temporal space. Specifically, with evidently over-parameterization for 1D Data set 1 generated by a stationary exponentially decaying kernel, our model can still approximate the kernel and recover the true conditional intensity without overfitting, which shows the adaptiveness of our model. Moreover, DNSK+Barrier enjoys outstanding performance gain when learning a diverse variety of complicated non-stationary kernels. The comparison between DNSK+Softplus and DNSK+Barrier proves that the model with log-barrier achieves a better recovery performance by maintaining the linearity of the conditional intensity. THP outperforms RMTPP in non-stationary cases but is still limited due to its pre-assumed parametric form of influence propagation. More results about kernel and intensity recovery can be found in Appendix C. 5.2 REAL DATA RESULTS Real data sets. We provide a comprehensive evaluation of our approach on several real-world data sets: we first use two popular data sets containing time-stamped events with categorical marks to demonstrate the robustness of DNSK+Barrier in marked STPPs (refer to Appendix B for detailed definition and kernel modeling): (i) Financial Transactions. (Du et al., 2016). This data set contains transaction records of a stock in one day with time unit milliseconds and the action (mark) of each transaction. We partition the events into different sequences by time stamps. 
(ii) StackOverflow (Leskovec and Krevl, 2014): The data is collected from the website StackOverflow over two years, containing reward records for users who promote engagement in the community. Each user’s reward history is treated as a sequence. Next, we demonstrate the practical versatility of the model using the following spatio-temporal data sets: (i) Southern California earthquake data provided by Southern California Earthquake Data Center (SCEDC) contains time and location information of earthquakes in Southern California. We collect 19,414 records from 1999 to 2019 with magnitude larger than 2.5 and partition the data into multiple sequences by month with average length of 40.2. (ii) Atlanta robbery & burglary data. Atlanta Police Department (APD) provides a proprietary data source for city crime. We extract 3420 reported robberies and 14958 burglaries with time and location from 2013 to 2019. Two crime types are preprocessed as separate data sets on a 10-day basis with average lengths of 13.7 and 58.7. Finally, the model’s ability to tackle high-dimensional marks is evaluated with Atlanta textual crime data. The proprietary data set provided by APD records 4644 crime incidents from 2016 to 2017 with time, location, and comprehensive text descriptions. The text information is preprocessed by TF-IDF technique, leading to a 5012-dimensional mark for each event. Table 3 summarizes the results of models dealing with categorical marks. Event time and type prediction are evaluated by Root Mean Square Error (RMSE) and accuracy, respectively. We can see that DNSK+Barrier outperforms the baselines in all prediction tasks by providing less time RMSE and higher type accuracy. For real-world spatio-temporal data, we report average predictive log-likelihood per event for the testing set since MRE is not applicable. Besides, we perform online prediction for earthquake data to demonstrate the model predicting ability. The probability density function f(t, s) which represents the conditional probability that the next event will occur at (t, s) given history Ht can be written as f(t, s) = λ(t, s) exp ( − ∫ S ∫ t tn λ(τ, ν)dτdν ) . The predicted time and location of the next event can be computed as E [tn+1|Ht] = ∫∞ tn t ∫ S f(t, s)dsdt, E [sn+1|Ht] = ∫ S s ∫∞ tn f(t, s)dtds. We predict the the time and location of the last event in each sequence. The mean absolute error (MAE) of the predictions is computed. The quantitative results in Table 4 show that DNSK+Barrier provides more accurate predictions than other alternatives with higher event log-likelihood. To demonstrate our model’s interpretability and power to capture heterogeneous data characteristics, we visualize the learned influence kernels and predicted conditional intensity for two crime categories in Figure 3. The first column shows kernel evaluations at fixed geolocation in downtown Atlanta which intuitively reflect the spatial influence of crimes in that neighborhood. The influence of a robbery in the downtown area is more intensive but regional, while a burglary which is hard to be responded to by police in time would impact a larger neighborhood along major highways of Atlanta. We also provide the predicted conditional intensity over space for two crimes. As we can observe, DNSK+Barrier captures the occurrence of events in regions with a higher crime rate, and crimes of the same category happening in different regions would influence their neighborhoods differently. 
We note that this example emphasizes the ability of the proposed method to recover data non-stationarity with different sequence lengths, and improve the limited model interpretability of other neural network-based methods (RMTPP, NH, and THP) in practice. For Atlanta textual crime data, we borrow the idea in Zhu and Xie (2022) by encoding the highly sparse TF-IDF representation into a binary mark vector with dimension d = 50 using Restricted Boltzmann Machine (RBM) (Fischer and Igel, 2012). The average testing log-likelihoods per event for each model are reported in Table 4. The results show that DNSK+Barrier outperforms PHP+exp in Zhu and Xie (2022) and NSMPP by achieving a higher testing log-likelihood. We visualize the basis functions of learned influence kernel by DNSK+Barrier in Figure A.4 in Appendix. 6 CONCLUSION We propose a deep non-stationary kernel for spatio-temporal point processes using a low-rank parameterization based on displacement, which enables the model to be further low-rank when learning complicated influence kernel and significantly reduces model complexity. The non-negativity of the intensity is guaranteed by a log-barrier method that maintains the linearity of the conditional intensity function. Based on that, we propose a computationally efficient strategy for model estimation. The superior performance of our model is demonstrated using synthetic and real data sets. ACKNOWLEDGEMENT The work is partially supported by NSF DMS-2134037. Z.D. and Y.X. are partially supported by an NSF CAREER CCF-1650913, and NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF and the Alfred P. Sloan Foundation. A ADDITIONAL METHODOLOGY DETAILS A.1 DERIVATION OF EQUATION 4 We denote τ := t−t′, ν := s−s′, the variables t′ ∈ [0, T ], τ ∈ [0, τmax], s′ ∈ S and ν ∈ B(0, amax), where the sets S, B(0, amax) ⊂ R2. Viewing the spatial and temporal variables, i.e., (t′, τ) and (s′, ν), as left and right mode variables, respectively, the kernel function SVD (Mollenhauer et al., 2020; Mercer, 1909) of k gives that k(t′, τ, s′, ν) = ∞∑ k=1 σkgk(t ′, τ)hk(s ′, ν). (A.1) We assume that the SVD can be truncated at k ≤ K with a residual of ε for some small ε > 0, and this holds as long as the singular values σk decay sufficiently fast. To fulfill the approximate finite-rank representation, it suffices to have the scalars σk and the functions gk and hk so that the expansion approximates the kernel k, even if they are not SVD of the kernel. This leads to the following assumption: Assumption A.1. There exist coefficients σk, and functions gk(t′, τ), hk(s′, ν) s.t. k(t′, τ, s′, ν) = K∑ k=1 σkgk(t ′, τ)hk(s ′, ν) +O(ε). (A.2) To proceed, one can apply kernel SVD again to gk and hk respectively, and obtain left and right singular functions that potentially differ for different k. Here, we impose that across k = 1, · · · ,K, the singular functions of gk are the same (as shown below, being approximately same suffices) set of basis functions, that is, gk(t ′, τ) = ∞∑ l=1 βk,lψl(t ′)φl(τ). As we will truncate l to be up to a finite rank again (up to an O(ε) residual) we require the (approximately) shared singular modes only up to L. Similarly as above, technically it suffices to have a finite-rank expansion to achieve the O(ε) error without requiring them to be SVD, which leads to the following assumption where we assume the same condition for hk: Assumption A.2. 
For the gk and hk in equation A.2, up to an O(ε) error, (i) The K temporal kernel functions gk(t′, τ) can be approximated under a same set of left and right basis functions, i.e., there exist coefficients βkl, and functions ψl(t′), φl(τ) for l = 1, · · · , L, s.t. gk(t ′, τ) = L∑ l=1 βklψl(t ′)φl(τ) +O(ε), k = 1, · · · ,K. (A.3) (ii) The K spatial kernel functions hk(s′, ν) can be approximated under a same set of left and right basis functions, i.e., there exist coefficients γkr, and functions ur(s′), vr(ν) for r = 1, · · · , R, s.t. hk(s ′, ν) = R∑ r=1 γkrur(t ′)vr(ν) +O(ε), r = 1, · · · , R. (A.4) Inserting equation A.3 and equation A.4 into equation A.2 gives the rank-truncated representation of the kernel function. Since K, L, R are fixed numbers, assuming boundedness of all the coefficients and functions, we have the representation with the final residual as O(ε), namely, k(t′, τ, s′, ν) = L∑ l=1 R∑ r=1 K∑ k=1 σkβklγkrψl(t ′)φl(τ)ur(t ′)vr(ν) +O(ε). Defining ∑K k=1 σkβklγkr as αlr leads to equation 4. A.2 ALGORITHMS Algorithm 1 Model parameter estimation Input: Training set X , batch size M , epoch number E, learning rate γ, constant a > 1 to update s in equation 6. Initialization: model parameter θ0, first epoch e = 0, s = s0. while e < E do for each batch with size M do 1. For 1D temporal point process, compute ℓ(θ), {λ(tct)}ct=1,...,|Ubar,t|. For spatio-temporal point process, compute ℓ(θ), {λ(tct , scs)}ct=1,...,|Ubar,t|,cs=1,...,|Ubar,s|. 2. Set b = min{λ(tct)}ct=1,...,|Ubar,t|−ϵ (or min{{λ(tct , scs)}ct=1,...,|Ubar,t|,cs=1,...,|Ubar,s|−ϵ), where ϵ is a small value to guarantee logarithm feasibility. 3. Compute L(θ) = −ℓ(θ) + 1wp(θ, b). 4. Update θe+1 ← θe − γ ∂L∂θe . 5. e← e+ 1, w ← w · a end for end while Algorithm 2 Synthetic data generation Input: Model λ(·), T,S, Upper bound of conditional intensity λ̄. Initialization: HT = ∅, t = 0, n = 0 while t < T do 1. Sample u ∼ Unif(0, 1). 2. t← t− lnu/λ̄. 3. Sample s ∼ Unif(S), D ∼ Unif(0, 1). 4. λ = λ(t, s|HT ). if Dλ̄ ≤ λ then n← n+ 1; tn = t, sn = s. HT ← HT ∪ {(tn, sn)}. end if end while if tn >= T then returnHT − {(tn, sn)} else returnHT end if A.3 GRID-BASED MODEL COMPUTATION In this section, we elaborate on the details of the grid-based efficient model computation. In Figure A.1, we visualize the procedure of computing the integrals of ∫ T−ti 0 φl(t)dt and ∫ S vr(s− si)ds in equation 8, respectively. Panel (a) illustrates the calculation of ∫ T−ti 0 φl(t)dt. As explained in Section 4.2, the evaluations of φl only happens on the grid Ut over [0, τmax] (since φl(t) = 0 when t > τmax). The value of F (t) = ∫ t 0 φl(τ)dτ on the grid can be obtained through numerical integration. Then given ti, the value of F (T − ti) = ∫ T−ti 0 φl(t)dt is calculated using linear interpolation of F on two adjacent grid points of T − ti. Panel (b) shows the computation of ∫ S vr(s− si)ds. Given si, ∫ S vr(s − si)ds = ∫ B(0,amax)∩{S−si} vr(s)ds since vr(s) = 0 when s > amax. Then B(0, amax) is discretized into the grid Us, and ∫ S vr(s− si)ds can be calculated based on the value of vr on the grid points in Us ∩ S − si (the deep red dots in Figure A.1(b)) using numerical integration. To evaluate the sensitivity of our model to the chosen grids, we compare the performance of DNSK+Barrier on 3D Data set 2 using grids with different resolutions. The quantitative results of testing log-likelihood and intensity prediction error are reported in Table A.1. We use |Ut| = 50, |Us| = 1500 for the experiments in the main paper. 
As we can see, the model shows similar performances when a higher grid resolution is used and works slightly less accurately but still better than other baselines with less number of grid points. It reveals that our choice of grid resolution is accurate enough to capture the complex dynamics of event occurrences for this non-stationary data, and the model performance is robust to different grid resolutions. In practice, the grids can be flexibly chosen to reach the balance of model accuracy and computational efficiency. For instance, the number of uniformly distributed grid points along one dimension can be chosen around O(n0), where n0 is the average number of events in one observed sequence. Note that |Ut| or |Us| would be far less than the total number of observed events because we use thousands of sequences (2000 in our synthetic experiments) for model learning. And the grid size can be even smaller when it comes to non-Lebesgue-measured space. A.4 DETAILS OF COMPUTATIONAL COMPLEXITY We provide the detailed analysis of the O(n) computation complexity of L(θ) in Section 4.3 as following: • Computation of log-summation. The evaluation of {ur}Rr=1 and {ψl}Ll=1 over n events costs O((R + L)n) complexity. The evaluation of {φl}Ll=1 is of O(L|Ut|) complexity since it relies on the grid Ut. With the assumption that the conditional intensity is bounded by a constant C in a finite time horizon (Lewis and Shedler, 1979; Daley et al., 2003; Zhu et al., 2022), for each fixed j, the cardinality of set {(i, j) | tj < ti ≤ tj + τmax} is less than Cτmax, which leads to a O(RCτmaxn) complexity of {vr}Rr=1 evaluation. • Computation of integral. The integration of {φl}Ll=1 only relies on numerical operations of {φl}Ll=1 on grids Ut without extra evaluations of neural networks. The integration of {vr}Rr=1 depends on the evaluation on grid Us of O(R|Us|) complexity. • Computation of barrier. {φl}Ll=1 on grid Ubar,t is estimated by numerical interpolation of previously computed {φl}Ll=1 on grid Ut. Additional neural network evaluations of {vr}Rr=1 cost no more than O(RCτmaxn) complexity. B DEEP NON-STATIONARY KERNEL FOR MARKED STPPS In marked STPPs (Reinhart, 2018), each observed event is associated with additional information describing event attribute, denoted as m ∈M ⊂ RdM . LetH = {(ti, si,mi)}ni=1 denote the event sequence. Given the observed history Ht = {(ti, si,mi) ∈ H|ti < t}, the conditional intensity function of a marked STPPs is similarly defined as: λ (t, s,m) = lim ∆t↓0,∆s↓0,∆m↓0 E [N([t, t+∆t]×B(s,∆s)×B(m,∆m)) | Ht] |B(s,∆s)||B(m,∆m)|∆t , where B(m,∆m) is a ball centered at m ∈ RdM with radius ∆m. The log-likelihood of observing H on [0, T ]× S ×M is given by ℓ(H) = n∑ i=1 log λ (ti, si,mi)− ∫ T 0 ∫ S ∫ M λ(t, s,m)dmdsdt. B.1 KERNEL INCORPORATING MARKS One of the salient features of our spatio-temporal kernel framework is that it can be conveniently adopted in modeling marked STPPs with additional sets of mark basis functions {gq, hq}Qq=1. We modify the influence kernel function k accordingly as following: k(t′, t− t′, s′, s− s′,m′,m) = Q∑ q=1 R∑ r=1 L∑ l=1 αlrqψl(t ′)φl(t− t′)ur(s′)vr(s− s′)gq(m′)hq(m). Here m′,m ∈M ⊂ RdM and {gq, hq :M→ R, q = 1, . . . , Q} represented by independent neural networks model the influence of historical mark m′ and current mark m, respectively. Since the mark spaceM is always categorical and the difference between m′ and m is of little practical meaning, we use gq and hq to model m′ and m separately instead of modeling m−m′. 
B.2 LOG-BARRIER AND MODEL COMPUTATION The conditional intensity for marked spatio-temporal point processes at (t, s,m) can be written as: λ(t, s,m) = µ+ ∑ l,r,q αlrq ∑ (ti,si,mi)∈Ht ψl(ti)φ(t− ti)ur(si)vr(s− si)gq(mi)hq(m). We need to guarantee the non-negativity of λ over the space of [0, T ] × S ×M. When the total number of unique categorical mark inM is small, the log-barrier can be conveniently computed as the summation of λ on grids Ubar,t × Ubar,s ×M. In the following we focus on the case thatM is high-dimensional with O(n) number of unique marks. For model simplicity we use non-negative gq and hq in this case (which can be done by adding a non-negative activation function to the linear output layer in neural networks). We re-write λ(t, s,m) and denote as following: λ(t, s,m) = µ+ ∑ q ∑ l,r αlrq ∑ (ti,si,mi)∈Ht ψl(ti)φ(t− ti)ur(si)vr(s− si)gq(mi) ︸ ︷︷ ︸ F̂q(t,s) hq(m). Note that the function in the brackets are only with regard to t, s. We denote it as F̂q(t, s) (since it is in the rth rank of mark). Since hq(m) ≥ 0, the non-negativity of λ can be guaranteed by the non-negativity of F̂q(t, s). Thus we apply log-barrier method on F̂q(t, s). The log-barrier term becomes: p(θ, b) := − 1 Q|Ubar,t × Ubar,s| |Ubar,t|∑ ct=1 |Ubar,s|∑ cs=1 Q∑ q=1 log(F̂q(tct , scs)− b), Since our model is low-rank, the value of Q will not be large. For the model computation, the additional evaluations for {gq}Qq=1 on events is ofO(Qn) complexity and the evaluations for {hq}Qq=1 only depends on the unique number of marks which at most of O(n). The log-barrier method does not introduce extra evaluation in mark space. Thus the overall computation complexity for DNSK in marked STPPs is still O(n). C ADDITIONAL EXPERIMENTAL RESULTS In this section we provide details of data sets and experimental setup, together with additional experimental results. Synthetic data sets. To show the robustness of our model, we generate three temporal data sets and three spatio-temporal data sets using the following kernels: (i) 1D Data set 1 with exponential kernel: k(t′, t) = 0.8e−(t−t ′). (ii) 1D Data set 2 with non-stationary kernel: k(t′, t) = 0.3(0.5 + 0.5 cos(0.2t′))e−2(t−t ′). (iii) 1D Data set 3 with infinite rank kernel: k(t′, t) = 0.3 ∞∑ j=1 2−j ( 0.3 + cos(2 + ( t′ 5 )0.71.3(j + 1)π) ) e− 8(t−t′)2 25 j 2 (iv) 2D Data set 1 with exponential kernel: k(t′, t, s′, s) = 0.5e−1.5(t−t ′)e−0.8s ′ . (v) 3D Data set 1 with non-stationary inhibition kernel: k(t′, t, s′, s) = 0.3(1− 0.01t)e−2(t−t ′) 1 2πσ2s′ e − ∥s ′∥2 2σ2 s′ cos (10∥s− s′∥) 2πσ2s(1 + e 10(∥s−s′∥−0.5) e − ∥s−s ′∥2 2σ2s , where σs′ = 0.5, σs = 0.15. (vi) 3D Data set 2 with non-stationary mixture kernel: k(t′, t, s′, s) = 2∑ r=1 2∑ l=1 αrlur(s ′)vr(s− s′)ψl(t′)φl(t− t′) , where u1(s′) = 1−as(s′2+1), u2(s′) = 1−bs(s′2+1), v1(s−s′) = 12πσ21 e − ∥s−s ′∥2 2σ21 , v2(s− s′) = 1 2πσ22 e − ∥s−s ′−0.8∥2 2σ22 , ψ1(t ′) = 1 − att′, ψ2(t′) = 1 − btt′, φ1(t − t′) = e−β(t−t ′), φ2(t− t′) = (t− t′− 1) · I(t− t′ < 3), and as = 0.3, bs = 0.4, at = 0.02, bt = 0.02, σ1 = 0.2, σ2 = 0.3, β = 2, (α11, α12, α21, α22) = (0.6, 0.15, 0.225, 0.525). Note that kernel (iii) is the one we illustrated in Figure 1, which is of infinite rank according to the formulas. In Figure 1, the value matrix of k(t′, t) and k(t′, t − t′) are the kernel evaluations on a same 300× 300 uniform grid. As we can see, the rank of the value matrix of the same kernel k is reduced from 298 to 7 after changing to the displacement-based kernel parameterization. Details of Experimental setup. 
For RMTPP and NH we test embedding size of {32, 64, 128} and choose 64 for experiments. For THP we take the default experiment setting recommended by Zuo et al. (2020). For NSMPP we use the same model setting in Zhu et al. (2022) with rank 5. Each experiment is implemented by the following procedure: Given the data set, we split 90% of the sequences as training set and 10% as testing set. We use independent fully-connected neural networks with two-hidden layers for each basis function. Each layer contains 64 hidden nodes. The temporal rank of DNSK+Barrier is set to be 1 for synthetic data (i), (ii), (iv), (v), 2 for (vi), and 3 for (iii). The spatial rank is 1 for synthetic data (iv), (v) and 2 for (vi). The temporal and spatial rank for real data are both set to be 2 through cross validation. For each real data set, the τmax is chosen to be around T/4 and smax is 1 for each data set since the location space is normalized before training. The hyper-parameter of DNSK+Softplus are the same as DNSK+Barrier. For RMTPP, NH, and THP the batch size is 32 and the learning rate is 10−3. For others, the batch size is 64 and the learning rate is 10−1. The quantitative results are collected by running each experiment for 5 independent times. All experiments are implemented on Google Colaboratory (Pro version) with 25GB RAM and a Tesla T4 GPU. C.1 SYNTHETIC RESULTS WITH 2D & 3D KERNEL In this section we present additional experiment results for the synthetic data sets with 2D exponential and 3D non-stationary mixture kernel. Our proposed model successfully recovers the kernel and event conditional intensity in both case. Note that the recovery of 3D mixture kernel demonstrates the capability of our model to handle complex event dependency with mixture patterns by conveniently setting time and mark rank to be more than 1. C.2 ATLANTA TEXTUAL CRIME DATA WITH HIGH-DIMENSIONAL MARKS Figure A.4 visualizes the fitting and prediction results of DNSK+Barrier. Our model presents an decaying pattern in temporal effect and captures two different patterns of spatial influence for incidents in the northeast. Besides, the in-sample and out-of-sample intensity predictions demonstrate the ability of DNSK to characterize the event occurrences by showing different conditional intensities.
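For completeness, here is a minimal sketch of the thinning procedure of Algorithm 2, specialized to the purely temporal case with the exponential kernel of 1D Data set 1. The background rate μ = 1 and the bound λ̄ = 10 are illustrative assumptions rather than values used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def hawkes_intensity(t, events):
    """Conditional intensity with the exponential kernel of 1D Data set 1:
    k(t', t) = 0.8 * exp(-(t - t')), with an assumed background rate mu = 1.0."""
    mu = 1.0
    return mu + sum(0.8 * np.exp(-(t - ti)) for ti in events if ti < t)

def simulate_by_thinning(T, lam_bar):
    """Thinning as in Algorithm 2 (temporal case only).
    lam_bar is assumed to dominate the conditional intensity on [0, T]."""
    events, t = [], 0.0
    while True:
        t -= np.log(rng.uniform()) / lam_bar            # candidate arrival time
        if t >= T:
            break
        if rng.uniform() * lam_bar <= hawkes_intensity(t, events):
            events.append(t)                             # accept candidate
    return events

seq = simulate_by_thinning(T=50.0, lam_bar=10.0)
print(len(seq), seq[:5])

The full Algorithm 2 additionally samples a candidate location s uniformly over S and accepts with the spatio-temporal intensity λ(t, s | H_T).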
1. What is the main contribution of the paper regarding Spatio-Temporal Point Processes?
2. What are the strengths and weaknesses of the proposed DNSK model?
3. Do you have any questions or suggestions regarding the empirical results and comparisons with other methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors of this submission proposed a deep non-stationary kernel (DNSK) modeling Spatio-Temporal Point Processes (STPP) for potentially non-stationary events in continuous time and space. The authors focused on the Hawkes process, assuming that the influences from past events are linearly additive, and in turn modeled the conditional intensity function as λ(t, s) = μ + ∑_{History} k(t, t′, s, s′), where k(t, t′, s, s′) is named the influence kernel function and captures the spatio-temporal dependency. Specifically, DNSK assumes a kernel factorized along the time and space directions, as well as over potential event "marks" when they are available, to handle potential computational challenges. Furthermore, by considering a parametrization with "displacement", the authors claimed that DNSK achieves low-rank kernels for more efficiency. The log-barrier method is introduced to preserve non-negativity of the conditional intensity function while maintaining model interpretability and computational efficiency. Experiments compared existing methods (RMTPP, Neural Hawkes (NH), Transformer Hawkes process (THP), Parametric Hawkes process (PHP), Neural spectral marked point processes (NSMPP)) and a variant of the proposed DNSK without the log-barrier but with a non-negative softplus activation function (DNSK+Softplus) against the proposed method. The results showed that the proposed method can indeed capture the non-stationarity and achieve good model recovery and event prediction performance.
Strengths And Weaknesses
Efficiently modeling non-stationary spatio-temporal processes is challenging. The authors followed recent efforts, in particular Zhu et al. 2022, Zuo et al. 2020, etc., to develop "displacement"-parametrized, neural-network-based kernels in a Hawkes process model to address the potential computational challenges. The empirical results demonstrated the efficacy of their proposed DNSK.
A clearer discussion, beyond the illustration in Figure 1, may be needed for motivation and insights. For example, it may be necessary to discuss the pros and cons of the factorized kernel assumption and the displacement-based parametrization. When introducing Figure 1, the authors may want to briefly mention the computational advantages and the difference in actual event prediction performance before and after using "displacement" on kernels of the same rank. Although the figure did show that the displacement-based kernel matrix with rank 7 also has three peaks like the original kernel with rank 298, it may be necessary to provide a numerical measure of kernel recovery. The authors may also want to discuss more about the reason for building up the kernels from linear combinations of MLP-based basis functions. As MLPs are assumed to have good approximation capability, is it still necessary to take a linear combination of these MLPs to make the kernel more complicated?
The authors may want to make their empirical performance comparisons more consistent by reporting results from all the selected baselines. For example, it may be interesting to explore the effectiveness of the log-likelihood metric, given the potential mismatch between log-likelihood and MSE in the THP performance on 1D Data set 2 and 1D Data set 3. For the real-world data, instead of showing the performance of only some baselines, the authors may want to present results from all the applicable baselines.
Clarity, Quality, Novelty And Reproducibility
DNSK appears to be an extension of previous neural process models with "displacement"-based factorized neural kernels. Such an implementation does provide some performance improvement based on the reported experiments. The overall presentation is reasonably clear. The authors may want to consider further improving the presentation, for example by better justifying the adopted kernels. There are also several language problems throughout the submission: for example, B(s, Δs) seems to be a ball centered at s with radius Δs but is not defined; the symbol f is used as a transformation on page 3 but as the pdf of an event on page 8; and there are many others throughout the submission.
ICLR
Title New Perspective on the Global Convergence of Finite-Sum Optimization Abstract Deep neural networks (DNNs) have shown great success in many machine learning tasks. Their training is challenging since the loss surface of the network architecture is generally non-convex, or even non-smooth. How and under what assumptions is guaranteed convergence to a global minimum possible? We propose a reformulation of the minimization problem allowing for a new recursive algorithmic framework. By using bounded style assumptions, we prove convergence to an ε-(global) minimum using Õ(1/ε) gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. 1 INTRODUCTION In recent years, deep neural networks (DNNs) have shown a great success in many machine learning tasks. However, training these neural networks is challenging since the loss surface of network architecture is generally non-convex, or even non-smooth. Thus, there have been a long-standing question on how optimization algorithms may converge to a global minimum. Many previous work have investigated Gradient Descent algorithm and its stochastic version for over-parameterized setting (Arora et al., 2018; Soudry et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). Although these works have shown promising convergence results under certain assumptions, there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization. In this paper, we address this problem using a different perspective. Instead of analyzing the traditional finite-sum formulation, we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier. Representation. Let { (x(i), y(i)) }n i=1 be a given training set with x(i) ∈ Rm, y(i) ∈ Rc, we investigate the following novel representation for deep learning tasks: min w∈Rd { F (w) = 1 n n∑ i=1 φi(h(w; i)) } , (1) where h(·; i) : Rd → Rc, i ∈ [n] = {1, . . . , n}, is the classifier for each input data x(i); and φi : Rc → R, i ∈ [n], is the loss function corresponding to each output data y(i). Our composite formulation (1) is a special case of the finite-sum problem minw∈Rd { F (w) = 1n ∑n i=1 f(w; i) } where each individual function f(·; i) is a composition of the loss function φi and the classifier h(·; i). This problem covers various important applications in machine learning, including logistic regression and neural networks. The most common approach for the finite-sum problem is using first-order methods such as (stochastic) gradient algorithms and making assumptions on the component functions f(·; i). As an alternative, we further investigate the structure of the loss function φi and narrow our assumption on the classifier h(·; i). For the purpose of this work, we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex. Using this representation, we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem. Algorithmic Framework. Representation (1) admits a new perspective. Our key insight is to (A) define z(t)i = h(w (t); i), where t is an iteration count of the outer loop in our algorithmic framework. 
Next (B), we want to approximate the change z(t+1)i − z (t) i in terms of a step size times the gradient ∇φi(z(t)i ) = (∂φi(z)/∂za)a∈[c] ∣∣ z=z (t) i , and (C) we approximate the change h(w(t+1); i)− h(w(t); i) in terms of the first order derivative H (t) i = (∂ha(w; i)/∂wb)a∈[c],b∈[d] ∣∣ w=w(t) . Finally, we combine (A), (B), and (C) to equate the approximations of z(t+1)i − z (t) i and h(w(t+1); i) − h(w(t); i). This leads to a recurrence on w(t) of the form w(t+1) = w(t) − η(t)v(t), where η(t) is a step size and which involves computing v(t) by solving a convex quadratic subproblem, see the details in Section 4. We explain two methods for approximating a solution for the derived subproblem. We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer which can be solved in closed form. And we show how to use Gradient Descent (GD) on the subproblem to find an approximation v(t) of its solution. Convergence Analysis. Our analysis introduces non-standard bounded style assumptions. Intuitively, we assume that our convex and quadratic subproblem has a bounded solution. This allows us to prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)− F∗ ≤ ε, where F∗ is the global minimizer of F . Our analysis applies to a wide range of applications in machine learning: Our results hold for squared loss and softmax cross-entropy loss and applicable for a range of activation functions in DNN as we only assume that the h(·; i) are twice continuously differentiable and their Hessian matrices (second order derivatives) as well as their gradients (first order derivatives) are bounded. Contributions and Outline. Our contributions in this paper can be summarized as follows. • We propose a new representation (1) for analyzing the machine learning minimization problem. Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier. Related work in Section 2 shows how (1) is different from the classical finite-sum problem. • Based on the new representation we propose a novel algorithm framework. The algorithmic framework approximates a solution to a subproblem for which we show two distinct approaches. • For general DNNs and based on bounded style assumptions, we prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)−F∗ ≤ ε, where F∗ is the global minimizer of F . We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is for future work. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our setting and deep learning representation. Section 4 explains our key insight and derives our Framework 1. Section 5 presents our algorithms and their global convergence. All technical proofs are deferred to the Appendix. 2 RELATED WORK Formulation for Machine Learning Problems. The finite-sum problem is one of the most important and fundamental problems in machine learning. 
Analyzing this model is the most popular approach in the machine learning literature and it has been studied intensively throughout the years (Bottou et al., 2018; Reddi et al., 2016; Duchi et al., 2011b). Our new formulation (1) is a special case of the finite-sum problem, however, it is much more complicated than the previous model since it involves the data index i both inside the classifiers h(·; i) and the loss functions φi. For a comparison, previous works only consider a common loss function l(ŷ, y) for the predicted value ŷ and output data y (Zou et al., 2018; Soudry et al., 2018). Our modified version of loss function φi is a natural setting for machine learning. We note that when h(w; i) is the output produced by a model, our goal is to match this output with the corresponding target y(i). For that reason, the loss function for each output has a dependence on the output data y(i), and is denoted by φi. This fact reflects the natural setting of machine learning where the outputs are designed to fit different targets, and the optimization process depends on both outer function φi and inner functions h(·; i). This complication may potentially bring a challenge to theoretical analysis. However, with separate loss functions, we believe this model will help to exploit better the structure of machine learning problems and gain more insights on the neural network architecture. Other related composite optimization models are also investigated thoroughly in (Lewis & Wright, 2016; Zhang & Xiao, 2019; Tran-Dinh et al., 2020). Our model is different from these works as it does not have a common function wrapping outside the finite-sum term, as in (Lewis & Wright, 2016). Note that a broad class of variance reduction algorithms (e.g. SAG (Le Roux et al., 2012), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013), SARAH (Nguyen et al., 2017)) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent. In addition, the multilevel composite problem considered in (Zhang & Xiao, 2021) also covers empirical risk minimization problem. However our formulation does not match their work since our inner function h(w; i) is not an independent expectation over some data distribution, but a specific function that depends on the current data. Global Convergence for Neural Networks. A recent popular line of research is studying the dynamics of optimization methods on some specific neural network architectures. There are some early works that show the global convergence of Gradient Descent (GD) for simple linear network and two-layer network (Brutzkus et al., 2018; Soudry et al., 2018; Arora et al., 2019; Du et al., 2019b). Some further works extend these results to deep learning architectures (Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). These theoretical guarantees are generally proved for the case when the last output layer is fixed, which is not standard in practice. A recent work (Nguyen & Mondelli, 2020) prove the global convergence for GD when all layers are trained with some initial conditions. However, these results are for neural networks without bias neurons and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations. Our novel framework and algorithms do not exclude learning bias layers as in (Nguyen & Mondelli, 2020). Using a different algorithm, Brutzkus et al. 
(2018) investigate Stochastic Gradient Descent (SGD) for two-layer networks in a restricted linearly separable data setting. This line of research continues with the works from Allen-Zhu et al. (2019); Zou et al. (2018) and later with Zou & Gu (2019). They justify the global convergence of SGD for deep neural networks for some probability depending on the number of input data and the initialization process. Over-Paramaterized Settings and other Assumptions for Machine Learning. Most of the modern learning architectures are over-parameterized, which means that the number of parameters are very large and often far more than the number of input data. Some recent works prove the global convergence of Gradient Descent when the number of neurons are extensively large, e.g. (Zou & Gu, 2019) requires Ω(n8) neurons for every hidden layer, and (Nguyen & Mondelli, 2020) improves this number to Ω(n3). If the initial point satisfies some special conditions, then they can show a better dependence of Ω(n). In Allen-Zhu et al. (2019), the authors initialize the weights using a random Gaussian distribution where the variance depends on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the dimension depends inversely on the tolerance . We will discuss how these over-paramaterized settings might be a necessary condition to develop our theory. Other standard assumptions for machine learning include the bounded gradient assumption (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016; Tran et al., 2021). It is also common to assume all the iterations of an algorithm stays in a bounded domain (Duchi et al., 2011a; Levy et al., 2018; Gürbüzbalaban et al., 2019; Reddi et al., 2018; Vaswani et al., 2021). Since we are analyzing a new composite formulation, it is understandable that our assumptions may also not be standard. However, we believe that there is a strong connection between our assumptions and the traditional setting of machine learning. We will discuss this point more clearly in Section 4. 3 BACKGROUND In this section, we discuss our formulation and notations in detail. Although this paper focuses on deep neural networks, our framework and theoretical analysis are general and applicable for other learning architectures. Deep Learning Representation. Let {(x(i), y(i))}ni=1 be a training data set where x(i) ∈ Rm is a training input and y(i) ∈ Rc is a training output. We consider a fully-connected neural network with L layers, where the l-th layer, l ∈ {0, 1, . . . , L}, has nl neurons. We represent layer 0-th and L-th layer as input and output layers, respectively, that is, n0 = d and nL = c. For l ∈ {1, . . . , L}, let W (l) ∈ Rnl−1×nl and b(l) ∈ Rnl , where {(W (l), b(l))Ll=1} represent the parameters of the neural network. A classifier h(w; i) is formulated as h(w; i) = W (L)>σL−1(W (L−1)>σL−2(. . . σ1(W (1)>x(i) + b(1)) . . . ) + b(L−1)) + b(L), wherew = vec({W (1), b(1), . . . ,W (L), b(L)}) ∈ Rd is the vectorized weight and {σl}L−1l=1 are some activation functions. The most common choices for machine learning are ReLU, sigmoid, hyperbolic tangent and softplus. For j ∈ [c], hj(·; i) : Rd → R denotes the component function of the output h(·; i), for each data i ∈ [n] respectively. Moreover, we define h∗i = arg minz∈Rc φi(z), i ∈ [n]. Loss Functions. 
The well-known loss functions in neural networks for solving classification and regression problems are softmax cross-entropy loss and square loss, respectively: (Softmax) Cross-Entropy Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = −y(i)> log(softmax(h(w; i))). (2) Squared Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = 1 2 ‖h(w; i)− y(i)‖2. (3) We provide some basic definitions in optimization theory to support our theory. Definition 1 (L-smooth). Function φ : Rc → R is Lφ-smooth if there exists a constant Lφ > 0 such that, ∀x1, x2 ∈ Rc, ‖∇φ(x1)−∇φ(x2)‖ ≤ Lφ‖x1 − x2‖. (4) Definition 2 (Convex). Function φ : Rc → R is convex if ∀x1, x2 ∈ Rc, φ(x1)− φ(x2) ≥ 〈∇φ(x2), x1 − x2〉. (5) The following corollary shows the properties of softmax cross-entropy loss (2) and squared loss (3). Corollary 1. For softmax cross-entropy loss (2) and squared loss (3), there exist functions h(·; i) : Rd → Rc and φi : Rc → R such that, for i ∈ [n], φi(z) is convex and Lφ-smooth with Lφ = 1, and f(w; i) = φi(h(w; i)) = φi(z) ∣∣ z=h(w;i) . (6) 4 NEW ALGORITHM FRAMEWORK 4.1 KEY INSIGHT We assume f(w; i) = φi(h(w; i)) with φi convex and Lφ-smooth. Our goal is to utilize the convexity of the outer function φi. In order to simplify notation, we write ∇zφi(h(w(t); i)) instead of ∇zφi(z) ∣∣ z=h(w(t);i) and denote z(t)i = h(w (t); i). Starting from the current weight w(t), we would like to find the next point w(t+1) that satisfies the following approximation for all i ∈ [n]: h(w(t+1); i) = z (t+1) i ≈ z (t) i − α (t) i ∇zφi(z (t) i ) = h(w (t); i)− α(t)i ∇zφi(h(w (t); i)). (7) We can see that this approximation is a “noisy” version of a gradient descent update for every function φi, simultaneously for all i ∈ [n]. In order to do this, we use the following update w(t+1) = w(t) − η(t)v(t), (8) where η(t) > 0 is a learning rate and v(t) is a search direction that helps us approximate equation (7). If the update term η(t)v(t) is small enough, and if h(·; i) has some nice smooth properties, then from basic calculus we have the following approximation: h(w(t+1); i) = h(w(t) − η(t)v(t); i) ≈ h(w(t); i)−H(t)i ( η(t)v(t) ) , (9) where H(t)i is a matrix in Rc×d with first-order derivatives. Motivated by approximations (7) and (9), we consider the following optimization problem: v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖H(t)i ( η(t)v ) − α(t)i ∇zφi(h(w (t); i))‖2. (10) Hence, by solving for the solution v(t)∗ of problem (10) we are able to find a search direction for the key approximation (7). This yields our new algorithmic Framework 1, see below. Framework 1 New Algorithm Framework Initialization: Choose an initial point w(0) ∈ Rd; for t = 0, 1, · · · , T − 1 do Solve for an approximation v(t) of the solution v(t)∗ of the problem in (10) v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 Update w(t+1) = w(t) − η(t)v(t) end for 4.2 TECHNICAL ASSUMPTIONS Assumption 1. The loss function φi is convex and Lφ-smooth for i ∈ [n]. Moreover, we assume that it is lower bounded, i.e. infz∈Rc φi(z) > −∞ for i ∈ [n]. We have shown the convexity and smoothness of squared loss and softmax cross-entropy loss in Section 3. The bounded property of φi is required in any algorithm for the well-definedness of (1). Now, in order to use the Taylor series approximation, we need the following assumption on the neural network architecture h: Assumption 2. We assume that h(·; i) is twice continuously differentiable for all i ∈ [n] (i.e. 
the second-order partial derivatives of all scalars hj(·; i) are continuous for all j ∈ [c] and i ∈ [n]), and that their Hessian matrices are bounded, that is, there exists a G > 0 such that for all w ∈ Rd, i ∈ [n] and j ∈ [c], ‖Mi,j(w)‖ = ‖Jw (∇whj(w; i))‖ ≤ G, (11) where Jw denotes the Jacobian1. Remark 1 (Relation to second-order methods). Although our analysis requires an assumption on the Hessian matrices of h(w; i), our algorithms do not use any second order information or try to approximate this information. Our theoretical analysis focused on the approximation of the classifier and the gradient information, therefore is not related to the second order type algorithms. It is currently unclear how to apply second order methods into our problem, however, this is an interesting research question to expand the scope of this work. 1For a continuously differentiable function g(w) : Rd → Rc we define the Jacobian Jw(g(w)) as the matrix (∂ga(w)/∂wb)a∈[c],b∈[d]. Assumption 2 allows us to apply a Taylor approximation of each function hj(·; i) with which we prove the following Lemma that bounds the error in equation (9): Lemma 1. Suppose that Assumption 2 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (12) where H (t) i = Jw(h(w; i))|w=w(t) ∈ R c×d (13) is defined as the Jacobian matrix of h(w; i) at w(t) and entries (t)i,j , j ∈ [c], of vector (t) i satisfy | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G. (14) In order to approximate (7) combined with (9), that is, to make sure the right hand sides of (7) and (9) are close to one another, we consider the optimization problem (10): v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2. The optimal value of problem (10) is equal to 0 if there exists a vector v(t)∗ satisfying η(t)H (t) i v (t) ∗ = α (t) i ∇zφi(h(w(t); i)) for every i ∈ [n]. Since the solution v (t) ∗ is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n · c constraints and d variables. In the overparameterized setting where dimension d is sufficiently large (d n · c) and there are no identical data, there exists almost surely a vector v(t)∗ that interpolates all the training set, see the Appendix for details. Let us note that an approximation of v(t)∗ serves as the search direction for Framework 1. For this reason, the solution v(t)∗ of problem (10) plays a similar role as a gradient in the search direction of (stochastic) gradient descent method. It is standard to assume a bounded gradient in the machine learning literature (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016). Motivated by these facts, we assume the following Assumption 3, which implies the existence of a near-optimal bounded solution of (10): Assumption 3. We consider an over-parameterized setting where dimension d is sufficiently large enough to interpolate all the data and the tolerance ε. We assume that there exists a bound V > 0 such that for ε > 0 and 0 ≤ t < T as in Framework 1, there exists a vector v̂(t)∗ε with ‖v̂(t)∗ε ‖2 ≤ V so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(t)i and ∇zφi(h(w(t); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. This setting is similar to previous works, e.g. Allen-Zhu et al. (2019). 
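As a small numerical illustration of this argument, and of the kind of bounded solution Assumption 3 posits, the following sketch stacks randomly generated stand-ins for the Jacobians H_i and the scaled gradients into one least-squares problem. The sizes n, c, d and all values are assumptions chosen only so that d ≫ n·c.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): n data points, c outputs, d parameters with d >> n*c.
n, c, d = 8, 3, 500
eta = 0.1
H = rng.normal(size=(n, c, d))       # stand-ins for the Jacobians H_i^{(t)}
grads = rng.normal(size=(n, c))      # stand-ins for alpha_i * grad_z phi_i(h(w; i))

# Stack the n systems  eta * H_i v = alpha_i * grad phi_i  into one least-squares problem.
A = (eta * H).reshape(n * c, d)
b = grads.reshape(n * c)

v, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm solution
print("rank(A) =", rank, " (n*c =", n * c, ")")
print("objective 0.5/n * sum ||eta H_i v - g_i||^2 =",
      0.5 / n * np.sum((A @ v - b) ** 2))
print("||v||^2 =", float(v @ v))

With these sizes the stacked matrix has full row rank, the printed objective is at machine precision, and the norm of the minimum-norm solution stays moderate, which is the behaviour Assumption 3 formalizes.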
5 NEW ALGORITHMS AND CONVERGENCE RESULTS 5.1 APPROXIMATING THE SOLUTION USING REGULARIZER Since problem (10) is convex and quadratic, we consider the following regularized problem: min v∈Rd { Ψ(v) = 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v‖2 } , (15) for some small ε > 0 and t ≥ 0. It is widely known that problem (15) is strongly convex, and has a unique minimizer v(t)∗ reg. The global minimizer satisfies∇vΨ(v(t)∗ reg) = 0. We have ∇vΨ(v) = 1 n n∑ i=1 [η(t)H (t) i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i))] + ε2 · v = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I ) v − ( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . Therefore, v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . (16) If ε2 is small enough, then v(t)∗ reg is a close approximation of the solution v (t) ∗ for problem (10). Our first algorithm updates Framework 1 based on this approximation. Algorithm 1 Solve for the exact solution of the regularized problem Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Update the search direction v(t) as the solution v(t)∗ reg of problem in (15): v(t) = v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) Update w(t+1) = w(t) − η(t)v(t) end for The following Lemma shows the relation between the regularized solution v(t)∗ reg and the optimal solution of the original convex problem v̂(t)∗ε . Lemma 2. For given ε > 0, suppose that Assumption 3 holds for bound V > 0. Then, for iteration 0 ≤ t < T , the optimal solution v(t)∗ reg of problem (15) satisfies ‖v(t)∗ reg‖2 ≤ 2 + V and 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 ≤ (1 + V 2 )ε2. (17) Based on Lemma 2, we guarantee the global convergence of Algorithm 1 and prove our first theorem. Since it is currently expensive to solve for the exact solution of problem (15), our algorithm serves as a theoretical method to obtain the global convergence for the finite-sum minimization. Theorem 1. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 13 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 3α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(3ε+ 2) 8α(1− 3α) [ c(4 + (V + 2)GD2)2 + 8 + 4V ] · ε. (18) We note that β is a constant for the purpose of choosing the number of iterations T . The analysis can be simplified by choosing β = 1 with T = 1ε . Notice that the common convergence criteria for finding a stationary point for non-convex problems is 1T ∑T t=1 ||∇F (wt)||2 ≤ O(ε). This criteria has been widely used in the existing literature for non-convex optimization problems. Our convergence criteria 1T ∑T t=1[F (wt) − F∗] ≤ O(ε) is slightly different, in order to find a global solution for non-convex problems. Our proof for Theorem 1 is novel and insightful. It is originally motivated by the Gradient Descent update (7) and the convexity of the loss functions φi. 
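Before moving on, here is a minimal NumPy sketch of the closed-form direction in equation (16) that Algorithm 1 computes at each outer iteration. The Jacobians, gradients, step size, learning rates, and problem sizes below are randomly generated placeholders, i.e. assumptions for illustration only.

import numpy as np

def regularized_direction(H_list, grad_list, alpha_list, eta, eps):
    """Closed-form minimizer of the regularized subproblem (15), i.e. equation (16):
    v = (1/n sum_i eta H_i^T H_i eta + eps^2 I)^{-1} (1/n sum_i alpha_i eta H_i^T grad_i)."""
    n = len(H_list)
    d = H_list[0].shape[1]
    lhs = eps ** 2 * np.eye(d)
    rhs = np.zeros(d)
    for H_i, g_i, a_i in zip(H_list, grad_list, alpha_list):
        lhs += (eta ** 2 / n) * H_i.T @ H_i
        rhs += (a_i * eta / n) * H_i.T @ g_i
    return np.linalg.solve(lhs, rhs)

# Tiny illustrative instance (sizes and values are assumptions, not from the paper).
rng = np.random.default_rng(1)
n, c, d = 4, 2, 50
H_list = [rng.normal(size=(c, d)) for _ in range(n)]
grad_list = [rng.normal(size=c) for _ in range(n)]
v_reg = regularized_direction(H_list, grad_list, alpha_list=[0.3] * n, eta=0.1, eps=1e-2)
print(v_reg.shape, np.linalg.norm(v_reg))

In practice H_i and the gradients ∇zφi would come from backpropagation through h(·; i) rather than being sampled.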
For this reason it may not be a surprise that Algorithm 1 can find an ε-global solution after O ( 1 ε ) iterations. However, computing the exact solution in every iteration might be extremely challenging, especially when the number of samples n is large. Therefore, we present a different approach to this problem in the following section. 5.2 APPROXIMATION USING GRADIENT DESCENT In this section, we use Gradient Descent (GD) algorithm to solve the strongly convex problem (15). It is well-known that if ψ(x) − µ2 ‖x‖ 2 is convex for ∀x ∈ Rc, then ψ(x) is µ-strongly convex (see e.g. Nesterov (2004)). Hence Ψ(·) is ε2-strongly convex. For each iteration t, we use GD to find a search direction v(t) which is sufficiently close to the optimal solution v(t)∗ reg in that ‖v(t) − v(t)∗ reg‖ ≤ ε. (19) Our Algorithm 2 is described as follows. Algorithm 2 Solve the regularized problem using Gradient Descent Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Use Gradient Descent algorithm to solve Problem (15) and find a solution v(t) that satisfies ‖v(t) − v(t)∗ reg‖ ≤ ε Update w(t+1) = w(t) − η(t)v(t) end for Since Algorithm 2 can only approximate a solution within some ε-preciseness, we need a supplemental assumption for the analysis of our next Theorem 2: Assumption 4. Let H(t)i be the Jacobian matrix defined in Lemma 1. We assume that there exists some constant H > 0 such that, for i ∈ [n], ε > 0, and 0 ≤ t < T as in Algorithm 2, ‖H(t)i ‖ ≤ H√ ε . (20) Assumption 4 requires a mild condition on the bounded Jacobian of h(w; i), and the upper bound may depend on ε. This flexibility allows us to accommodate a good dependence of ε for the theoretical analysis. We are now ready to present our convergence theorem for Algorithm 2. Theorem 2. Let w(t) be generated by Algorithm 2 where v(t) satisfies (19). We execute Algorithm 2 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0, Assumption 3 holds for V > 0 and Assumption 4 holds for H > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 14 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 4α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(4ε+ 3) 2α(1− 4α) [ D2H2 + c(2 + (V + ε2 + 2)GD2)2 + 2 + V ] · ε. Theorem 2 implies Corollary 2 which provides the computational complexity for Algorithm 2. Note that for (Stochastic) Gradient Descent, we derive the complexity in terms of component gradient calculations for the finite-sum problem (1). As an alternative, for Algorithm 2 we compare the number of component gradients in problem (15). Such individual gradient has the following form: ∇vψi(v) = η(t)H(t)i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i)). In machine learning applications, the gradient of f(·; i) is calculated using automatic differentiation (i.e. backpropagation). Since f(·; i) is the composition of the network structure h(·; i) and loss function φi(·), this process also computes the Jacobian matrix H(t)i and the gradient∇zφi(h(w(t); i)) at a specific weight w(t). Since matrix-vector multiplication computation is not expensive, the cost for computing the component gradient of problem (15) is similar to problem (1). Corollary 2. 
Suppose that the conditions in Theorem 2 hold with η(t) = D √ ε̂√ N for some D > 0 and 0 < ε̂ ≤ N (that is, we set ε = ε̂/N ), where N = eβLφ ∑n i=1 ‖h(w (0);i)−h∗i ‖ 2 n(1−4α)αβ + 7eβLφ[D2H2+c(2+(V+3)GD2)2+2+V ] 2α(1−4α) . Then, the total complexity to guarantee min0≤t≤T−1[F (w(t))−F∗] ≤ 1T ∑T−1 t=0 [F (w (t))−F∗] ≤ ε̂ is O ( nN 3β ε̂3 (D 2H2 + (ε̂2/N)) log(Nε̂ ) ) . Remark 2. Corollary 2 shows that O (1/ε̂) outer loop iterations are needed in order to reach an ε̂-global solution, and it proves that each iteration needs the equivalent of O ( n ε̂2 log( 1 ε̂ ) ) gradient computations for computing an approximate solution. In total, Algorithm 2 has total complexity O ( n ε̂3 log( 1 ε̂ ) ) for finding an ε̂-global solution. For a comparison, Stochastic Gradient Descent uses a total of O( 1ε2 ) gradient computations to find a stationary point satisfying E[‖∇F (ŵ)‖2] ≤ ε for non-convex problems (Ghadimi & Lan, 2013). Gradient Descent has a better complexity in terms of ε, i.e. O(nε ) such that ‖∇F (ŵ)‖ 2 ≤ ε (Nesterov, 2004). However, both methods may not be able to reach a global solution of (1). In order to guarantee global convergence for nonconvex settings, one may resort to use Polyak-Lojasiewicz (PL) inequality (Karimi et al., 2016; Gower et al., 2021). This assumption is widely known to be strong, which implies that every stationary point is also a global minimizer. 6 FURTHER DISCUSSION AND CONCLUSIONS This paper presents an alternative composite formulation for solving the finite-sum optimization problem. Our formulation allows a new way of exploiting the structure of machine learning problems and the convexity of squared loss and softmax cross entropy loss, and leads to a novel algorithmic framework that guarantees global convergence (when the outer loss functions are convex and Lipschitz-smooth). Our analysis is general and can be applied to various different learning architectures, in particular, our analysis and assumptions match practical neural networks; in recent years, there has been a great interest in the structure of deep learning architectures for over-parameterized settings (Arora et al., 2018; Allen-Zhu et al., 2019; Nguyen & Mondelli, 2020). Algorithm 2 demonstrates a gradient method to solve the regularized problem, however, other methods can be applied to our framework (e.g. conjugate gradient descent). Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. Possible research directions include more practical algorithm designs based on our Framework 1, and different related methods to solve the regularized problem and approximate the solution. This potentially leads to a new class of efficient algorithms for machine learning problems. This paper presents a new perspective to the research community. ETHICS STATEMENT This paper does not contain ethics concerns. APPENDIX A TABLE OF NOTATIONS Notation Meaning F∗ Global minimization function of F in (1) F∗ = minw∈Rd F (w) h∗i h ∗ i = arg minz∈Rc φi(z), i ∈ [n] v (t) ∗ Solution of the convex problem in (10) minv∈Rd 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 v(t) An approximation of v(t)∗ which is used as the search direction in Framework 1 v̂ (t) ∗ε A vector that satisfies 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 ≤ ε2 for some ε > 0 and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. 
v (t) ∗ reg Solution of the strongly convex problem in (15) minv∈Rd { 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 + ε 2 2 ‖v‖ 2 } B USEFUL RESULTS The following lemmas provide key tools for our results. Lemma 3 (Squared loss). Let b ∈ Rc and define φ(z) = 12‖z − b‖ 2 for z ∈ Rc. Then φ is convex and Lφ-smooth with Lφ = 1. Lemma 4 (Softmax cross-entropy loss). Let index a ∈ [c] and define φ(z) = log [ c∑ k=1 exp(zk − za) ] = log [ c∑ k=1 exp(w>k z) ] , for z = (z1, . . . , zc)> ∈ Rc, wherewk = ek−ea with ei representing the i-th unit vector (containing 1 at the i-th position and 0 elsewhere). Then φ is convex and Lφ-smooth with Lφ = 1. The following lemma is a standard result in (Nesterov, 2004). Lemma 5 ((Nesterov, 2004)). If φ is Lφ-smooth and convex, then for ∀z ∈ Rc, ‖∇φ(z)‖2 ≤ 2Lφ(φ(z)− φ(z∗)), (21) where z∗ = arg minz φ(z). The following useful derivations could be used later in our theoretical analysis. Since φi is convex, by Definition 2 we have φi(h(w; i)) ≥ φi(h(w′; i)) + 〈 ∇zφi(z) ∣∣∣ z=h(w′;i) , h(w; i)− h(w′; i) 〉 . (22) If φi is convex and Lφ-smooth, then by Lemma 5∥∥∥∥∇zφi(z)∣∣∣ z=h(w;i) ∥∥∥∥2 ≤ 2Lφ [φi(h(w; i))− φi(h∗i )] , (23) where h∗i = arg minz∈Rc φi(z). We compute gradients of f(w; i) in term of φi(h(w; i)). • Gradient of softmax cross-entropy loss: ∇φi(z) ∣∣ z=h(w;i) = ( ∂φi(z) ∂z1 ∣∣∣ z=h(w;i) , . . . , ∂φi(z) ∂zc ∣∣∣ z=h(w;i) )> , where for j ∈ [c], ∂φi(z) ∂zj ∣∣∣ z=h(w;i) = exp ( [h(w;i)]j−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j 6= I(y(i)) − ∑ k 6=I(y(i)) exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j = I(y(i)) . (24) • Gradient of squared loss: ∇φi(z) ∣∣ z=h(w;i) = h(w; i)− y(i). (25) C ADDITIONAL DISCUSSION C.1 ABOUT ASSUMPTION 2 We make a formal assumption for the case h(·; i) is closely approximated by k(·; i). Assumption 5. We assume that for all i ∈ [n] there exists some approximations k(w; i) : Rd → Rc such that |kj(w; i)− hj(w; i)| ≤ ε, ∀w ∈ Rd, i ∈ [n] and j ∈ [c], (26) where k(·; i) are twice continuously differentiable (i.e. the second-order partial derivatives of all scalars kj(·; i) are continuous for all i ∈ [n]), and that their Hessian matrices are bounded: ‖Mi,j(w)‖ = ‖Jw (∇wkj(w; i))‖ ≤ G, ∀w ∈ Rd, i ∈ [n] and j ∈ [c]. (27) Assumption 5 allows us to prove the following Lemma that bound the error in equation (9): Lemma 6. Suppose that Assumption 5 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , we have: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (28) where H(t)i is defined to be the Jacobian matrix of the approximation k(w; i) at w (t): H (t) i := Jwk(w; i)|w=w(t) = ∂k1(w;i) ∂w1 . . . ∂k1(w;i)∂wd . . . . . . . . . ∂kc(w;i) ∂w1 . . . ∂kc(w;i)∂wd ∣∣∣∣∣ w=w(t) ∈ Rc×d. (29) Additionally we have, | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. (30) Note that these result recover the case when h(·; i) is itself smooth. Hence we analyze our algorithms using the result of Lemma 6, which generalizes the result from Lemma 1. C.2 ABOUT ASSUMPTION 3 In this section, we justify the existence of the search direction in Assumption 3 (almost surely). We argue that there exists a vector v̂(t)∗ε satisfying 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. It is sufficient to find a vector v satisfying that η(t)H (t) i v = α (t) i ∇zφi(h(w (t); i)) for every i ∈ [n]. Since the solution v is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n ·c constraints and d variables. 
LetA and b be the following stacked matrix and vector: A = H (t) 1 η (t) . . . H (t) n η(t) ∈ Rn·c×d, and b = α (t) 1 ∇zφ1(h(w(t); i)) . . . α (t) n ∇zφn(h(w(t); i)) ∈ Rn·c, then the problem reduce to finding the solution of the equation Av = b. In the over-parameterized setting where dimension d is sufficiently large (d n · c), then rank A = n · c almost surely and there exists almost surely a vector v that interpolates all the training set. To demonstrate this fact easier, we consider a simple neural network where the classifier h(w; i) is formulated as h(w; i) = W (2)>σ(W (1)>x(i)), where c = 1, W (1) ∈ Rm×l and W (2) ∈ Rl×1, w = vec({W (1),W (2)}) ∈ Rd is the vectorized weight where d = l(m+ 1) and σ is sigmoid activation function. H (t) i is defined to be the Jacobian matrix of h(w; i) at w (t): H (t) i := Jwh(w; i)|w=w(t) = [ ∂h(w;i) ∂w1 . . . ∂h(w;i)∂wd ] ∣∣∣∣∣ w=w(t) ∈ R1×d, then A = η(t) H (t) 1 . . . H (t) n = η(t) ∂h(w;1) ∂w1 . . . ∂h(w;1)∂wd . . . . . . . . . ∂h(w;n) ∂w1 . . . ∂h(w;n)∂wd ∈ Rn×d. We want to show that A has full rank, almost surely. We consider the over-parameterized setting where the last layer has at least n neuron (i.e. l = n and the simple version when c = 1. We argue that rank of matrix A is greater than or equal to rank of the submatrix B created by the weights of the last layer W (2) ∈ Rn: B = ∂h(w;1) ∂W (2) 1 . . . ∂h(w;1) ∂W (2) n . . . . . . . . . ∂h(w;n) ∂W (2) 1 . . . ∂h1(w;n) ∂W (2) n ∈ Rn×n. Note that h(·, i) is a linear function of the last weight layers (in this simple case W (2) ∈ Rn and σ(W (1)>x(i)) ∈ Rn), we can compute the partial derivatives as follows: ∂h(w; i) ∂W (2) = σ(W (1)>x(i)); i ∈ [n]. Hence B = σ(W (1)>x(1)) . . . σ(W (1)>x(n)) ∈ Rn×n. Assuming that there are no identical data, and σ is the sigmoid activation, the set of weights W (1) that make matrix B degenerate has measure zero. Hence B has full rank almost surely, and we have the same conclusion for A. Therefore we are able to prove the almost surely existence of a solution v of the linear equation Av = b for simple two layers network. Using the same argument, this result can be generalized for larger neural networks where the dimension d is sufficiently large (d nc). C.3 INITIALIZATION EXAMPLE Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(0)i and ∇zφi(h(w(0); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. In order to accommodate the choice of learning rate η(0) = D √ ε in our theorems, in this section we describe a network initialization that satisfies ‖H(0)i ‖ = Θ ( 1√ ε ) where the gradient norm ‖∇zφi(h(w(0); i))‖ is at most constant order with respect to ε. To simplify the problem, we only consider small-dimension data and networks without activation. About the target vector: We choose φi to be the softmax cross-entropy loss. By Lemma 7 (see below), we have that the gradient norm is upper bounded by a constant c, where c is the output dimension of the problem and is not dependent on ε. Note that when we stack all gradients for n data points, then the size of new vector is still not dependent on ε. About the network architecture: For simplicity, we consider the following classification problem where • The input data is in R2. There are only two data points {x(1), x(2)}. Input data is bounded and non-degenerate (we will clarify this property later). • The output data is (categorical) in R2: {y(1) = (1, 0), y(2) = (0, 1)}. 
We want to have an over-parameterized setting where the dimension of weight vector is at least nc = 4. We consider a simple network with two layers, no biases and no activation functions. Let the number of neurons in the hidden layer bem. The flow of this network is (in) R2 → Rm → R2 (out). First, we consider the case where m = 1. • The first layer has 2 parameters (w1, w2) and only 1 neuron that outputs z(i) = w1x (i) 1 + w2x (i) 2 (the subscript is for the coordinate of input data x (i)). • The second layer has 2 parameters (w3, w4). The final output is h(w, i) = [w3(w1x (i) 1 + w2x (i) 2 ), w4(w1x (i) 1 + w2x (i) 2 )] > ∈ R2, with w = [w1, w2, w3, w4]> ∈ R4. This network satisfies that the Hessian matrices of h(w; i) are bounded. Let Q and b be the following stacked matrix and vector: Q = [ H (0) 1 H (0) 2 ] ∈ R4×4, and b = [ ∇zφ1(h(w(0); 1)) ∇zφ2(h(w(0); 2)) ] ∈ R4, Then we have the following: Q = Q(w) = [ H (0) 1 H (0) 2 ] = ∇w[w3(w1x(1)1 + w2x (1) 2 )] ∇w[w4(w1x(1)1 + w2x (1) 2 )] ∇w[w3(w1x(2)1 + w2x (2) 2 )] ∇w[w4(w1x(2)1 + w2x (2) 2 )] = w3x (1) 1 w3x (1) 2 w1x (1) 1 + w2x (1) 2 0 w4x (1) 1 w4x (1) 2 0 w1x (1) 1 + w2x (1) 2 w3x (2) 1 w3x (2) 2 w1x (2) 1 + w2x (2) 2 0 w4x (2) 1 w4x (2) 2 0 w1x (2) 1 + w2x (2) 2 . The determinant of this matrix is a polynomial of the weight w and the input data. Under some mild non-degenerate condition of the input data, we can choose some base point w′ that made this matrix invertible (note that if this condition is not satisfied, we can rescale/add a very small noise to the data - which is the common procedure in machine learning). Hence the system Qu = b always has a solution. Now we consider the following two initializations: 1. We choose to initialize the starting point at w(0) = 1√ ε w′ and note that Q(w) is a linear function of w and Q(w′) is independent of ε. Then the norm of matrix Q(w(0)) has the same scale with 1√ ε . 2. Instead of choosing m = 1, we consider an over-parameterized network where m = 1ε (recall that m is the number of neurons in the hidden layer). The hidden layer in this case is: z = z (i) 1 = w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 . . . z (i) m = w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 . The output layer is:{ y (i) 1 = z (i) 1 w (2) 1,1 + · · ·+ z (i) m w (2) m,1 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,1 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,1 y (i) 2 = z (i) 1 w (2) 1,2 + · · ·+ z (i) m w (2) m,2 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,2 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,2 with w = [w(1)1,1, . . . , w (1) 1,m, w (1) 2,1, . . . , w (1) 2,m, w (2) 1,1, w (2) 1,2, . . . , w (2) m,1, w (2) m,2] > ∈ R4m. Hence, Q(w) = w (2) 1,1x (1) 1 . . . w (2) m,1x (1) 1 w (2) 1,1x (1) 2 . . . w (2) m,1x (1) 2 z (1) 1 0 . . . z (1) m 0 w (2) 1,2x (1) 1 . . . w (2) m,2x (1) 1 w (2) 1,2x (1) 2 . . . w (2) m,2x (1) 2 0 z (1) 1 . . . 0 z (1) m w (2) 1,1x (2) 1 . . . w (2) m,1x (2) 1 w (2) 1,1x (2) 2 . . . w (2) m,1x (2) 2 z (2) 1 0 . . . z (2) m 0 w (2) 1,2x (2) 1 . . . w (2) m,2x (2) 1 w (2) 1,2x (2) 2 . . . w (2) m,2x (2) 2 0 z (2) 1 . . . 0 z (2) m . Hence, the number of (possibly) non-zero elements in each row is 3m = 3ε . For matrix A of rank r, we have ‖A‖2 ≤ ‖A‖F ≤ √ r‖A‖2. Since the rank of Q(w) is at most 4 (nc = 4, independent of ε), we only need to find the Frobenius norm of Q(w). We have ‖Q(w)‖F = √√√√ 4∑ i=1 4m∑ j=1 |qij |2. Let qmin and qmax be the element with smallest/largest magnitude of Q(w). 
Suppose that x(i) 6= (0, 0) and choose w 6= 0 such that z 6= 0, qmin > 0 and independent of ε. Hence, √ 8√ ε |qmin| ≤ ‖Q(w)‖F ≤ √ 12√ ε |qmax|. Hence, ‖Q(w)‖ = Θ ( 1√ ε ) . Therefore this simple network initialization supports the dependence on ε for our Assumption 3. We note that a similar setting is found in (Allen-Zhu et al., 2019), where the authors initialize the weights using a random Gaussian distribution with a variance depending on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the number of neurons m depends inversely on the tolerance ε. Lemma 7. For softmax cross-entropy loss, and x = h(w; i) ∈ Rc, for ∀w ∈ Rd and i ∈ [n], we have ∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 ≤ c. (31) Proof. By (24), we have for i = 1, . . . , n, • For j 6= I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1. • For j = I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = (∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) )2 = ( ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1 Hence, for i = 1, . . . , n,∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 = c∑ j=1 ( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 ≤ c. This completes the proof. D PROOFS OF LEMMAS AND COROLLARY 1 PROOF OF LEMMA 1 Proof. Since h(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs hj(·; i) where j ∈ [c] and i ∈ [n]: hj(w (t+1); i) = hj(w (t) − η(t)v(t); i) = hj(w (t); i)− Jwhj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (32) where Mi,j(w̃(t)) is the Hessian matrices of hj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. This leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖ (11) ≤ 1 2 (η(t))2‖v(t)‖2G, j ∈ [c]. PROOF OF LEMMA 2 Proof. From Assumption 3, we know that there exists v̂(t)∗ε so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2, and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. Hence, 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v̂(t)∗ε ‖2 ≤ ε2 + ε2 2 V = (1 + V 2 )ε2. Since v(t)∗ reg is the optimal solution of the problem in (15) for 0 ≤ t < T , we have 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v(t)∗ reg‖2 ≤ (1 + V 2 )ε2. Therefore, we have (17) and ‖v(t)∗ reg‖2 ≤ 2 + V for 0 ≤ t < T . PROOF OF LEMMA 3 Proof. 1. We want to show that for any α ∈ [0, 1] φ(αz1 + (1− α)z2) ≤ αφ(z1) + (1− α)φ(z2), ∀z1, z2 ∈ Rc, (33) in order to have the convexity of φ with respect to z (see (Nesterov, 2004)). For any α ∈ [0, 1], we have for ∀z1, z2 ∈ Rc, α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − ‖α(z1 − b) + (1− α)(z2 − b)‖2 = α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − α2‖z1 − b‖2 − (1− α)2‖z2 − b‖2 − 2α(1− α)〈z1 − b, z2 − b〉 ≥ α(1− α)‖z1 − b‖2 + (1− α)α‖z2 − b‖2 − 2α(1− α)‖z1 − b‖ · ‖z2 − b‖ = α(1− α) (‖z1 − b‖ − ‖z2 − b‖)2 ≥ 0, where the first inequality follows according to Cauchy-Schwarz inequality 〈a, b〉 ≤ ‖a‖·‖b‖. Hence, 1 2 ‖αz1 + (1− α)z2 − b‖2 ≤ α 2 ‖z1 − b‖2 + (1− α) 2 ‖z2 − b‖2. 
Therefore, (33) implies the convexity of φ with respect to z. 2. We want to show that ∃Lφ > 0 such that ‖∇φ(z1)−∇φ(z2)‖ ≤ Lφ‖z1 − z2‖, ∀z1, z2 ∈ Rc. (34) Notice that∇φ(z) = z − b, then clearly ∀z1, z2 ∈ Rc, ‖∇φ(z1)−∇φ(z2)‖ = ‖z1 − z2‖. Therefore, (34) implies the Lφ-smoothness of φ with respect to z with Lφ = 1. PROOF OF LEMMA 4 Proof. 1. For ∀z1, z2 ∈ Rc and 1 ≤ k ≤ c, denote uk,1 = exp(w>k z1) and uk,2 = exp(w>k z2) and using Holder inequality c∑ k=1 ak · bk ≤ ( c∑ k=1 |ak|p ) 1 p ( c∑ k=1 |bk|q ) 1 q , where 1 p + 1 q = 1, (35) we have φ(αz1 + (1− α)z2) = log [ c∑ k=1 exp(w>k (αz1 + (1− α)z2)) ] = log [ c∑ k=1 uαk,1 · u (1−α) k,2 ] (35) ≤ log ( c∑ k=1 u α· 1α k,1 )α( c∑ k=1 u (1−α)· 1 (1−α) k,2 )1−α = α log [ c∑ k=1 exp(w>k z1) ] + (1− α) log [ c∑ k=1 exp(w>k z2) ] = αφ(z1) + (1− α)φ(z2), where the first inequality since log(x) is an increasing function for ∀x > 0 and exp(v) > 0 for ∀v ∈ R. Therefore, (33) implies the convexity of φ with respect to z. 2. Note that ‖∇2φ(z)‖ ≤ Lφ if and only if φ(z) is Lφ-smooth (see (Nesterov, 2004)). First, we compute gradient of φ(z): • For i 6= a: ∂φ(z) ∂zi = exp(zi − za)∑c k=1 exp(zk − za) . • For i = a: ∂φ(z) ∂zi = − ∑ k 6=a exp(zk − za)∑c k=1 exp(zk − za) = − ∑c k=1 exp(zk − za) + 1∑c k=1 exp(zk − za) = −1 + 1∑c k=1 exp(zk − za) = −1 + exp(zi − za)∑c k=1 exp(zk − za) . We then calculate ∂ 2φ(z) ∂zj∂zi = ∂∂zj ( ∂φ(z) ∂zi ) • For i = j: ∂2φ(z) ∂zj∂zi = exp(zi − za)[ ∑c k=1 exp(zk − za)]− exp(zi − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 = exp(zi − za)[ ∑c k=1 exp(zk − za)− exp(zi − za)] [ ∑c k=1 exp(zk − za)]2 . • For i 6= j: ∂2φ(z) ∂zj∂zi = − exp(zj − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 . Denote that yi = exp(zi − za) ≥ 0, i ∈ [c], we have: • For i = j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = ∣∣∣∣yi(∑ck=1 yk − yi)(∑ck=1 yk)2 ∣∣∣∣ . • For i 6= j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = |yiyj |(∑ck=1 yk)2 . Recall that for matrix A = (aij) ∈ Rc×c: ‖A‖2 ≤ ‖A‖2F = ∑c i=1 ∑c j=1 |aij |2. We have: c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1(∑ck=1 yk)4 y2i ( c∑ k=1 yk − yi)2 + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 y2i ( c∑ k=1 yk) 2 − 2y2i c∑ k=1 yk.yi + y 4 i + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 [ y2i ( c∑ k=1 yk) 2 − 2y3i c∑ k=1 yk + y 2 i c∑ k=1 y2k ] Therefore, ‖∇2φ(z)‖2 ≤ c∑ i=1 c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1 ( ∑c k=1 yk) 4 [ ( c∑ i=1 y2i )( c∑ k=1 yk) 2 − 2( c∑ i=1 y3i )( c∑ k=1 yk) + ( c∑ i=1 y2i )( c∑ k=1 y2k) ] ≤ ( ∑c i=1 y 2 i )( ∑c k=1 yk) 2 ( ∑c k=1 yk) 4 ≤ ( ∑c k=1 yk) 4 ( ∑c k=1 yk) 4 = 1, where the last inequality holds since ( c∑ i=1 y2i )( c∑ k=1 y2k) ≤ ( c∑ i=1 y3i )( c∑ k=1 yk)⇔ ( c∑ k=1 y2k) ≤ √√√√( c∑ i=1 y3i )( c∑ k=1 yk), which follows by the application of Holder inequality (35) with p = 2, q = 2, ak = y 3/2 k , and bk = y 1/2 k (Note that yk ≥ 0, k ∈ [c]). Hence, ‖∇2φ(z)‖ ≤ Lφ with Lφ = 1 which is equivalent to Lφ-smoothness of φ. PROOF OF LEMMA 6 Proof. Since k(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs kj(·; i) where j ∈ [c] and i ∈ [n]: kj(w (t+1); i) = kj(w (t) − η(t)v(t); i) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (36) where Mi,j(w̃(t)) is the Hessian matrices of kj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. 
Shifting this back to the original function hj(·; i) we have: hj(w (t+1); i) = kj(w (t+1); i) + (hj(w (t+1); i)− kj(w(t+1); i)) (36) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)), = hj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), which leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ + |hj(w(t+1); i)− kj(w(t+1); i)|+ |kj(w(t); i)− hj(w(t); i)| (26) ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣+ 2ε, ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖+ 2ε (11) ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. PROOF OF COROLLARY 1 Proof. The proof of this corollary follows directly by the applications of Lemmas 3 and 4. E TECHNICAL PROOFS FOR THEOREM 1 Lemma 8. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0, and v(t) = v (t) ∗ reg. Consider η(t) = D √ ε for some D > 0 and ε > 0. For i ∈ [n] and 0 ≤ t < T , we have ‖ (t)i ‖ 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. (37) Proof. From (14), for i ∈ [n], j ∈ [c], and for 0 ≤ t < T , by Lemma 1 and Lemma 6 we have | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε ≤ 1 2 (V + 2)GD2ε+ 2ε = 1 2 ε(4 + (V + 2)GD2), where the last inequality follows by the fact ‖v(t)‖2 = ‖v(t)∗ reg‖2 ≤ 2 + V of Lemma 2 and η(t) = D √ ε. Hence, ‖ (t)i ‖ 2 = c∑ j=1 | (t)i,j | 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. Lemma 9. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D
1. What is the focus of the paper regarding optimization problems?
2. What are the strengths of the proposed approach, particularly in its novelty and transformation?
3. What are the weaknesses of the paper regarding its significance and usefulness in solving real-world problems?
4. How does the reviewer suggest improving the paper, such as including empirical analysis?
Summary Of The Paper Review
Summary Of The Paper
The authors formulate a way to transform finite-sum optimization problems into a proxy strongly convex problem, and prove that it converges to a global minimum in a number of gradient steps that scales inverse quadratically with the tolerance.
Review
Strengths: the paper is fairly clear, and the proposed transformation is certainly a novel way to solve a common class of problems in the machine learning community.
Weaknesses: I'm finding it extremely difficult to evaluate the significance of this work. I respect that there's value in providing a new perspective on an old problem (with correspondingly different asymptotic bounds on its performance), but unless this proposed method is demonstrably useful at solving a concrete problem, I'm not sure if it's significant enough for ICLR. Specifically, the version of this paper that I would probably feel good about accepting would kick all of sections 3, 4, and 5 to appendices, would succinctly state Algorithms 1 and 2, and then would spend the rest of the paper evaluating these algorithms on real problems. At the same time, I don't want to discourage this kind of work, so I would honestly be satisfied seeing this proposed algorithm applied to any problem, but as it stands, I don't think I can accept.
Edit: raising score to 6 after the authors' updates; still would like to see empirical analysis.
ICLR
Title New Perspective on the Global Convergence of Finite-Sum Optimization Abstract Deep neural networks (DNNs) have shown great success in many machine learning tasks. Their training is challenging since the loss surface of the network architecture is generally non-convex, or even non-smooth. How and under what assumptions is guaranteed convergence to a global minimum possible? We propose a reformulation of the minimization problem allowing for a new recursive algorithmic framework. By using bounded style assumptions, we prove convergence to an ε-(global) minimum using Õ(1/ε) gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. 1 INTRODUCTION In recent years, deep neural networks (DNNs) have shown a great success in many machine learning tasks. However, training these neural networks is challenging since the loss surface of network architecture is generally non-convex, or even non-smooth. Thus, there have been a long-standing question on how optimization algorithms may converge to a global minimum. Many previous work have investigated Gradient Descent algorithm and its stochastic version for over-parameterized setting (Arora et al., 2018; Soudry et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). Although these works have shown promising convergence results under certain assumptions, there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization. In this paper, we address this problem using a different perspective. Instead of analyzing the traditional finite-sum formulation, we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier. Representation. Let { (x(i), y(i)) }n i=1 be a given training set with x(i) ∈ Rm, y(i) ∈ Rc, we investigate the following novel representation for deep learning tasks: min w∈Rd { F (w) = 1 n n∑ i=1 φi(h(w; i)) } , (1) where h(·; i) : Rd → Rc, i ∈ [n] = {1, . . . , n}, is the classifier for each input data x(i); and φi : Rc → R, i ∈ [n], is the loss function corresponding to each output data y(i). Our composite formulation (1) is a special case of the finite-sum problem minw∈Rd { F (w) = 1n ∑n i=1 f(w; i) } where each individual function f(·; i) is a composition of the loss function φi and the classifier h(·; i). This problem covers various important applications in machine learning, including logistic regression and neural networks. The most common approach for the finite-sum problem is using first-order methods such as (stochastic) gradient algorithms and making assumptions on the component functions f(·; i). As an alternative, we further investigate the structure of the loss function φi and narrow our assumption on the classifier h(·; i). For the purpose of this work, we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex. Using this representation, we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem. Algorithmic Framework. Representation (1) admits a new perspective. Our key insight is to (A) define z(t)i = h(w (t); i), where t is an iteration count of the outer loop in our algorithmic framework. 
Next (B), we want to approximate the change z(t+1)i − z (t) i in terms of a step size times the gradient ∇φi(z(t)i ) = (∂φi(z)/∂za)a∈[c] ∣∣ z=z (t) i , and (C) we approximate the change h(w(t+1); i)− h(w(t); i) in terms of the first order derivative H (t) i = (∂ha(w; i)/∂wb)a∈[c],b∈[d] ∣∣ w=w(t) . Finally, we combine (A), (B), and (C) to equate the approximations of z(t+1)i − z (t) i and h(w(t+1); i) − h(w(t); i). This leads to a recurrence on w(t) of the form w(t+1) = w(t) − η(t)v(t), where η(t) is a step size and which involves computing v(t) by solving a convex quadratic subproblem, see the details in Section 4. We explain two methods for approximating a solution for the derived subproblem. We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer which can be solved in closed form. And we show how to use Gradient Descent (GD) on the subproblem to find an approximation v(t) of its solution. Convergence Analysis. Our analysis introduces non-standard bounded style assumptions. Intuitively, we assume that our convex and quadratic subproblem has a bounded solution. This allows us to prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)− F∗ ≤ ε, where F∗ is the global minimizer of F . Our analysis applies to a wide range of applications in machine learning: Our results hold for squared loss and softmax cross-entropy loss and applicable for a range of activation functions in DNN as we only assume that the h(·; i) are twice continuously differentiable and their Hessian matrices (second order derivatives) as well as their gradients (first order derivatives) are bounded. Contributions and Outline. Our contributions in this paper can be summarized as follows. • We propose a new representation (1) for analyzing the machine learning minimization problem. Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier. Related work in Section 2 shows how (1) is different from the classical finite-sum problem. • Based on the new representation we propose a novel algorithm framework. The algorithmic framework approximates a solution to a subproblem for which we show two distinct approaches. • For general DNNs and based on bounded style assumptions, we prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)−F∗ ≤ ε, where F∗ is the global minimizer of F . We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is for future work. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our setting and deep learning representation. Section 4 explains our key insight and derives our Framework 1. Section 5 presents our algorithms and their global convergence. All technical proofs are deferred to the Appendix. 2 RELATED WORK Formulation for Machine Learning Problems. The finite-sum problem is one of the most important and fundamental problems in machine learning. 
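Before surveying that literature, it may help to make formulation (1) concrete. The sketch below is illustrative only: the two-layer tanh network, the data sizes, and the softmax cross-entropy loss are assumptions of this example, not choices made in the paper. It evaluates F(w) = (1/n) Σᵢ φᵢ(h(w; i)) while keeping the classifier h(·; i) and the per-example loss φᵢ separate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, c, hidden = 8, 5, 3, 16            # examples, input dim, classes, hidden width (made up)
X = rng.normal(size=(n, m))
Y = rng.integers(0, c, size=n)           # class index I(y^(i)) for each example

shapes = [(m, hidden), (hidden,), (hidden, c), (c,)]   # a two-layer tanh network
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, k = [], 0
    for s, sz in zip(shapes, sizes):
        parts.append(w[k:k + sz].reshape(s))
        k += sz
    return parts

def h(w, i):
    """Classifier h(w; i) in R^c for input x^(i)."""
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X[i] @ W1 + b1) @ W2 + b2

def phi(i, z):
    """Per-example softmax cross-entropy loss phi_i(z), cf. (2)."""
    return np.log(np.sum(np.exp(z - z[Y[i]])))

def F(w):
    """Composite objective (1): average of phi_i composed with h(.; i)."""
    return np.mean([phi(i, h(w, i)) for i in range(n)])

w0 = rng.normal(scale=0.1, size=sum(sizes))
print("F(w0) =", F(w0))
```

The point of the decomposition is that φᵢ is convex and Lφ-smooth even when h(·; i) is not, which is exactly the structure Framework 1 exploits. Returning to the classical finite-sum model: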
Analyzing this model is the most popular approach in the machine learning literature and it has been studied intensively throughout the years (Bottou et al., 2018; Reddi et al., 2016; Duchi et al., 2011b). Our new formulation (1) is a special case of the finite-sum problem, however, it is much more complicated than the previous model since it involves the data index i both inside the classifiers h(·; i) and the loss functions φi. For a comparison, previous works only consider a common loss function l(ŷ, y) for the predicted value ŷ and output data y (Zou et al., 2018; Soudry et al., 2018). Our modified version of loss function φi is a natural setting for machine learning. We note that when h(w; i) is the output produced by a model, our goal is to match this output with the corresponding target y(i). For that reason, the loss function for each output has a dependence on the output data y(i), and is denoted by φi. This fact reflects the natural setting of machine learning where the outputs are designed to fit different targets, and the optimization process depends on both outer function φi and inner functions h(·; i). This complication may potentially bring a challenge to theoretical analysis. However, with separate loss functions, we believe this model will help to exploit better the structure of machine learning problems and gain more insights on the neural network architecture. Other related composite optimization models are also investigated thoroughly in (Lewis & Wright, 2016; Zhang & Xiao, 2019; Tran-Dinh et al., 2020). Our model is different from these works as it does not have a common function wrapping outside the finite-sum term, as in (Lewis & Wright, 2016). Note that a broad class of variance reduction algorithms (e.g. SAG (Le Roux et al., 2012), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013), SARAH (Nguyen et al., 2017)) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent. In addition, the multilevel composite problem considered in (Zhang & Xiao, 2021) also covers empirical risk minimization problem. However our formulation does not match their work since our inner function h(w; i) is not an independent expectation over some data distribution, but a specific function that depends on the current data. Global Convergence for Neural Networks. A recent popular line of research is studying the dynamics of optimization methods on some specific neural network architectures. There are some early works that show the global convergence of Gradient Descent (GD) for simple linear network and two-layer network (Brutzkus et al., 2018; Soudry et al., 2018; Arora et al., 2019; Du et al., 2019b). Some further works extend these results to deep learning architectures (Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). These theoretical guarantees are generally proved for the case when the last output layer is fixed, which is not standard in practice. A recent work (Nguyen & Mondelli, 2020) prove the global convergence for GD when all layers are trained with some initial conditions. However, these results are for neural networks without bias neurons and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations. Our novel framework and algorithms do not exclude learning bias layers as in (Nguyen & Mondelli, 2020). Using a different algorithm, Brutzkus et al. 
(2018) investigate Stochastic Gradient Descent (SGD) for two-layer networks in a restricted linearly separable data setting. This line of research continues with the works from Allen-Zhu et al. (2019); Zou et al. (2018) and later with Zou & Gu (2019). They justify the global convergence of SGD for deep neural networks for some probability depending on the number of input data and the initialization process. Over-Paramaterized Settings and other Assumptions for Machine Learning. Most of the modern learning architectures are over-parameterized, which means that the number of parameters are very large and often far more than the number of input data. Some recent works prove the global convergence of Gradient Descent when the number of neurons are extensively large, e.g. (Zou & Gu, 2019) requires Ω(n8) neurons for every hidden layer, and (Nguyen & Mondelli, 2020) improves this number to Ω(n3). If the initial point satisfies some special conditions, then they can show a better dependence of Ω(n). In Allen-Zhu et al. (2019), the authors initialize the weights using a random Gaussian distribution where the variance depends on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the dimension depends inversely on the tolerance . We will discuss how these over-paramaterized settings might be a necessary condition to develop our theory. Other standard assumptions for machine learning include the bounded gradient assumption (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016; Tran et al., 2021). It is also common to assume all the iterations of an algorithm stays in a bounded domain (Duchi et al., 2011a; Levy et al., 2018; Gürbüzbalaban et al., 2019; Reddi et al., 2018; Vaswani et al., 2021). Since we are analyzing a new composite formulation, it is understandable that our assumptions may also not be standard. However, we believe that there is a strong connection between our assumptions and the traditional setting of machine learning. We will discuss this point more clearly in Section 4. 3 BACKGROUND In this section, we discuss our formulation and notations in detail. Although this paper focuses on deep neural networks, our framework and theoretical analysis are general and applicable for other learning architectures. Deep Learning Representation. Let {(x(i), y(i))}ni=1 be a training data set where x(i) ∈ Rm is a training input and y(i) ∈ Rc is a training output. We consider a fully-connected neural network with L layers, where the l-th layer, l ∈ {0, 1, . . . , L}, has nl neurons. We represent layer 0-th and L-th layer as input and output layers, respectively, that is, n0 = d and nL = c. For l ∈ {1, . . . , L}, let W (l) ∈ Rnl−1×nl and b(l) ∈ Rnl , where {(W (l), b(l))Ll=1} represent the parameters of the neural network. A classifier h(w; i) is formulated as h(w; i) = W (L)>σL−1(W (L−1)>σL−2(. . . σ1(W (1)>x(i) + b(1)) . . . ) + b(L−1)) + b(L), wherew = vec({W (1), b(1), . . . ,W (L), b(L)}) ∈ Rd is the vectorized weight and {σl}L−1l=1 are some activation functions. The most common choices for machine learning are ReLU, sigmoid, hyperbolic tangent and softplus. For j ∈ [c], hj(·; i) : Rd → R denotes the component function of the output h(·; i), for each data i ∈ [n] respectively. Moreover, we define h∗i = arg minz∈Rc φi(z), i ∈ [n]. Loss Functions. 
The well-known loss functions in neural networks for solving classification and regression problems are softmax cross-entropy loss and square loss, respectively: (Softmax) Cross-Entropy Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = −y(i)> log(softmax(h(w; i))). (2) Squared Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = 1 2 ‖h(w; i)− y(i)‖2. (3) We provide some basic definitions in optimization theory to support our theory. Definition 1 (L-smooth). Function φ : Rc → R is Lφ-smooth if there exists a constant Lφ > 0 such that, ∀x1, x2 ∈ Rc, ‖∇φ(x1)−∇φ(x2)‖ ≤ Lφ‖x1 − x2‖. (4) Definition 2 (Convex). Function φ : Rc → R is convex if ∀x1, x2 ∈ Rc, φ(x1)− φ(x2) ≥ 〈∇φ(x2), x1 − x2〉. (5) The following corollary shows the properties of softmax cross-entropy loss (2) and squared loss (3). Corollary 1. For softmax cross-entropy loss (2) and squared loss (3), there exist functions h(·; i) : Rd → Rc and φi : Rc → R such that, for i ∈ [n], φi(z) is convex and Lφ-smooth with Lφ = 1, and f(w; i) = φi(h(w; i)) = φi(z) ∣∣ z=h(w;i) . (6) 4 NEW ALGORITHM FRAMEWORK 4.1 KEY INSIGHT We assume f(w; i) = φi(h(w; i)) with φi convex and Lφ-smooth. Our goal is to utilize the convexity of the outer function φi. In order to simplify notation, we write ∇zφi(h(w(t); i)) instead of ∇zφi(z) ∣∣ z=h(w(t);i) and denote z(t)i = h(w (t); i). Starting from the current weight w(t), we would like to find the next point w(t+1) that satisfies the following approximation for all i ∈ [n]: h(w(t+1); i) = z (t+1) i ≈ z (t) i − α (t) i ∇zφi(z (t) i ) = h(w (t); i)− α(t)i ∇zφi(h(w (t); i)). (7) We can see that this approximation is a “noisy” version of a gradient descent update for every function φi, simultaneously for all i ∈ [n]. In order to do this, we use the following update w(t+1) = w(t) − η(t)v(t), (8) where η(t) > 0 is a learning rate and v(t) is a search direction that helps us approximate equation (7). If the update term η(t)v(t) is small enough, and if h(·; i) has some nice smooth properties, then from basic calculus we have the following approximation: h(w(t+1); i) = h(w(t) − η(t)v(t); i) ≈ h(w(t); i)−H(t)i ( η(t)v(t) ) , (9) where H(t)i is a matrix in Rc×d with first-order derivatives. Motivated by approximations (7) and (9), we consider the following optimization problem: v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖H(t)i ( η(t)v ) − α(t)i ∇zφi(h(w (t); i))‖2. (10) Hence, by solving for the solution v(t)∗ of problem (10) we are able to find a search direction for the key approximation (7). This yields our new algorithmic Framework 1, see below. Framework 1 New Algorithm Framework Initialization: Choose an initial point w(0) ∈ Rd; for t = 0, 1, · · · , T − 1 do Solve for an approximation v(t) of the solution v(t)∗ of the problem in (10) v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 Update w(t+1) = w(t) − η(t)v(t) end for 4.2 TECHNICAL ASSUMPTIONS Assumption 1. The loss function φi is convex and Lφ-smooth for i ∈ [n]. Moreover, we assume that it is lower bounded, i.e. infz∈Rc φi(z) > −∞ for i ∈ [n]. We have shown the convexity and smoothness of squared loss and softmax cross-entropy loss in Section 3. The bounded property of φi is required in any algorithm for the well-definedness of (1). Now, in order to use the Taylor series approximation, we need the following assumption on the neural network architecture h: Assumption 2. We assume that h(·; i) is twice continuously differentiable for all i ∈ [n] (i.e. 
the second-order partial derivatives of all scalars hj(·; i) are continuous for all j ∈ [c] and i ∈ [n]), and that their Hessian matrices are bounded, that is, there exists a G > 0 such that for all w ∈ Rd, i ∈ [n] and j ∈ [c], ‖Mi,j(w)‖ = ‖Jw (∇whj(w; i))‖ ≤ G, (11) where Jw denotes the Jacobian1. Remark 1 (Relation to second-order methods). Although our analysis requires an assumption on the Hessian matrices of h(w; i), our algorithms do not use any second order information or try to approximate this information. Our theoretical analysis focused on the approximation of the classifier and the gradient information, therefore is not related to the second order type algorithms. It is currently unclear how to apply second order methods into our problem, however, this is an interesting research question to expand the scope of this work. 1For a continuously differentiable function g(w) : Rd → Rc we define the Jacobian Jw(g(w)) as the matrix (∂ga(w)/∂wb)a∈[c],b∈[d]. Assumption 2 allows us to apply a Taylor approximation of each function hj(·; i) with which we prove the following Lemma that bounds the error in equation (9): Lemma 1. Suppose that Assumption 2 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (12) where H (t) i = Jw(h(w; i))|w=w(t) ∈ R c×d (13) is defined as the Jacobian matrix of h(w; i) at w(t) and entries (t)i,j , j ∈ [c], of vector (t) i satisfy | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G. (14) In order to approximate (7) combined with (9), that is, to make sure the right hand sides of (7) and (9) are close to one another, we consider the optimization problem (10): v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2. The optimal value of problem (10) is equal to 0 if there exists a vector v(t)∗ satisfying η(t)H (t) i v (t) ∗ = α (t) i ∇zφi(h(w(t); i)) for every i ∈ [n]. Since the solution v (t) ∗ is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n · c constraints and d variables. In the overparameterized setting where dimension d is sufficiently large (d n · c) and there are no identical data, there exists almost surely a vector v(t)∗ that interpolates all the training set, see the Appendix for details. Let us note that an approximation of v(t)∗ serves as the search direction for Framework 1. For this reason, the solution v(t)∗ of problem (10) plays a similar role as a gradient in the search direction of (stochastic) gradient descent method. It is standard to assume a bounded gradient in the machine learning literature (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016). Motivated by these facts, we assume the following Assumption 3, which implies the existence of a near-optimal bounded solution of (10): Assumption 3. We consider an over-parameterized setting where dimension d is sufficiently large enough to interpolate all the data and the tolerance ε. We assume that there exists a bound V > 0 such that for ε > 0 and 0 ≤ t < T as in Framework 1, there exists a vector v̂(t)∗ε with ‖v̂(t)∗ε ‖2 ≤ V so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(t)i and ∇zφi(h(w(t); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. This setting is similar to previous works, e.g. Allen-Zhu et al. (2019). 
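The first-order approximation (9) behind Lemma 1 and problem (10) can also be checked numerically on a small network. In the sketch below, the tiny tanh network, the finite-difference Jacobian, and the random direction v are illustrative assumptions rather than the paper's setup; the check is that the error term of Lemma 1 shrinks like η², consistent with the bound (14).

```python
import numpy as np

rng = np.random.default_rng(2)
m, c, hidden = 4, 3, 6
x = rng.normal(size=m)                   # a single input x^(i)
shapes = [(m, hidden), (hidden,), (hidden, c), (c,)]
sizes = [int(np.prod(s)) for s in shapes]
d = sum(sizes)

def h(w):
    """Smooth classifier h(w; i) in R^c for the fixed example above."""
    parts, k = [], 0
    for s, sz in zip(shapes, sizes):
        parts.append(w[k:k + sz].reshape(s))
        k += sz
    W1, b1, W2, b2 = parts
    return np.tanh(x @ W1 + b1) @ W2 + b2

def jacobian(w, fd=1e-6):
    """Finite-difference Jacobian H_i = dh/dw of shape (c, d)."""
    J = np.zeros((c, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = fd
        J[:, j] = (h(w + e) - h(w - e)) / (2 * fd)
    return J

w = rng.normal(scale=0.3, size=d)
v = rng.normal(size=d)
H = jacobian(w)
for eta in (1e-1, 1e-2, 1e-3):
    err = h(w - eta * v) - (h(w) - eta * H @ v)   # the error term of Lemma 1
    print(f"eta={eta:.0e}  ||err||={np.linalg.norm(err):.2e}  "
          f"||err||/eta^2={np.linalg.norm(err) / eta ** 2:.2f}")
```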
5 NEW ALGORITHMS AND CONVERGENCE RESULTS 5.1 APPROXIMATING THE SOLUTION USING REGULARIZER Since problem (10) is convex and quadratic, we consider the following regularized problem: min v∈Rd { Ψ(v) = 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v‖2 } , (15) for some small ε > 0 and t ≥ 0. It is widely known that problem (15) is strongly convex, and has a unique minimizer v(t)∗ reg. The global minimizer satisfies∇vΨ(v(t)∗ reg) = 0. We have ∇vΨ(v) = 1 n n∑ i=1 [η(t)H (t) i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i))] + ε2 · v = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I ) v − ( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . Therefore, v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . (16) If ε2 is small enough, then v(t)∗ reg is a close approximation of the solution v (t) ∗ for problem (10). Our first algorithm updates Framework 1 based on this approximation. Algorithm 1 Solve for the exact solution of the regularized problem Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Update the search direction v(t) as the solution v(t)∗ reg of problem in (15): v(t) = v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) Update w(t+1) = w(t) − η(t)v(t) end for The following Lemma shows the relation between the regularized solution v(t)∗ reg and the optimal solution of the original convex problem v̂(t)∗ε . Lemma 2. For given ε > 0, suppose that Assumption 3 holds for bound V > 0. Then, for iteration 0 ≤ t < T , the optimal solution v(t)∗ reg of problem (15) satisfies ‖v(t)∗ reg‖2 ≤ 2 + V and 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 ≤ (1 + V 2 )ε2. (17) Based on Lemma 2, we guarantee the global convergence of Algorithm 1 and prove our first theorem. Since it is currently expensive to solve for the exact solution of problem (15), our algorithm serves as a theoretical method to obtain the global convergence for the finite-sum minimization. Theorem 1. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 13 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 3α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(3ε+ 2) 8α(1− 3α) [ c(4 + (V + 2)GD2)2 + 8 + 4V ] · ε. (18) We note that β is a constant for the purpose of choosing the number of iterations T . The analysis can be simplified by choosing β = 1 with T = 1ε . Notice that the common convergence criteria for finding a stationary point for non-convex problems is 1T ∑T t=1 ||∇F (wt)||2 ≤ O(ε). This criteria has been widely used in the existing literature for non-convex optimization problems. Our convergence criteria 1T ∑T t=1[F (wt) − F∗] ≤ O(ε) is slightly different, in order to find a global solution for non-convex problems. Our proof for Theorem 1 is novel and insightful. It is originally motivated by the Gradient Descent update (7) and the convexity of the loss functions φi. 
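Before moving on, here is a minimal sketch of the inner step of Algorithm 1. It assumes the Jacobians H_i^(t) and the gradients ∇zφᵢ(h(w^(t); i)) are already available as NumPy arrays with made-up shapes; it is an illustration of the closed-form direction (16), not an implementation used by the authors.

```python
import numpy as np

def regularized_direction(H_list, g_list, alpha_list, eta, eps):
    """Closed-form v_reg of (16): minimizer of
    (1/2n) sum_i ||eta H_i v - alpha_i g_i||^2 + (eps^2/2) ||v||^2."""
    n = len(H_list)
    d = H_list[0].shape[1]
    A = eps ** 2 * np.eye(d)
    b = np.zeros(d)
    for H_i, g_i, a_i in zip(H_list, g_list, alpha_list):
        A += (eta ** 2 / n) * H_i.T @ H_i
        b += (a_i * eta / n) * H_i.T @ g_i
    return np.linalg.solve(A, b)

# toy usage with random placeholders for H_i^(t) and grad_z phi_i
rng = np.random.default_rng(3)
n, c, d = 5, 3, 40
H_list = [rng.normal(size=(c, d)) for _ in range(n)]
g_list = [rng.normal(size=c) for _ in range(n)]
v_reg = regularized_direction(H_list, g_list, alpha_list=[0.1] * n, eta=0.05, eps=1e-2)
print("||v_reg|| =", np.linalg.norm(v_reg))
```

Returning to the convergence analysis of Algorithm 1: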
For this reason it may not be a surprise that Algorithm 1 can find an ε-global solution after O ( 1 ε ) iterations. However, computing the exact solution in every iteration might be extremely challenging, especially when the number of samples n is large. Therefore, we present a different approach to this problem in the following section. 5.2 APPROXIMATION USING GRADIENT DESCENT In this section, we use Gradient Descent (GD) algorithm to solve the strongly convex problem (15). It is well-known that if ψ(x) − µ2 ‖x‖ 2 is convex for ∀x ∈ Rc, then ψ(x) is µ-strongly convex (see e.g. Nesterov (2004)). Hence Ψ(·) is ε2-strongly convex. For each iteration t, we use GD to find a search direction v(t) which is sufficiently close to the optimal solution v(t)∗ reg in that ‖v(t) − v(t)∗ reg‖ ≤ ε. (19) Our Algorithm 2 is described as follows. Algorithm 2 Solve the regularized problem using Gradient Descent Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Use Gradient Descent algorithm to solve Problem (15) and find a solution v(t) that satisfies ‖v(t) − v(t)∗ reg‖ ≤ ε Update w(t+1) = w(t) − η(t)v(t) end for Since Algorithm 2 can only approximate a solution within some ε-preciseness, we need a supplemental assumption for the analysis of our next Theorem 2: Assumption 4. Let H(t)i be the Jacobian matrix defined in Lemma 1. We assume that there exists some constant H > 0 such that, for i ∈ [n], ε > 0, and 0 ≤ t < T as in Algorithm 2, ‖H(t)i ‖ ≤ H√ ε . (20) Assumption 4 requires a mild condition on the bounded Jacobian of h(w; i), and the upper bound may depend on ε. This flexibility allows us to accommodate a good dependence of ε for the theoretical analysis. We are now ready to present our convergence theorem for Algorithm 2. Theorem 2. Let w(t) be generated by Algorithm 2 where v(t) satisfies (19). We execute Algorithm 2 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0, Assumption 3 holds for V > 0 and Assumption 4 holds for H > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 14 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 4α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(4ε+ 3) 2α(1− 4α) [ D2H2 + c(2 + (V + ε2 + 2)GD2)2 + 2 + V ] · ε. Theorem 2 implies Corollary 2 which provides the computational complexity for Algorithm 2. Note that for (Stochastic) Gradient Descent, we derive the complexity in terms of component gradient calculations for the finite-sum problem (1). As an alternative, for Algorithm 2 we compare the number of component gradients in problem (15). Such individual gradient has the following form: ∇vψi(v) = η(t)H(t)i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i)). In machine learning applications, the gradient of f(·; i) is calculated using automatic differentiation (i.e. backpropagation). Since f(·; i) is the composition of the network structure h(·; i) and loss function φi(·), this process also computes the Jacobian matrix H(t)i and the gradient∇zφi(h(w(t); i)) at a specific weight w(t). Since matrix-vector multiplication computation is not expensive, the cost for computing the component gradient of problem (15) is similar to problem (1). Corollary 2. 
Suppose that the conditions in Theorem 2 hold with η(t) = D √ ε̂√ N for some D > 0 and 0 < ε̂ ≤ N (that is, we set ε = ε̂/N ), where N = eβLφ ∑n i=1 ‖h(w (0);i)−h∗i ‖ 2 n(1−4α)αβ + 7eβLφ[D2H2+c(2+(V+3)GD2)2+2+V ] 2α(1−4α) . Then, the total complexity to guarantee min0≤t≤T−1[F (w(t))−F∗] ≤ 1T ∑T−1 t=0 [F (w (t))−F∗] ≤ ε̂ is O ( nN 3β ε̂3 (D 2H2 + (ε̂2/N)) log(Nε̂ ) ) . Remark 2. Corollary 2 shows that O (1/ε̂) outer loop iterations are needed in order to reach an ε̂-global solution, and it proves that each iteration needs the equivalent of O ( n ε̂2 log( 1 ε̂ ) ) gradient computations for computing an approximate solution. In total, Algorithm 2 has total complexity O ( n ε̂3 log( 1 ε̂ ) ) for finding an ε̂-global solution. For a comparison, Stochastic Gradient Descent uses a total of O( 1ε2 ) gradient computations to find a stationary point satisfying E[‖∇F (ŵ)‖2] ≤ ε for non-convex problems (Ghadimi & Lan, 2013). Gradient Descent has a better complexity in terms of ε, i.e. O(nε ) such that ‖∇F (ŵ)‖ 2 ≤ ε (Nesterov, 2004). However, both methods may not be able to reach a global solution of (1). In order to guarantee global convergence for nonconvex settings, one may resort to use Polyak-Lojasiewicz (PL) inequality (Karimi et al., 2016; Gower et al., 2021). This assumption is widely known to be strong, which implies that every stationary point is also a global minimizer. 6 FURTHER DISCUSSION AND CONCLUSIONS This paper presents an alternative composite formulation for solving the finite-sum optimization problem. Our formulation allows a new way of exploiting the structure of machine learning problems and the convexity of squared loss and softmax cross entropy loss, and leads to a novel algorithmic framework that guarantees global convergence (when the outer loss functions are convex and Lipschitz-smooth). Our analysis is general and can be applied to various different learning architectures, in particular, our analysis and assumptions match practical neural networks; in recent years, there has been a great interest in the structure of deep learning architectures for over-parameterized settings (Arora et al., 2018; Allen-Zhu et al., 2019; Nguyen & Mondelli, 2020). Algorithm 2 demonstrates a gradient method to solve the regularized problem, however, other methods can be applied to our framework (e.g. conjugate gradient descent). Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. Possible research directions include more practical algorithm designs based on our Framework 1, and different related methods to solve the regularized problem and approximate the solution. This potentially leads to a new class of efficient algorithms for machine learning problems. This paper presents a new perspective to the research community. ETHICS STATEMENT This paper does not contain ethics concerns. APPENDIX A TABLE OF NOTATIONS Notation Meaning F∗ Global minimization function of F in (1) F∗ = minw∈Rd F (w) h∗i h ∗ i = arg minz∈Rc φi(z), i ∈ [n] v (t) ∗ Solution of the convex problem in (10) minv∈Rd 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 v(t) An approximation of v(t)∗ which is used as the search direction in Framework 1 v̂ (t) ∗ε A vector that satisfies 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 ≤ ε2 for some ε > 0 and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. 
v (t) ∗ reg Solution of the strongly convex problem in (15) minv∈Rd { 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 + ε 2 2 ‖v‖ 2 } B USEFUL RESULTS The following lemmas provide key tools for our results. Lemma 3 (Squared loss). Let b ∈ Rc and define φ(z) = 12‖z − b‖ 2 for z ∈ Rc. Then φ is convex and Lφ-smooth with Lφ = 1. Lemma 4 (Softmax cross-entropy loss). Let index a ∈ [c] and define φ(z) = log [ c∑ k=1 exp(zk − za) ] = log [ c∑ k=1 exp(w>k z) ] , for z = (z1, . . . , zc)> ∈ Rc, wherewk = ek−ea with ei representing the i-th unit vector (containing 1 at the i-th position and 0 elsewhere). Then φ is convex and Lφ-smooth with Lφ = 1. The following lemma is a standard result in (Nesterov, 2004). Lemma 5 ((Nesterov, 2004)). If φ is Lφ-smooth and convex, then for ∀z ∈ Rc, ‖∇φ(z)‖2 ≤ 2Lφ(φ(z)− φ(z∗)), (21) where z∗ = arg minz φ(z). The following useful derivations could be used later in our theoretical analysis. Since φi is convex, by Definition 2 we have φi(h(w; i)) ≥ φi(h(w′; i)) + 〈 ∇zφi(z) ∣∣∣ z=h(w′;i) , h(w; i)− h(w′; i) 〉 . (22) If φi is convex and Lφ-smooth, then by Lemma 5∥∥∥∥∇zφi(z)∣∣∣ z=h(w;i) ∥∥∥∥2 ≤ 2Lφ [φi(h(w; i))− φi(h∗i )] , (23) where h∗i = arg minz∈Rc φi(z). We compute gradients of f(w; i) in term of φi(h(w; i)). • Gradient of softmax cross-entropy loss: ∇φi(z) ∣∣ z=h(w;i) = ( ∂φi(z) ∂z1 ∣∣∣ z=h(w;i) , . . . , ∂φi(z) ∂zc ∣∣∣ z=h(w;i) )> , where for j ∈ [c], ∂φi(z) ∂zj ∣∣∣ z=h(w;i) = exp ( [h(w;i)]j−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j 6= I(y(i)) − ∑ k 6=I(y(i)) exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j = I(y(i)) . (24) • Gradient of squared loss: ∇φi(z) ∣∣ z=h(w;i) = h(w; i)− y(i). (25) C ADDITIONAL DISCUSSION C.1 ABOUT ASSUMPTION 2 We make a formal assumption for the case h(·; i) is closely approximated by k(·; i). Assumption 5. We assume that for all i ∈ [n] there exists some approximations k(w; i) : Rd → Rc such that |kj(w; i)− hj(w; i)| ≤ ε, ∀w ∈ Rd, i ∈ [n] and j ∈ [c], (26) where k(·; i) are twice continuously differentiable (i.e. the second-order partial derivatives of all scalars kj(·; i) are continuous for all i ∈ [n]), and that their Hessian matrices are bounded: ‖Mi,j(w)‖ = ‖Jw (∇wkj(w; i))‖ ≤ G, ∀w ∈ Rd, i ∈ [n] and j ∈ [c]. (27) Assumption 5 allows us to prove the following Lemma that bound the error in equation (9): Lemma 6. Suppose that Assumption 5 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , we have: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (28) where H(t)i is defined to be the Jacobian matrix of the approximation k(w; i) at w (t): H (t) i := Jwk(w; i)|w=w(t) = ∂k1(w;i) ∂w1 . . . ∂k1(w;i)∂wd . . . . . . . . . ∂kc(w;i) ∂w1 . . . ∂kc(w;i)∂wd ∣∣∣∣∣ w=w(t) ∈ Rc×d. (29) Additionally we have, | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. (30) Note that these result recover the case when h(·; i) is itself smooth. Hence we analyze our algorithms using the result of Lemma 6, which generalizes the result from Lemma 1. C.2 ABOUT ASSUMPTION 3 In this section, we justify the existence of the search direction in Assumption 3 (almost surely). We argue that there exists a vector v̂(t)∗ε satisfying 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. It is sufficient to find a vector v satisfying that η(t)H (t) i v = α (t) i ∇zφi(h(w (t); i)) for every i ∈ [n]. Since the solution v is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n ·c constraints and d variables. 
LetA and b be the following stacked matrix and vector: A = H (t) 1 η (t) . . . H (t) n η(t) ∈ Rn·c×d, and b = α (t) 1 ∇zφ1(h(w(t); i)) . . . α (t) n ∇zφn(h(w(t); i)) ∈ Rn·c, then the problem reduce to finding the solution of the equation Av = b. In the over-parameterized setting where dimension d is sufficiently large (d n · c), then rank A = n · c almost surely and there exists almost surely a vector v that interpolates all the training set. To demonstrate this fact easier, we consider a simple neural network where the classifier h(w; i) is formulated as h(w; i) = W (2)>σ(W (1)>x(i)), where c = 1, W (1) ∈ Rm×l and W (2) ∈ Rl×1, w = vec({W (1),W (2)}) ∈ Rd is the vectorized weight where d = l(m+ 1) and σ is sigmoid activation function. H (t) i is defined to be the Jacobian matrix of h(w; i) at w (t): H (t) i := Jwh(w; i)|w=w(t) = [ ∂h(w;i) ∂w1 . . . ∂h(w;i)∂wd ] ∣∣∣∣∣ w=w(t) ∈ R1×d, then A = η(t) H (t) 1 . . . H (t) n = η(t) ∂h(w;1) ∂w1 . . . ∂h(w;1)∂wd . . . . . . . . . ∂h(w;n) ∂w1 . . . ∂h(w;n)∂wd ∈ Rn×d. We want to show that A has full rank, almost surely. We consider the over-parameterized setting where the last layer has at least n neuron (i.e. l = n and the simple version when c = 1. We argue that rank of matrix A is greater than or equal to rank of the submatrix B created by the weights of the last layer W (2) ∈ Rn: B = ∂h(w;1) ∂W (2) 1 . . . ∂h(w;1) ∂W (2) n . . . . . . . . . ∂h(w;n) ∂W (2) 1 . . . ∂h1(w;n) ∂W (2) n ∈ Rn×n. Note that h(·, i) is a linear function of the last weight layers (in this simple case W (2) ∈ Rn and σ(W (1)>x(i)) ∈ Rn), we can compute the partial derivatives as follows: ∂h(w; i) ∂W (2) = σ(W (1)>x(i)); i ∈ [n]. Hence B = σ(W (1)>x(1)) . . . σ(W (1)>x(n)) ∈ Rn×n. Assuming that there are no identical data, and σ is the sigmoid activation, the set of weights W (1) that make matrix B degenerate has measure zero. Hence B has full rank almost surely, and we have the same conclusion for A. Therefore we are able to prove the almost surely existence of a solution v of the linear equation Av = b for simple two layers network. Using the same argument, this result can be generalized for larger neural networks where the dimension d is sufficiently large (d nc). C.3 INITIALIZATION EXAMPLE Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(0)i and ∇zφi(h(w(0); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. In order to accommodate the choice of learning rate η(0) = D √ ε in our theorems, in this section we describe a network initialization that satisfies ‖H(0)i ‖ = Θ ( 1√ ε ) where the gradient norm ‖∇zφi(h(w(0); i))‖ is at most constant order with respect to ε. To simplify the problem, we only consider small-dimension data and networks without activation. About the target vector: We choose φi to be the softmax cross-entropy loss. By Lemma 7 (see below), we have that the gradient norm is upper bounded by a constant c, where c is the output dimension of the problem and is not dependent on ε. Note that when we stack all gradients for n data points, then the size of new vector is still not dependent on ε. About the network architecture: For simplicity, we consider the following classification problem where • The input data is in R2. There are only two data points {x(1), x(2)}. Input data is bounded and non-degenerate (we will clarify this property later). • The output data is (categorical) in R2: {y(1) = (1, 0), y(2) = (0, 1)}. 
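As a quick aside before continuing with the initialization example, the almost-sure full-rank argument of Section C.2 can be probed numerically. In the sketch below, the two-layer sigmoid network, the random data, the finite-difference Jacobians, and the random right-hand side b are illustrative assumptions only; the check is that the stacked matrix A has full row rank n·c in an over-parameterized regime, so an interpolating direction v can be recovered by a least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, c, hidden = 4, 3, 2, 20            # d = m*hidden + hidden*c = 100 >> n*c = 8
X = rng.normal(size=(n, m))
shapes = [(m, hidden), (hidden, c)]      # two layers, no biases, sigmoid hidden layer
sizes = [int(np.prod(s)) for s in shapes]
d = sum(sizes)

def h(w, i):
    W1 = w[:sizes[0]].reshape(shapes[0])
    W2 = w[sizes[0]:].reshape(shapes[1])
    return (1.0 / (1.0 + np.exp(-(X[i] @ W1)))) @ W2

def jac(w, i, fd=1e-6):
    """Finite-difference Jacobian of h(.; i), shape (c, d)."""
    J = np.zeros((c, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = fd
        J[:, j] = (h(w + e, i) - h(w - e, i)) / (2 * fd)
    return J

w = rng.normal(size=d)
A = np.vstack([jac(w, i) for i in range(n)])     # stacked matrix A in R^{(n c) x d}
b = rng.normal(size=n * c)                       # stands in for the stacked alpha_i grad phi_i
print("rank(A) =", np.linalg.matrix_rank(A), " vs n*c =", n * c)
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("||A v - b|| =", np.linalg.norm(A @ v - b))   # ~0 when A has full row rank
```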
We want to have an over-parameterized setting where the dimension of weight vector is at least nc = 4. We consider a simple network with two layers, no biases and no activation functions. Let the number of neurons in the hidden layer bem. The flow of this network is (in) R2 → Rm → R2 (out). First, we consider the case where m = 1. • The first layer has 2 parameters (w1, w2) and only 1 neuron that outputs z(i) = w1x (i) 1 + w2x (i) 2 (the subscript is for the coordinate of input data x (i)). • The second layer has 2 parameters (w3, w4). The final output is h(w, i) = [w3(w1x (i) 1 + w2x (i) 2 ), w4(w1x (i) 1 + w2x (i) 2 )] > ∈ R2, with w = [w1, w2, w3, w4]> ∈ R4. This network satisfies that the Hessian matrices of h(w; i) are bounded. Let Q and b be the following stacked matrix and vector: Q = [ H (0) 1 H (0) 2 ] ∈ R4×4, and b = [ ∇zφ1(h(w(0); 1)) ∇zφ2(h(w(0); 2)) ] ∈ R4, Then we have the following: Q = Q(w) = [ H (0) 1 H (0) 2 ] = ∇w[w3(w1x(1)1 + w2x (1) 2 )] ∇w[w4(w1x(1)1 + w2x (1) 2 )] ∇w[w3(w1x(2)1 + w2x (2) 2 )] ∇w[w4(w1x(2)1 + w2x (2) 2 )] = w3x (1) 1 w3x (1) 2 w1x (1) 1 + w2x (1) 2 0 w4x (1) 1 w4x (1) 2 0 w1x (1) 1 + w2x (1) 2 w3x (2) 1 w3x (2) 2 w1x (2) 1 + w2x (2) 2 0 w4x (2) 1 w4x (2) 2 0 w1x (2) 1 + w2x (2) 2 . The determinant of this matrix is a polynomial of the weight w and the input data. Under some mild non-degenerate condition of the input data, we can choose some base point w′ that made this matrix invertible (note that if this condition is not satisfied, we can rescale/add a very small noise to the data - which is the common procedure in machine learning). Hence the system Qu = b always has a solution. Now we consider the following two initializations: 1. We choose to initialize the starting point at w(0) = 1√ ε w′ and note that Q(w) is a linear function of w and Q(w′) is independent of ε. Then the norm of matrix Q(w(0)) has the same scale with 1√ ε . 2. Instead of choosing m = 1, we consider an over-parameterized network where m = 1ε (recall that m is the number of neurons in the hidden layer). The hidden layer in this case is: z = z (i) 1 = w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 . . . z (i) m = w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 . The output layer is:{ y (i) 1 = z (i) 1 w (2) 1,1 + · · ·+ z (i) m w (2) m,1 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,1 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,1 y (i) 2 = z (i) 1 w (2) 1,2 + · · ·+ z (i) m w (2) m,2 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,2 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,2 with w = [w(1)1,1, . . . , w (1) 1,m, w (1) 2,1, . . . , w (1) 2,m, w (2) 1,1, w (2) 1,2, . . . , w (2) m,1, w (2) m,2] > ∈ R4m. Hence, Q(w) = w (2) 1,1x (1) 1 . . . w (2) m,1x (1) 1 w (2) 1,1x (1) 2 . . . w (2) m,1x (1) 2 z (1) 1 0 . . . z (1) m 0 w (2) 1,2x (1) 1 . . . w (2) m,2x (1) 1 w (2) 1,2x (1) 2 . . . w (2) m,2x (1) 2 0 z (1) 1 . . . 0 z (1) m w (2) 1,1x (2) 1 . . . w (2) m,1x (2) 1 w (2) 1,1x (2) 2 . . . w (2) m,1x (2) 2 z (2) 1 0 . . . z (2) m 0 w (2) 1,2x (2) 1 . . . w (2) m,2x (2) 1 w (2) 1,2x (2) 2 . . . w (2) m,2x (2) 2 0 z (2) 1 . . . 0 z (2) m . Hence, the number of (possibly) non-zero elements in each row is 3m = 3ε . For matrix A of rank r, we have ‖A‖2 ≤ ‖A‖F ≤ √ r‖A‖2. Since the rank of Q(w) is at most 4 (nc = 4, independent of ε), we only need to find the Frobenius norm of Q(w). We have ‖Q(w)‖F = √√√√ 4∑ i=1 4m∑ j=1 |qij |2. Let qmin and qmax be the element with smallest/largest magnitude of Q(w). 
Suppose that x(i) 6= (0, 0) and choose w 6= 0 such that z 6= 0, qmin > 0 and independent of ε. Hence, √ 8√ ε |qmin| ≤ ‖Q(w)‖F ≤ √ 12√ ε |qmax|. Hence, ‖Q(w)‖ = Θ ( 1√ ε ) . Therefore this simple network initialization supports the dependence on ε for our Assumption 3. We note that a similar setting is found in (Allen-Zhu et al., 2019), where the authors initialize the weights using a random Gaussian distribution with a variance depending on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the number of neurons m depends inversely on the tolerance ε. Lemma 7. For softmax cross-entropy loss, and x = h(w; i) ∈ Rc, for ∀w ∈ Rd and i ∈ [n], we have ∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 ≤ c. (31) Proof. By (24), we have for i = 1, . . . , n, • For j 6= I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1. • For j = I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = (∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) )2 = ( ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1 Hence, for i = 1, . . . , n,∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 = c∑ j=1 ( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 ≤ c. This completes the proof. D PROOFS OF LEMMAS AND COROLLARY 1 PROOF OF LEMMA 1 Proof. Since h(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs hj(·; i) where j ∈ [c] and i ∈ [n]: hj(w (t+1); i) = hj(w (t) − η(t)v(t); i) = hj(w (t); i)− Jwhj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (32) where Mi,j(w̃(t)) is the Hessian matrices of hj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. This leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖ (11) ≤ 1 2 (η(t))2‖v(t)‖2G, j ∈ [c]. PROOF OF LEMMA 2 Proof. From Assumption 3, we know that there exists v̂(t)∗ε so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2, and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. Hence, 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v̂(t)∗ε ‖2 ≤ ε2 + ε2 2 V = (1 + V 2 )ε2. Since v(t)∗ reg is the optimal solution of the problem in (15) for 0 ≤ t < T , we have 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v(t)∗ reg‖2 ≤ (1 + V 2 )ε2. Therefore, we have (17) and ‖v(t)∗ reg‖2 ≤ 2 + V for 0 ≤ t < T . PROOF OF LEMMA 3 Proof. 1. We want to show that for any α ∈ [0, 1] φ(αz1 + (1− α)z2) ≤ αφ(z1) + (1− α)φ(z2), ∀z1, z2 ∈ Rc, (33) in order to have the convexity of φ with respect to z (see (Nesterov, 2004)). For any α ∈ [0, 1], we have for ∀z1, z2 ∈ Rc, α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − ‖α(z1 − b) + (1− α)(z2 − b)‖2 = α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − α2‖z1 − b‖2 − (1− α)2‖z2 − b‖2 − 2α(1− α)〈z1 − b, z2 − b〉 ≥ α(1− α)‖z1 − b‖2 + (1− α)α‖z2 − b‖2 − 2α(1− α)‖z1 − b‖ · ‖z2 − b‖ = α(1− α) (‖z1 − b‖ − ‖z2 − b‖)2 ≥ 0, where the first inequality follows according to Cauchy-Schwarz inequality 〈a, b〉 ≤ ‖a‖·‖b‖. Hence, 1 2 ‖αz1 + (1− α)z2 − b‖2 ≤ α 2 ‖z1 − b‖2 + (1− α) 2 ‖z2 − b‖2. 
Therefore, (33) implies the convexity of φ with respect to z. 2. We want to show that ∃Lφ > 0 such that ‖∇φ(z1)−∇φ(z2)‖ ≤ Lφ‖z1 − z2‖, ∀z1, z2 ∈ Rc. (34) Notice that∇φ(z) = z − b, then clearly ∀z1, z2 ∈ Rc, ‖∇φ(z1)−∇φ(z2)‖ = ‖z1 − z2‖. Therefore, (34) implies the Lφ-smoothness of φ with respect to z with Lφ = 1. PROOF OF LEMMA 4 Proof. 1. For ∀z1, z2 ∈ Rc and 1 ≤ k ≤ c, denote uk,1 = exp(w>k z1) and uk,2 = exp(w>k z2) and using Holder inequality c∑ k=1 ak · bk ≤ ( c∑ k=1 |ak|p ) 1 p ( c∑ k=1 |bk|q ) 1 q , where 1 p + 1 q = 1, (35) we have φ(αz1 + (1− α)z2) = log [ c∑ k=1 exp(w>k (αz1 + (1− α)z2)) ] = log [ c∑ k=1 uαk,1 · u (1−α) k,2 ] (35) ≤ log ( c∑ k=1 u α· 1α k,1 )α( c∑ k=1 u (1−α)· 1 (1−α) k,2 )1−α = α log [ c∑ k=1 exp(w>k z1) ] + (1− α) log [ c∑ k=1 exp(w>k z2) ] = αφ(z1) + (1− α)φ(z2), where the first inequality since log(x) is an increasing function for ∀x > 0 and exp(v) > 0 for ∀v ∈ R. Therefore, (33) implies the convexity of φ with respect to z. 2. Note that ‖∇2φ(z)‖ ≤ Lφ if and only if φ(z) is Lφ-smooth (see (Nesterov, 2004)). First, we compute gradient of φ(z): • For i 6= a: ∂φ(z) ∂zi = exp(zi − za)∑c k=1 exp(zk − za) . • For i = a: ∂φ(z) ∂zi = − ∑ k 6=a exp(zk − za)∑c k=1 exp(zk − za) = − ∑c k=1 exp(zk − za) + 1∑c k=1 exp(zk − za) = −1 + 1∑c k=1 exp(zk − za) = −1 + exp(zi − za)∑c k=1 exp(zk − za) . We then calculate ∂ 2φ(z) ∂zj∂zi = ∂∂zj ( ∂φ(z) ∂zi ) • For i = j: ∂2φ(z) ∂zj∂zi = exp(zi − za)[ ∑c k=1 exp(zk − za)]− exp(zi − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 = exp(zi − za)[ ∑c k=1 exp(zk − za)− exp(zi − za)] [ ∑c k=1 exp(zk − za)]2 . • For i 6= j: ∂2φ(z) ∂zj∂zi = − exp(zj − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 . Denote that yi = exp(zi − za) ≥ 0, i ∈ [c], we have: • For i = j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = ∣∣∣∣yi(∑ck=1 yk − yi)(∑ck=1 yk)2 ∣∣∣∣ . • For i 6= j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = |yiyj |(∑ck=1 yk)2 . Recall that for matrix A = (aij) ∈ Rc×c: ‖A‖2 ≤ ‖A‖2F = ∑c i=1 ∑c j=1 |aij |2. We have: c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1(∑ck=1 yk)4 y2i ( c∑ k=1 yk − yi)2 + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 y2i ( c∑ k=1 yk) 2 − 2y2i c∑ k=1 yk.yi + y 4 i + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 [ y2i ( c∑ k=1 yk) 2 − 2y3i c∑ k=1 yk + y 2 i c∑ k=1 y2k ] Therefore, ‖∇2φ(z)‖2 ≤ c∑ i=1 c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1 ( ∑c k=1 yk) 4 [ ( c∑ i=1 y2i )( c∑ k=1 yk) 2 − 2( c∑ i=1 y3i )( c∑ k=1 yk) + ( c∑ i=1 y2i )( c∑ k=1 y2k) ] ≤ ( ∑c i=1 y 2 i )( ∑c k=1 yk) 2 ( ∑c k=1 yk) 4 ≤ ( ∑c k=1 yk) 4 ( ∑c k=1 yk) 4 = 1, where the last inequality holds since ( c∑ i=1 y2i )( c∑ k=1 y2k) ≤ ( c∑ i=1 y3i )( c∑ k=1 yk)⇔ ( c∑ k=1 y2k) ≤ √√√√( c∑ i=1 y3i )( c∑ k=1 yk), which follows by the application of Holder inequality (35) with p = 2, q = 2, ak = y 3/2 k , and bk = y 1/2 k (Note that yk ≥ 0, k ∈ [c]). Hence, ‖∇2φ(z)‖ ≤ Lφ with Lφ = 1 which is equivalent to Lφ-smoothness of φ. PROOF OF LEMMA 6 Proof. Since k(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs kj(·; i) where j ∈ [c] and i ∈ [n]: kj(w (t+1); i) = kj(w (t) − η(t)v(t); i) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (36) where Mi,j(w̃(t)) is the Hessian matrices of kj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. 
Shifting this back to the original function hj(·; i) we have: hj(w (t+1); i) = kj(w (t+1); i) + (hj(w (t+1); i)− kj(w(t+1); i)) (36) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)), = hj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), which leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ + |hj(w(t+1); i)− kj(w(t+1); i)|+ |kj(w(t); i)− hj(w(t); i)| (26) ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣+ 2ε, ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖+ 2ε (11) ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. PROOF OF COROLLARY 1 Proof. The proof of this corollary follows directly by the applications of Lemmas 3 and 4. E TECHNICAL PROOFS FOR THEOREM 1 Lemma 8. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0, and v(t) = v (t) ∗ reg. Consider η(t) = D √ ε for some D > 0 and ε > 0. For i ∈ [n] and 0 ≤ t < T , we have ‖ (t)i ‖ 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. (37) Proof. From (14), for i ∈ [n], j ∈ [c], and for 0 ≤ t < T , by Lemma 1 and Lemma 6 we have | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε ≤ 1 2 (V + 2)GD2ε+ 2ε = 1 2 ε(4 + (V + 2)GD2), where the last inequality follows by the fact ‖v(t)‖2 = ‖v(t)∗ reg‖2 ≤ 2 + V of Lemma 2 and η(t) = D √ ε. Hence, ‖ (t)i ‖ 2 = c∑ j=1 | (t)i,j | 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. Lemma 9. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D
1. What is the focus of the paper regarding machine learning loss functions?
2. What are the strengths of the proposed approach, particularly in terms of smoothness and convergence results?
3. Do you have any concerns about the novelty of the formulation and the existing works in the optimization community?
4. How does the reviewer assess the complexity result and its limitations?
5. Are there any suggestions for improving the paper, such as providing numerical experiments?
Summary Of The Paper Review
Summary Of The Paper
The submitted paper considers a composite formulation of machine learning loss functions, where both the inner and outer functions are required to be smooth. Two algorithms are proposed for the considered formulation, with corresponding convergence results.
Review
The submitted work presents some interesting results, such as using the composite structure of machine learning loss functions and an approximation approach to determine the descent direction. However, I have some concerns:
1. The formulation is not new. This is a quite well studied formulation in the optimization community; for instance, the authors can check the work of Lewis and Wright (2016), whose results even allow non-smooth outer functions. So the proposed approach is not novel, and the authors should discuss these existing works.
2. The obtained complexity result is not strong, given the good properties of the problem, as it is in the sense of min_{0 ≤ t ≤ T−1} [F(w^(t)) − F_*] ≤ (1/T) ∑_{t=0}^{T−1} [F(w^(t)) − F_*] = O(ε).
3. It would be great if the authors could provide numerical experiments to demonstrate the performance of the proposed work.
The authors addressed my concerns fairly; as a result, I raised my score by 1 to 6.
ICLR
Title New Perspective on the Global Convergence of Finite-Sum Optimization Abstract Deep neural networks (DNNs) have shown great success in many machine learning tasks. Their training is challenging since the loss surface of the network architecture is generally non-convex, or even non-smooth. How and under what assumptions is guaranteed convergence to a global minimum possible? We propose a reformulation of the minimization problem allowing for a new recursive algorithmic framework. By using bounded style assumptions, we prove convergence to an ε-(global) minimum using Õ(1/ε) gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. 1 INTRODUCTION In recent years, deep neural networks (DNNs) have shown a great success in many machine learning tasks. However, training these neural networks is challenging since the loss surface of network architecture is generally non-convex, or even non-smooth. Thus, there have been a long-standing question on how optimization algorithms may converge to a global minimum. Many previous work have investigated Gradient Descent algorithm and its stochastic version for over-parameterized setting (Arora et al., 2018; Soudry et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). Although these works have shown promising convergence results under certain assumptions, there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization. In this paper, we address this problem using a different perspective. Instead of analyzing the traditional finite-sum formulation, we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier. Representation. Let { (x(i), y(i)) }n i=1 be a given training set with x(i) ∈ Rm, y(i) ∈ Rc, we investigate the following novel representation for deep learning tasks: min w∈Rd { F (w) = 1 n n∑ i=1 φi(h(w; i)) } , (1) where h(·; i) : Rd → Rc, i ∈ [n] = {1, . . . , n}, is the classifier for each input data x(i); and φi : Rc → R, i ∈ [n], is the loss function corresponding to each output data y(i). Our composite formulation (1) is a special case of the finite-sum problem minw∈Rd { F (w) = 1n ∑n i=1 f(w; i) } where each individual function f(·; i) is a composition of the loss function φi and the classifier h(·; i). This problem covers various important applications in machine learning, including logistic regression and neural networks. The most common approach for the finite-sum problem is using first-order methods such as (stochastic) gradient algorithms and making assumptions on the component functions f(·; i). As an alternative, we further investigate the structure of the loss function φi and narrow our assumption on the classifier h(·; i). For the purpose of this work, we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex. Using this representation, we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem. Algorithmic Framework. Representation (1) admits a new perspective. Our key insight is to (A) define z(t)i = h(w (t); i), where t is an iteration count of the outer loop in our algorithmic framework. 
Algorithmic Framework. Representation (1) admits a new perspective. Our key insight is to (A) define $z_i^{(t)} = h(w^{(t)}; i)$, where $t$ is an iteration count of the outer loop in our algorithmic framework. Next (B), we want to approximate the change $z_i^{(t+1)} - z_i^{(t)}$ in terms of a step size times the gradient $\nabla \phi_i(z_i^{(t)}) = (\partial \phi_i(z)/\partial z_a)_{a \in [c]}\big|_{z = z_i^{(t)}}$, and (C) we approximate the change $h(w^{(t+1)}; i) - h(w^{(t)}; i)$ in terms of the first-order derivative $H_i^{(t)} = (\partial h_a(w; i)/\partial w_b)_{a \in [c], b \in [d]}\big|_{w = w^{(t)}}$. Finally, we combine (A), (B), and (C) to equate the approximations of $z_i^{(t+1)} - z_i^{(t)}$ and $h(w^{(t+1)}; i) - h(w^{(t)}; i)$. This leads to a recurrence on $w^{(t)}$ of the form $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$, where $\eta^{(t)}$ is a step size and $v^{(t)}$ is computed by solving a convex quadratic subproblem; see the details in Section 4. We explain two methods for approximating a solution of the derived subproblem. We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer, which can be solved in closed form. And we show how to use Gradient Descent (GD) on the subproblem to find an approximation $v^{(t)}$ of its solution.

Convergence Analysis. Our analysis introduces non-standard bounded style assumptions. Intuitively, we assume that our convex and quadratic subproblem has a bounded solution. This allows us to prove a total complexity of Õ(1/ε³) to find an ε-(global) solution that satisfies $F(\hat{w}) - F_* \le \varepsilon$, where $F_*$ is the global minimum of $F$. Our analysis applies to a wide range of applications in machine learning: our results hold for the squared loss and the softmax cross-entropy loss, and they are applicable to a range of activation functions in DNNs, as we only assume that the $h(\cdot; i)$ are twice continuously differentiable and that their Hessian matrices (second-order derivatives) as well as their gradients (first-order derivatives) are bounded.

Contributions and Outline. Our contributions in this paper can be summarized as follows.
• We propose a new representation (1) for analyzing the machine learning minimization problem. Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier. Related work in Section 2 shows how (1) differs from the classical finite-sum problem.
• Based on the new representation we propose a novel algorithmic framework. The framework approximates a solution to a subproblem, for which we show two distinct approaches.
• For general DNNs and based on bounded style assumptions, we prove a total complexity of Õ(1/ε³) to find an ε-(global) solution that satisfies $F(\hat{w}) - F_* \le \varepsilon$, where $F_*$ is the global minimum of $F$.
We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is left for future work. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum.

The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our setting and deep learning representation. Section 4 explains our key insight and derives our Framework 1. Section 5 presents our algorithms and their global convergence. All technical proofs are deferred to the Appendix.

2 RELATED WORK

Formulation for Machine Learning Problems. The finite-sum problem is one of the most important and fundamental problems in machine learning.
Analyzing this model is the most popular approach in the machine learning literature, and it has been studied intensively throughout the years (Bottou et al., 2018; Reddi et al., 2016; Duchi et al., 2011b). Our new formulation (1) is a special case of the finite-sum problem; however, it is more involved than the classical model since it carries the data index $i$ both inside the classifiers $h(\cdot; i)$ and inside the loss functions $\phi_i$. For comparison, previous works only consider a common loss function $l(\hat{y}, y)$ for the predicted value $\hat{y}$ and output data $y$ (Zou et al., 2018; Soudry et al., 2018). Our modified version of the loss function, $\phi_i$, is a natural setting for machine learning. We note that when $h(w; i)$ is the output produced by a model, our goal is to match this output with the corresponding target $y^{(i)}$. For that reason, the loss function for each output depends on the output data $y^{(i)}$ and is denoted by $\phi_i$. This reflects the natural setting of machine learning where the outputs are designed to fit different targets, and the optimization process depends on both the outer functions $\phi_i$ and the inner functions $h(\cdot; i)$. This complication may potentially bring a challenge to theoretical analysis. However, with separate loss functions, we believe this model will help to better exploit the structure of machine learning problems and to gain more insight into the neural network architecture. Other related composite optimization models are also investigated thoroughly in (Lewis & Wright, 2016; Zhang & Xiao, 2019; Tran-Dinh et al., 2020). Our model is different from these works as it does not have a common function wrapping outside the finite-sum term, as in (Lewis & Wright, 2016). Note that a broad class of variance reduction algorithms (e.g. SAG (Le Roux et al., 2012), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013), SARAH (Nguyen et al., 2017)) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent. In addition, the multilevel composite problem considered in (Zhang & Xiao, 2021) also covers the empirical risk minimization problem. However, our formulation does not match their work since our inner function $h(w; i)$ is not an independent expectation over some data distribution, but a specific function that depends on the current data point.

Global Convergence for Neural Networks. A recent popular line of research studies the dynamics of optimization methods on specific neural network architectures. Some early works show the global convergence of Gradient Descent (GD) for simple linear networks and two-layer networks (Brutzkus et al., 2018; Soudry et al., 2018; Arora et al., 2019; Du et al., 2019b). Further works extend these results to deep learning architectures (Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). These theoretical guarantees are generally proved for the case when the last output layer is fixed, which is not standard in practice. A recent work (Nguyen & Mondelli, 2020) proves global convergence for GD when all layers are trained, under some initial conditions. However, these results are for neural networks without bias neurons, and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations. Our novel framework and algorithms do not exclude learning bias layers, as in (Nguyen & Mondelli, 2020).
Using a different algorithm, Brutzkus et al. (2018) investigate Stochastic Gradient Descent (SGD) for two-layer networks in a restricted linearly separable data setting. This line of research continues with the works of Allen-Zhu et al. (2019) and Zou et al. (2018), and later with Zou & Gu (2019). They justify the global convergence of SGD for deep neural networks with some probability depending on the number of input data and on the initialization process.

Over-Parameterized Settings and other Assumptions for Machine Learning. Most modern learning architectures are over-parameterized, which means that the number of parameters is very large and often far exceeds the number of input data. Some recent works prove the global convergence of Gradient Descent when the number of neurons is extensively large, e.g. (Zou & Gu, 2019) requires Ω(n⁸) neurons for every hidden layer, and (Nguyen & Mondelli, 2020) improves this number to Ω(n³). If the initial point satisfies some special conditions, then they can show a better dependence of Ω(n). In Allen-Zhu et al. (2019), the authors initialize the weights using a random Gaussian distribution where the variance depends on the dimension of the problem. In the non-convex setting, they prove the convergence of SGD under the assumption that the dimension depends inversely on the tolerance ε. We will discuss how these over-parameterized settings might be a necessary condition for developing our theory. Other standard assumptions for machine learning include the bounded gradient assumption (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016; Tran et al., 2021). It is also common to assume that all the iterates of an algorithm stay in a bounded domain (Duchi et al., 2011a; Levy et al., 2018; Gürbüzbalaban et al., 2019; Reddi et al., 2018; Vaswani et al., 2021). Since we are analyzing a new composite formulation, it is understandable that our assumptions may also not be standard. However, we believe that there is a strong connection between our assumptions and the traditional setting of machine learning. We will discuss this point more clearly in Section 4.

3 BACKGROUND

In this section, we discuss our formulation and notation in detail. Although this paper focuses on deep neural networks, our framework and theoretical analysis are general and applicable to other learning architectures.

Deep Learning Representation. Let $\{(x^{(i)}, y^{(i)})\}_{i=1}^{n}$ be a training data set where $x^{(i)} \in \mathbb{R}^m$ is a training input and $y^{(i)} \in \mathbb{R}^c$ is a training output. We consider a fully-connected neural network with $L$ layers, where the $l$-th layer, $l \in \{0, 1, \dots, L\}$, has $n_l$ neurons. We represent the 0-th and the $L$-th layers as the input and output layers, respectively, that is, $n_0 = m$ and $n_L = c$. For $l \in \{1, \dots, L\}$, let $W^{(l)} \in \mathbb{R}^{n_{l-1} \times n_l}$ and $b^{(l)} \in \mathbb{R}^{n_l}$, where $\{(W^{(l)}, b^{(l)})\}_{l=1}^{L}$ represent the parameters of the neural network. A classifier $h(w; i)$ is formulated as
$$h(w; i) = W^{(L)\top} \sigma_{L-1}\big( W^{(L-1)\top} \sigma_{L-2}( \dots \sigma_1( W^{(1)\top} x^{(i)} + b^{(1)} ) \dots ) + b^{(L-1)} \big) + b^{(L)},$$
where $w = \mathrm{vec}(\{W^{(1)}, b^{(1)}, \dots, W^{(L)}, b^{(L)}\}) \in \mathbb{R}^d$ is the vectorized weight and $\{\sigma_l\}_{l=1}^{L-1}$ are activation functions. The most common choices in machine learning are ReLU, sigmoid, hyperbolic tangent and softplus. For $j \in [c]$, $h_j(\cdot; i): \mathbb{R}^d \to \mathbb{R}$ denotes the $j$-th component function of the output $h(\cdot; i)$, for each data point $i \in [n]$. Moreover, we define $h^*_i = \arg\min_{z \in \mathbb{R}^c} \phi_i(z)$, $i \in [n]$.
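As a small illustration of the classifier $h(w; i)$ defined above, the following Python sketch evaluates a fully-connected network from a single vectorized weight vector $w$. The layer sizes, the tanh activation, and the helper names are assumptions made for this example only.

```python
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [4, 6, 6, 3]                   # n_0 = m, hidden widths, n_L = c
X = rng.normal(size=(10, layer_sizes[0]))    # training inputs x^(i)

def unpack(w, sizes):
    """Split the vectorized weight w into (W^(l), b^(l)) pairs."""
    params, pos = [], 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = w[pos:pos + n_in * n_out].reshape(n_in, n_out); pos += n_in * n_out
        b = w[pos:pos + n_out]; pos += n_out
        params.append((W, b))
    return params

def h(w, i, sizes=layer_sizes, sigma=np.tanh):
    """Classifier h(w; i): forward pass of the fully-connected network on x^(i)."""
    a = X[i]
    params = unpack(w, sizes)
    for W, b in params[:-1]:
        a = sigma(W.T @ a + b)               # hidden layers use the activation
    W, b = params[-1]
    return W.T @ a + b                       # the output layer is affine

d = sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
w = rng.normal(size=d)
print(h(w, 0))                               # a vector in R^c
```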
Loss Functions. The well-known loss functions in neural networks for solving classification and regression problems are the softmax cross-entropy loss and the squared loss, respectively:

(Softmax) Cross-Entropy Loss: $F(w) = \frac{1}{n} \sum_{i=1}^{n} f(w; i)$ with
$$f(w; i) = -y^{(i)\top} \log(\mathrm{softmax}(h(w; i))). \qquad (2)$$

Squared Loss: $F(w) = \frac{1}{n} \sum_{i=1}^{n} f(w; i)$ with
$$f(w; i) = \tfrac{1}{2} \| h(w; i) - y^{(i)} \|^2. \qquad (3)$$

We provide some basic definitions in optimization theory to support our theory.

Definition 1 (L-smooth). A function $\phi: \mathbb{R}^c \to \mathbb{R}$ is $L_\phi$-smooth if there exists a constant $L_\phi > 0$ such that, $\forall x_1, x_2 \in \mathbb{R}^c$,
$$\| \nabla \phi(x_1) - \nabla \phi(x_2) \| \le L_\phi \| x_1 - x_2 \|. \qquad (4)$$

Definition 2 (Convex). A function $\phi: \mathbb{R}^c \to \mathbb{R}$ is convex if $\forall x_1, x_2 \in \mathbb{R}^c$,
$$\phi(x_1) - \phi(x_2) \ge \langle \nabla \phi(x_2), x_1 - x_2 \rangle. \qquad (5)$$

The following corollary shows the properties of the softmax cross-entropy loss (2) and the squared loss (3).

Corollary 1. For the softmax cross-entropy loss (2) and the squared loss (3), there exist functions $h(\cdot; i): \mathbb{R}^d \to \mathbb{R}^c$ and $\phi_i: \mathbb{R}^c \to \mathbb{R}$ such that, for $i \in [n]$, $\phi_i(z)$ is convex and $L_\phi$-smooth with $L_\phi = 1$, and
$$f(w; i) = \phi_i(h(w; i)) = \phi_i(z)\big|_{z = h(w; i)}. \qquad (6)$$
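The decomposition promised by Corollary 1 can be checked numerically. The sketch below, a rough illustration rather than part of the formal development, writes the softmax cross-entropy of a one-hot target with true class $a$ as $\phi_i(z) = \log \sum_k \exp(z_k - z_a)$ (the form used in Lemma 4 of the appendix) and verifies that it agrees with the usual formula $-y^{(i)\top}\log(\mathrm{softmax}(z))$ at $z = h(w; i)$; the vector $z$ here is a random stand-in for the network output.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 5
z = rng.normal(size=c)              # stands in for h(w; i)
a = 3                               # index of the true class, y^(i) = e_a
y = np.eye(c)[a]

def softmax(z):
    e = np.exp(z - z.max())         # shift for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    """Standard per-sample loss f(w; i) = -y^T log(softmax(z)) at z = h(w; i)."""
    return -y @ np.log(softmax(z))

def phi(z, a):
    """Outer loss phi_i(z) = log(sum_k exp(z_k - z_a)); convex and 1-smooth (Lemma 4)."""
    return np.log(np.sum(np.exp(z - z[a])))

assert np.isclose(cross_entropy(z, y), phi(z, a))   # the two forms coincide
```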
4 NEW ALGORITHM FRAMEWORK

4.1 KEY INSIGHT

We assume $f(w; i) = \phi_i(h(w; i))$ with $\phi_i$ convex and $L_\phi$-smooth. Our goal is to utilize the convexity of the outer function $\phi_i$. To simplify notation, we write $\nabla_z \phi_i(h(w^{(t)}; i))$ instead of $\nabla_z \phi_i(z)\big|_{z = h(w^{(t)}; i)}$ and denote $z_i^{(t)} = h(w^{(t)}; i)$. Starting from the current weight $w^{(t)}$, we would like to find the next point $w^{(t+1)}$ that satisfies the following approximation for all $i \in [n]$:
$$h(w^{(t+1)}; i) = z_i^{(t+1)} \approx z_i^{(t)} - \alpha_i^{(t)} \nabla_z \phi_i(z_i^{(t)}) = h(w^{(t)}; i) - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)). \qquad (7)$$
We can see that this approximation is a "noisy" version of a gradient descent update for every function $\phi_i$, simultaneously for all $i \in [n]$. In order to achieve this, we use the update
$$w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}, \qquad (8)$$
where $\eta^{(t)} > 0$ is a learning rate and $v^{(t)}$ is a search direction that helps us approximate equation (7). If the update term $\eta^{(t)} v^{(t)}$ is small enough, and if $h(\cdot; i)$ has suitable smoothness properties, then from basic calculus we have the following approximation:
$$h(w^{(t+1)}; i) = h(w^{(t)} - \eta^{(t)} v^{(t)}; i) \approx h(w^{(t)}; i) - H_i^{(t)} \big( \eta^{(t)} v^{(t)} \big), \qquad (9)$$
where $H_i^{(t)}$ is a matrix in $\mathbb{R}^{c \times d}$ of first-order derivatives. Motivated by approximations (7) and (9), we consider the following optimization problem:
$$v_*^{(t)} = \arg\min_{v \in \mathbb{R}^d} \frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| H_i^{(t)} \big( \eta^{(t)} v \big) - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2. \qquad (10)$$
Hence, by solving for the solution $v_*^{(t)}$ of problem (10) we are able to find a search direction for the key approximation (7). This yields our new algorithmic Framework 1, see below.

Framework 1 New Algorithm Framework
Initialization: Choose an initial point $w^{(0)} \in \mathbb{R}^d$;
for $t = 0, 1, \dots, T-1$ do
  Solve for an approximation $v^{(t)}$ of the solution $v_*^{(t)}$ of the problem in (10):
  $$v_*^{(t)} = \arg\min_{v \in \mathbb{R}^d} \frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| \eta^{(t)} H_i^{(t)} v - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2$$
  Update $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$
end for

4.2 TECHNICAL ASSUMPTIONS

Assumption 1. The loss function $\phi_i$ is convex and $L_\phi$-smooth for $i \in [n]$. Moreover, we assume that it is lower bounded, i.e. $\inf_{z \in \mathbb{R}^c} \phi_i(z) > -\infty$ for $i \in [n]$.

We have shown the convexity and smoothness of the squared loss and the softmax cross-entropy loss in Section 3. The lower-bound property of $\phi_i$ is required in any algorithm for the well-definedness of (1). Now, in order to use the Taylor series approximation, we need the following assumption on the neural network architecture $h$:

Assumption 2. We assume that $h(\cdot; i)$ is twice continuously differentiable for all $i \in [n]$ (i.e. the second-order partial derivatives of all scalars $h_j(\cdot; i)$ are continuous for all $j \in [c]$ and $i \in [n]$), and that their Hessian matrices are bounded, that is, there exists a $G > 0$ such that for all $w \in \mathbb{R}^d$, $i \in [n]$ and $j \in [c]$,
$$\| M_{i,j}(w) \| = \| J_w (\nabla_w h_j(w; i)) \| \le G, \qquad (11)$$
where $J_w$ denotes the Jacobian.¹

Remark 1 (Relation to second-order methods). Although our analysis requires an assumption on the Hessian matrices of $h(w; i)$, our algorithms do not use any second-order information or try to approximate this information. Our theoretical analysis focuses on the approximation of the classifier and the gradient information, and is therefore not related to second-order-type algorithms. It is currently unclear how to apply second-order methods to our problem; however, this is an interesting research question that could expand the scope of this work.

¹For a continuously differentiable function $g(w): \mathbb{R}^d \to \mathbb{R}^c$ we define the Jacobian $J_w(g(w))$ as the matrix $(\partial g_a(w)/\partial w_b)_{a \in [c], b \in [d]}$.

Assumption 2 allows us to apply a Taylor approximation to each function $h_j(\cdot; i)$, with which we prove the following lemma bounding the error in equation (9):

Lemma 1. Suppose that Assumption 2 holds for the classifier $h$. Then for all $i \in [n]$ and $0 \le t < T$,
$$h(w^{(t+1)}; i) = h(w^{(t)} - \eta^{(t)} v^{(t)}; i) = h(w^{(t)}; i) - \eta^{(t)} H_i^{(t)} v^{(t)} + \epsilon_i^{(t)}, \qquad (12)$$
where
$$H_i^{(t)} = J_w(h(w; i))\big|_{w = w^{(t)}} \in \mathbb{R}^{c \times d} \qquad (13)$$
is defined as the Jacobian matrix of $h(w; i)$ at $w^{(t)}$, and the entries $\epsilon_{i,j}^{(t)}$, $j \in [c]$, of the vector $\epsilon_i^{(t)}$ satisfy
$$| \epsilon_{i,j}^{(t)} | \le \tfrac{1}{2} (\eta^{(t)})^2 \| v^{(t)} \|^2 G. \qquad (14)$$

In order to approximate (7) combined with (9), that is, to make sure the right-hand sides of (7) and (9) are close to one another, we consider the optimization problem (10):
$$v_*^{(t)} = \arg\min_{v \in \mathbb{R}^d} \frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| \eta^{(t)} H_i^{(t)} v - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2.$$
The optimal value of problem (10) is equal to 0 if there exists a vector $v_*^{(t)}$ satisfying $\eta^{(t)} H_i^{(t)} v_*^{(t)} = \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i))$ for every $i \in [n]$. Since the solution $v_*^{(t)}$ is in $\mathbb{R}^d$ and $\nabla_z \phi_i(h(w^{(t)}; i))$ is in $\mathbb{R}^c$, this condition is equivalent to a linear system with $n \cdot c$ constraints and $d$ variables. In the over-parameterized setting where the dimension $d$ is sufficiently large ($d \gg n \cdot c$) and there are no identical data points, there almost surely exists a vector $v_*^{(t)}$ that interpolates all the training set; see the Appendix for details.

Let us note that an approximation of $v_*^{(t)}$ serves as the search direction for Framework 1. For this reason, the solution $v_*^{(t)}$ of problem (10) plays a role similar to that of a gradient in the search direction of the (stochastic) gradient descent method. It is standard to assume a bounded gradient in the machine learning literature (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016). Motivated by these facts, we assume the following Assumption 3, which implies the existence of a near-optimal bounded solution of (10):

Assumption 3. We consider an over-parameterized setting where the dimension $d$ is sufficiently large to interpolate all the data, given the tolerance $\varepsilon$. We assume that there exists a bound $V > 0$ such that for $\varepsilon > 0$ and $0 \le t < T$ as in Framework 1, there exists a vector $\hat{v}_{*\varepsilon}^{(t)}$ with $\| \hat{v}_{*\varepsilon}^{(t)} \|^2 \le V$ so that
$$\frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| \eta^{(t)} H_i^{(t)} \hat{v}_{*\varepsilon}^{(t)} - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2 \le \varepsilon^2.$$

Our Assumption 3 requires a benign dependence on the tolerance $\varepsilon$ for the gradient matrices $H_i^{(t)}$ and $\nabla_z \phi_i(h(w^{(t)}; i))$. We note that at the starting point $t = 0$, these matrices may depend on $\varepsilon$ due to the initialization process and the dependence of $d$ on $\varepsilon$. This setting is similar to previous works, e.g. Allen-Zhu et al. (2019).
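Before turning to the concrete algorithms, here is a minimal numpy sketch of one outer iteration of Framework 1, assuming a tiny two-layer network with the squared loss. It approximates the per-sample Jacobians $H_i^{(t)}$ by finite differences, stacks the linear system behind problem (10), and solves it in the least-squares sense; the network size, step sizes, and the finite-difference Jacobian are illustrative assumptions, not the paper's prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, c, hidden = 4, 3, 2, 20
X, Y = rng.normal(size=(n, m)), rng.normal(size=(n, c))
d = m * hidden + hidden * c                  # over-parameterized: d >> n*c

def h(w, i):
    """Tiny two-layer network h(w; i) = W2^T tanh(W1^T x^(i))."""
    W1 = w[:m * hidden].reshape(m, hidden)
    W2 = w[m * hidden:].reshape(hidden, c)
    return W2.T @ np.tanh(W1.T @ X[i])

def grad_phi(z, i):
    """Gradient of the squared loss phi_i(z) = 0.5 * ||z - y^(i)||^2."""
    return z - Y[i]

def jacobian(w, i, eps=1e-6):
    """Finite-difference approximation of H_i = d h(w; i) / d w, shape (c, d)."""
    H = np.zeros((c, d))
    for b in range(d):
        e = np.zeros(d); e[b] = eps
        H[:, b] = (h(w + e, i) - h(w - e, i)) / (2 * eps)
    return H

w = 0.5 * rng.normal(size=d)
eta, alpha = 0.1, 0.05

# Stack the n*c equations  eta * H_i v = alpha * grad_phi_i  and solve (10) by least squares.
A = np.vstack([eta * jacobian(w, i) for i in range(n)])
b = np.concatenate([alpha * grad_phi(h(w, i), i) for i in range(n)])
v, *_ = np.linalg.lstsq(A, b, rcond=None)

w_next = w - eta * v                          # update step of Framework 1
loss = lambda w: np.mean([0.5 * np.sum((h(w, i) - Y[i]) ** 2) for i in range(n)])
print(f"F before: {loss(w):.4f}  after one step: {loss(w_next):.4f}")
```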
5 NEW ALGORITHMS AND CONVERGENCE RESULTS

5.1 APPROXIMATING THE SOLUTION USING A REGULARIZER

Since problem (10) is convex and quadratic, we consider the following regularized problem:
$$\min_{v \in \mathbb{R}^d} \Big\{ \Psi(v) = \frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| \eta^{(t)} H_i^{(t)} v - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2 + \frac{\varepsilon^2}{2} \| v \|^2 \Big\}, \qquad (15)$$
for some small $\varepsilon > 0$ and $t \ge 0$. It is widely known that problem (15) is strongly convex and has a unique minimizer $v_{*\mathrm{reg}}^{(t)}$. The global minimizer satisfies $\nabla_v \Psi(v_{*\mathrm{reg}}^{(t)}) = 0$. We have
$$\nabla_v \Psi(v) = \frac{1}{n} \sum_{i=1}^{n} \big[ \eta^{(t)} H_i^{(t)\top} H_i^{(t)} \eta^{(t)} v - \alpha_i^{(t)} \eta^{(t)} H_i^{(t)\top} \nabla_z \phi_i(h(w^{(t)}; i)) \big] + \varepsilon^2 v$$
$$= \Big( \frac{1}{n} \sum_{i=1}^{n} \eta^{(t)} H_i^{(t)\top} H_i^{(t)} \eta^{(t)} + \varepsilon^2 I \Big) v - \Big( \frac{1}{n} \sum_{i=1}^{n} \alpha_i^{(t)} \eta^{(t)} H_i^{(t)\top} \nabla_z \phi_i(h(w^{(t)}; i)) \Big).$$
Therefore,
$$v_{*\mathrm{reg}}^{(t)} = \Big( \frac{1}{n} \sum_{i=1}^{n} \eta^{(t)} H_i^{(t)\top} H_i^{(t)} \eta^{(t)} + \varepsilon^2 I \Big)^{-1} \Big( \frac{1}{n} \sum_{i=1}^{n} \alpha_i^{(t)} \eta^{(t)} H_i^{(t)\top} \nabla_z \phi_i(h(w^{(t)}; i)) \Big). \qquad (16)$$
If $\varepsilon^2$ is small enough, then $v_{*\mathrm{reg}}^{(t)}$ is a close approximation of the solution $v_*^{(t)}$ of problem (10). Our first algorithm instantiates Framework 1 with this approximation.

Algorithm 1 Solve for the exact solution of the regularized problem
Initialization: Choose an initial point $w^{(0)} \in \mathbb{R}^d$ and a tolerance $\varepsilon > 0$;
for $t = 0, 1, \dots, T-1$ do
  Update the search direction $v^{(t)}$ as the solution $v_{*\mathrm{reg}}^{(t)}$ of the problem in (15):
  $$v^{(t)} = v_{*\mathrm{reg}}^{(t)} = \Big( \frac{1}{n} \sum_{i=1}^{n} \eta^{(t)} H_i^{(t)\top} H_i^{(t)} \eta^{(t)} + \varepsilon^2 I \Big)^{-1} \Big( \frac{1}{n} \sum_{i=1}^{n} \alpha_i^{(t)} \eta^{(t)} H_i^{(t)\top} \nabla_z \phi_i(h(w^{(t)}; i)) \Big)$$
  Update $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$
end for

The following lemma relates the regularized solution $v_{*\mathrm{reg}}^{(t)}$ to the near-optimal solution $\hat{v}_{*\varepsilon}^{(t)}$ of the original convex problem.

Lemma 2. For a given $\varepsilon > 0$, suppose that Assumption 3 holds for a bound $V > 0$. Then, for iterations $0 \le t < T$, the optimal solution $v_{*\mathrm{reg}}^{(t)}$ of problem (15) satisfies $\| v_{*\mathrm{reg}}^{(t)} \|^2 \le 2 + V$ and
$$\frac{1}{2} \frac{1}{n} \sum_{i=1}^{n} \big\| \eta^{(t)} H_i^{(t)} v_{*\mathrm{reg}}^{(t)} - \alpha_i^{(t)} \nabla_z \phi_i(h(w^{(t)}; i)) \big\|^2 \le \Big( 1 + \frac{V}{2} \Big) \varepsilon^2. \qquad (17)$$

Based on Lemma 2, we guarantee the global convergence of Algorithm 1 and prove our first theorem. Since it is currently expensive to solve for the exact solution of problem (15), this algorithm serves as a theoretical method for obtaining global convergence for finite-sum minimization.

Theorem 1. Let $w^{(t)}$ be generated by Algorithm 1, where we use the closed-form solution for the search direction. We execute Algorithm 1 for $T = \frac{\beta}{\varepsilon}$ outer loops for some constant $\beta > 0$. We assume Assumption 1 holds. Suppose that Assumption 2 holds for $G > 0$ and Assumption 3 holds for $V > 0$. We set the step size equal to $\eta^{(t)} = D\sqrt{\varepsilon}$ for some $D > 0$ and choose a learning rate $\alpha_i^{(t)} = (1 + \varepsilon)\alpha_i^{(t-1)} = (1 + \varepsilon)^t \alpha_i^{(0)}$. Based on $\beta$, we define $\alpha_i^{(0)} = \frac{\alpha}{e^{\beta} L_\phi}$ with $\alpha \in (0, \frac{1}{3})$. Let $F_*$ be the global minimum of $F$, and $h_i^* = \arg\min_{z \in \mathbb{R}^c} \phi_i(z)$, $i \in [n]$. Then
$$\frac{1}{T} \sum_{t=0}^{T-1} \big[ F(w^{(t)}) - F_* \big] \le \frac{e^{\beta} L_\phi (1 + \varepsilon)}{2(1 - 3\alpha)\alpha\beta} \cdot \frac{1}{n} \sum_{i=1}^{n} \| h(w^{(0)}; i) - h_i^* \|^2 \cdot \varepsilon + \frac{e^{\beta} L_\phi (3\varepsilon + 2)}{8\alpha(1 - 3\alpha)} \big[ c(4 + (V + 2) G D^2)^2 + 8 + 4V \big] \cdot \varepsilon. \qquad (18)$$

We note that $\beta$ is a constant used to choose the number of iterations $T$; the analysis can be simplified by choosing $\beta = 1$ with $T = \frac{1}{\varepsilon}$. Notice that the common convergence criterion for finding a stationary point of non-convex problems is $\frac{1}{T} \sum_{t=1}^{T} \| \nabla F(w_t) \|^2 \le O(\varepsilon)$. This criterion has been widely used in the existing literature on non-convex optimization. Our convergence criterion $\frac{1}{T} \sum_{t=1}^{T} [F(w_t) - F_*] \le O(\varepsilon)$ is slightly different, in order to find a global solution for non-convex problems. Our proof for Theorem 1 is novel and is originally motivated by the gradient descent update (7) and the convexity of the loss functions $\phi_i$. For this reason it may not be a surprise that Algorithm 1 can find an $\varepsilon$-global solution after $O(\frac{1}{\varepsilon})$ iterations.
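For illustration, the following self-contained numpy sketch forms the closed-form regularized direction (16) from stand-in Jacobians and gradients (random placeholders for what backpropagation would provide at $w^{(t)}$); the sizes and constants are assumptions for the example, not recommended settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, d = 4, 2, 40                     # samples, output dim, parameter dim (d >> n*c)
eta, alpha, eps = 0.1, 0.05, 1e-3

# Stand-ins for the per-sample Jacobians H_i (c x d) and gradients grad phi_i (c,).
H = [rng.normal(size=(c, d)) for _ in range(n)]
g = [rng.normal(size=c) for _ in range(n)]

# Closed-form regularized direction (16):
#   v_reg = ( (1/n) sum_i eta H_i^T H_i eta + eps^2 I )^{-1} ( (1/n) sum_i alpha eta H_i^T g_i )
A = sum(eta * Hi.T @ Hi * eta for Hi in H) / n + eps**2 * np.eye(d)
b = sum(alpha * eta * Hi.T @ gi for Hi, gi in zip(H, g)) / n
v_reg = np.linalg.solve(A, b)

# Sanity check: the gradient of the regularized objective Psi vanishes at v_reg.
grad_Psi = A @ v_reg - b
assert np.linalg.norm(grad_Psi) < 1e-8
print("||v_reg|| =", np.linalg.norm(v_reg))
```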
However, computing the exact solution in every iteration might be extremely challenging, especially when the number of samples $n$ is large. Therefore, we present a different approach to this problem in the following section.

5.2 APPROXIMATION USING GRADIENT DESCENT

In this section, we use the Gradient Descent (GD) algorithm to solve the strongly convex problem (15). It is well known that if $\psi(x) - \frac{\mu}{2}\|x\|^2$ is convex for all $x \in \mathbb{R}^c$, then $\psi(x)$ is $\mu$-strongly convex (see e.g. Nesterov (2004)). Hence $\Psi(\cdot)$ is $\varepsilon^2$-strongly convex. For each iteration $t$, we use GD to find a search direction $v^{(t)}$ which is sufficiently close to the optimal solution $v_{*\mathrm{reg}}^{(t)}$, in the sense that
$$\| v^{(t)} - v_{*\mathrm{reg}}^{(t)} \| \le \varepsilon. \qquad (19)$$
Our Algorithm 2 is described as follows.

Algorithm 2 Solve the regularized problem using Gradient Descent
Initialization: Choose an initial point $w^{(0)} \in \mathbb{R}^d$ and a tolerance $\varepsilon > 0$;
for $t = 0, 1, \dots, T-1$ do
  Use the Gradient Descent algorithm to solve problem (15) and find a solution $v^{(t)}$ that satisfies $\| v^{(t)} - v_{*\mathrm{reg}}^{(t)} \| \le \varepsilon$
  Update $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$
end for

Since Algorithm 2 can only approximate a solution to within $\varepsilon$-precision, we need a supplemental assumption for the analysis of our next Theorem 2:

Assumption 4. Let $H_i^{(t)}$ be the Jacobian matrix defined in Lemma 1. We assume that there exists some constant $H > 0$ such that, for $i \in [n]$, $\varepsilon > 0$, and $0 \le t < T$ as in Algorithm 2,
$$\| H_i^{(t)} \| \le \frac{H}{\sqrt{\varepsilon}}. \qquad (20)$$

Assumption 4 requires a mild condition on the bounded Jacobian of $h(w; i)$, where the upper bound may depend on $\varepsilon$. This flexibility allows us to accommodate a suitable dependence on $\varepsilon$ in the theoretical analysis. We are now ready to present our convergence theorem for Algorithm 2.

Theorem 2. Let $w^{(t)}$ be generated by Algorithm 2, where $v^{(t)}$ satisfies (19). We execute Algorithm 2 for $T = \frac{\beta}{\varepsilon}$ outer loops for some constant $\beta > 0$. We assume Assumption 1 holds. Suppose that Assumption 2 holds for $G > 0$, Assumption 3 holds for $V > 0$ and Assumption 4 holds for $H > 0$. We set the step size equal to $\eta^{(t)} = D\sqrt{\varepsilon}$ for some $D > 0$ and choose a learning rate $\alpha_i^{(t)} = (1 + \varepsilon)\alpha_i^{(t-1)} = (1 + \varepsilon)^t \alpha_i^{(0)}$. Based on $\beta$, we define $\alpha_i^{(0)} = \frac{\alpha}{e^{\beta} L_\phi}$ with $\alpha \in (0, \frac{1}{4})$. Let $F_*$ be the global minimum of $F$, and $h_i^* = \arg\min_{z \in \mathbb{R}^c} \phi_i(z)$, $i \in [n]$. Then
$$\frac{1}{T} \sum_{t=0}^{T-1} \big[ F(w^{(t)}) - F_* \big] \le \frac{e^{\beta} L_\phi (1 + \varepsilon)}{2(1 - 4\alpha)\alpha\beta} \cdot \frac{1}{n} \sum_{i=1}^{n} \| h(w^{(0)}; i) - h_i^* \|^2 \cdot \varepsilon + \frac{e^{\beta} L_\phi (4\varepsilon + 3)}{2\alpha(1 - 4\alpha)} \big[ D^2 H^2 + c(2 + (V + \varepsilon^2 + 2) G D^2)^2 + 2 + V \big] \cdot \varepsilon.$$

Theorem 2 implies Corollary 2, which provides the computational complexity of Algorithm 2. Note that for (Stochastic) Gradient Descent, the complexity is derived in terms of component gradient calculations for the finite-sum problem (1). As an alternative, for Algorithm 2 we count the number of component gradients of problem (15). Each such individual gradient has the form
$$\nabla_v \psi_i(v) = \eta^{(t)} H_i^{(t)\top} H_i^{(t)} \eta^{(t)} v - \alpha_i^{(t)} \eta^{(t)} H_i^{(t)\top} \nabla_z \phi_i(h(w^{(t)}; i)).$$
In machine learning applications, the gradient of $f(\cdot; i)$ is calculated using automatic differentiation (i.e. backpropagation). Since $f(\cdot; i)$ is the composition of the network structure $h(\cdot; i)$ and the loss function $\phi_i(\cdot)$, this process also computes the Jacobian matrix $H_i^{(t)}$ and the gradient $\nabla_z \phi_i(h(w^{(t)}; i))$ at a specific weight $w^{(t)}$. Since matrix-vector multiplication is not expensive, the cost of computing a component gradient of problem (15) is similar to that of problem (1). Corollary 2.
Suppose that the conditions in Theorem 2 hold with η(t) = D √ ε̂√ N for some D > 0 and 0 < ε̂ ≤ N (that is, we set ε = ε̂/N ), where N = eβLφ ∑n i=1 ‖h(w (0);i)−h∗i ‖ 2 n(1−4α)αβ + 7eβLφ[D2H2+c(2+(V+3)GD2)2+2+V ] 2α(1−4α) . Then, the total complexity to guarantee min0≤t≤T−1[F (w(t))−F∗] ≤ 1T ∑T−1 t=0 [F (w (t))−F∗] ≤ ε̂ is O ( nN 3β ε̂3 (D 2H2 + (ε̂2/N)) log(Nε̂ ) ) . Remark 2. Corollary 2 shows that O (1/ε̂) outer loop iterations are needed in order to reach an ε̂-global solution, and it proves that each iteration needs the equivalent of O ( n ε̂2 log( 1 ε̂ ) ) gradient computations for computing an approximate solution. In total, Algorithm 2 has total complexity O ( n ε̂3 log( 1 ε̂ ) ) for finding an ε̂-global solution. For a comparison, Stochastic Gradient Descent uses a total of O( 1ε2 ) gradient computations to find a stationary point satisfying E[‖∇F (ŵ)‖2] ≤ ε for non-convex problems (Ghadimi & Lan, 2013). Gradient Descent has a better complexity in terms of ε, i.e. O(nε ) such that ‖∇F (ŵ)‖ 2 ≤ ε (Nesterov, 2004). However, both methods may not be able to reach a global solution of (1). In order to guarantee global convergence for nonconvex settings, one may resort to use Polyak-Lojasiewicz (PL) inequality (Karimi et al., 2016; Gower et al., 2021). This assumption is widely known to be strong, which implies that every stationary point is also a global minimizer. 6 FURTHER DISCUSSION AND CONCLUSIONS This paper presents an alternative composite formulation for solving the finite-sum optimization problem. Our formulation allows a new way of exploiting the structure of machine learning problems and the convexity of squared loss and softmax cross entropy loss, and leads to a novel algorithmic framework that guarantees global convergence (when the outer loss functions are convex and Lipschitz-smooth). Our analysis is general and can be applied to various different learning architectures, in particular, our analysis and assumptions match practical neural networks; in recent years, there has been a great interest in the structure of deep learning architectures for over-parameterized settings (Arora et al., 2018; Allen-Zhu et al., 2019; Nguyen & Mondelli, 2020). Algorithm 2 demonstrates a gradient method to solve the regularized problem, however, other methods can be applied to our framework (e.g. conjugate gradient descent). Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. Possible research directions include more practical algorithm designs based on our Framework 1, and different related methods to solve the regularized problem and approximate the solution. This potentially leads to a new class of efficient algorithms for machine learning problems. This paper presents a new perspective to the research community. ETHICS STATEMENT This paper does not contain ethics concerns. APPENDIX A TABLE OF NOTATIONS Notation Meaning F∗ Global minimization function of F in (1) F∗ = minw∈Rd F (w) h∗i h ∗ i = arg minz∈Rc φi(z), i ∈ [n] v (t) ∗ Solution of the convex problem in (10) minv∈Rd 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 v(t) An approximation of v(t)∗ which is used as the search direction in Framework 1 v̂ (t) ∗ε A vector that satisfies 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 ≤ ε2 for some ε > 0 and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. 
v (t) ∗ reg Solution of the strongly convex problem in (15) minv∈Rd { 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 + ε 2 2 ‖v‖ 2 } B USEFUL RESULTS The following lemmas provide key tools for our results. Lemma 3 (Squared loss). Let b ∈ Rc and define φ(z) = 12‖z − b‖ 2 for z ∈ Rc. Then φ is convex and Lφ-smooth with Lφ = 1. Lemma 4 (Softmax cross-entropy loss). Let index a ∈ [c] and define φ(z) = log [ c∑ k=1 exp(zk − za) ] = log [ c∑ k=1 exp(w>k z) ] , for z = (z1, . . . , zc)> ∈ Rc, wherewk = ek−ea with ei representing the i-th unit vector (containing 1 at the i-th position and 0 elsewhere). Then φ is convex and Lφ-smooth with Lφ = 1. The following lemma is a standard result in (Nesterov, 2004). Lemma 5 ((Nesterov, 2004)). If φ is Lφ-smooth and convex, then for ∀z ∈ Rc, ‖∇φ(z)‖2 ≤ 2Lφ(φ(z)− φ(z∗)), (21) where z∗ = arg minz φ(z). The following useful derivations could be used later in our theoretical analysis. Since φi is convex, by Definition 2 we have φi(h(w; i)) ≥ φi(h(w′; i)) + 〈 ∇zφi(z) ∣∣∣ z=h(w′;i) , h(w; i)− h(w′; i) 〉 . (22) If φi is convex and Lφ-smooth, then by Lemma 5∥∥∥∥∇zφi(z)∣∣∣ z=h(w;i) ∥∥∥∥2 ≤ 2Lφ [φi(h(w; i))− φi(h∗i )] , (23) where h∗i = arg minz∈Rc φi(z). We compute gradients of f(w; i) in term of φi(h(w; i)). • Gradient of softmax cross-entropy loss: ∇φi(z) ∣∣ z=h(w;i) = ( ∂φi(z) ∂z1 ∣∣∣ z=h(w;i) , . . . , ∂φi(z) ∂zc ∣∣∣ z=h(w;i) )> , where for j ∈ [c], ∂φi(z) ∂zj ∣∣∣ z=h(w;i) = exp ( [h(w;i)]j−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j 6= I(y(i)) − ∑ k 6=I(y(i)) exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j = I(y(i)) . (24) • Gradient of squared loss: ∇φi(z) ∣∣ z=h(w;i) = h(w; i)− y(i). (25) C ADDITIONAL DISCUSSION C.1 ABOUT ASSUMPTION 2 We make a formal assumption for the case h(·; i) is closely approximated by k(·; i). Assumption 5. We assume that for all i ∈ [n] there exists some approximations k(w; i) : Rd → Rc such that |kj(w; i)− hj(w; i)| ≤ ε, ∀w ∈ Rd, i ∈ [n] and j ∈ [c], (26) where k(·; i) are twice continuously differentiable (i.e. the second-order partial derivatives of all scalars kj(·; i) are continuous for all i ∈ [n]), and that their Hessian matrices are bounded: ‖Mi,j(w)‖ = ‖Jw (∇wkj(w; i))‖ ≤ G, ∀w ∈ Rd, i ∈ [n] and j ∈ [c]. (27) Assumption 5 allows us to prove the following Lemma that bound the error in equation (9): Lemma 6. Suppose that Assumption 5 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , we have: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (28) where H(t)i is defined to be the Jacobian matrix of the approximation k(w; i) at w (t): H (t) i := Jwk(w; i)|w=w(t) = ∂k1(w;i) ∂w1 . . . ∂k1(w;i)∂wd . . . . . . . . . ∂kc(w;i) ∂w1 . . . ∂kc(w;i)∂wd ∣∣∣∣∣ w=w(t) ∈ Rc×d. (29) Additionally we have, | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. (30) Note that these result recover the case when h(·; i) is itself smooth. Hence we analyze our algorithms using the result of Lemma 6, which generalizes the result from Lemma 1. C.2 ABOUT ASSUMPTION 3 In this section, we justify the existence of the search direction in Assumption 3 (almost surely). We argue that there exists a vector v̂(t)∗ε satisfying 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. It is sufficient to find a vector v satisfying that η(t)H (t) i v = α (t) i ∇zφi(h(w (t); i)) for every i ∈ [n]. Since the solution v is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n ·c constraints and d variables. 
LetA and b be the following stacked matrix and vector: A = H (t) 1 η (t) . . . H (t) n η(t) ∈ Rn·c×d, and b = α (t) 1 ∇zφ1(h(w(t); i)) . . . α (t) n ∇zφn(h(w(t); i)) ∈ Rn·c, then the problem reduce to finding the solution of the equation Av = b. In the over-parameterized setting where dimension d is sufficiently large (d n · c), then rank A = n · c almost surely and there exists almost surely a vector v that interpolates all the training set. To demonstrate this fact easier, we consider a simple neural network where the classifier h(w; i) is formulated as h(w; i) = W (2)>σ(W (1)>x(i)), where c = 1, W (1) ∈ Rm×l and W (2) ∈ Rl×1, w = vec({W (1),W (2)}) ∈ Rd is the vectorized weight where d = l(m+ 1) and σ is sigmoid activation function. H (t) i is defined to be the Jacobian matrix of h(w; i) at w (t): H (t) i := Jwh(w; i)|w=w(t) = [ ∂h(w;i) ∂w1 . . . ∂h(w;i)∂wd ] ∣∣∣∣∣ w=w(t) ∈ R1×d, then A = η(t) H (t) 1 . . . H (t) n = η(t) ∂h(w;1) ∂w1 . . . ∂h(w;1)∂wd . . . . . . . . . ∂h(w;n) ∂w1 . . . ∂h(w;n)∂wd ∈ Rn×d. We want to show that A has full rank, almost surely. We consider the over-parameterized setting where the last layer has at least n neuron (i.e. l = n and the simple version when c = 1. We argue that rank of matrix A is greater than or equal to rank of the submatrix B created by the weights of the last layer W (2) ∈ Rn: B = ∂h(w;1) ∂W (2) 1 . . . ∂h(w;1) ∂W (2) n . . . . . . . . . ∂h(w;n) ∂W (2) 1 . . . ∂h1(w;n) ∂W (2) n ∈ Rn×n. Note that h(·, i) is a linear function of the last weight layers (in this simple case W (2) ∈ Rn and σ(W (1)>x(i)) ∈ Rn), we can compute the partial derivatives as follows: ∂h(w; i) ∂W (2) = σ(W (1)>x(i)); i ∈ [n]. Hence B = σ(W (1)>x(1)) . . . σ(W (1)>x(n)) ∈ Rn×n. Assuming that there are no identical data, and σ is the sigmoid activation, the set of weights W (1) that make matrix B degenerate has measure zero. Hence B has full rank almost surely, and we have the same conclusion for A. Therefore we are able to prove the almost surely existence of a solution v of the linear equation Av = b for simple two layers network. Using the same argument, this result can be generalized for larger neural networks where the dimension d is sufficiently large (d nc). C.3 INITIALIZATION EXAMPLE Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(0)i and ∇zφi(h(w(0); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. In order to accommodate the choice of learning rate η(0) = D √ ε in our theorems, in this section we describe a network initialization that satisfies ‖H(0)i ‖ = Θ ( 1√ ε ) where the gradient norm ‖∇zφi(h(w(0); i))‖ is at most constant order with respect to ε. To simplify the problem, we only consider small-dimension data and networks without activation. About the target vector: We choose φi to be the softmax cross-entropy loss. By Lemma 7 (see below), we have that the gradient norm is upper bounded by a constant c, where c is the output dimension of the problem and is not dependent on ε. Note that when we stack all gradients for n data points, then the size of new vector is still not dependent on ε. About the network architecture: For simplicity, we consider the following classification problem where • The input data is in R2. There are only two data points {x(1), x(2)}. Input data is bounded and non-degenerate (we will clarify this property later). • The output data is (categorical) in R2: {y(1) = (1, 0), y(2) = (0, 1)}. 
We want to have an over-parameterized setting where the dimension of weight vector is at least nc = 4. We consider a simple network with two layers, no biases and no activation functions. Let the number of neurons in the hidden layer bem. The flow of this network is (in) R2 → Rm → R2 (out). First, we consider the case where m = 1. • The first layer has 2 parameters (w1, w2) and only 1 neuron that outputs z(i) = w1x (i) 1 + w2x (i) 2 (the subscript is for the coordinate of input data x (i)). • The second layer has 2 parameters (w3, w4). The final output is h(w, i) = [w3(w1x (i) 1 + w2x (i) 2 ), w4(w1x (i) 1 + w2x (i) 2 )] > ∈ R2, with w = [w1, w2, w3, w4]> ∈ R4. This network satisfies that the Hessian matrices of h(w; i) are bounded. Let Q and b be the following stacked matrix and vector: Q = [ H (0) 1 H (0) 2 ] ∈ R4×4, and b = [ ∇zφ1(h(w(0); 1)) ∇zφ2(h(w(0); 2)) ] ∈ R4, Then we have the following: Q = Q(w) = [ H (0) 1 H (0) 2 ] = ∇w[w3(w1x(1)1 + w2x (1) 2 )] ∇w[w4(w1x(1)1 + w2x (1) 2 )] ∇w[w3(w1x(2)1 + w2x (2) 2 )] ∇w[w4(w1x(2)1 + w2x (2) 2 )] = w3x (1) 1 w3x (1) 2 w1x (1) 1 + w2x (1) 2 0 w4x (1) 1 w4x (1) 2 0 w1x (1) 1 + w2x (1) 2 w3x (2) 1 w3x (2) 2 w1x (2) 1 + w2x (2) 2 0 w4x (2) 1 w4x (2) 2 0 w1x (2) 1 + w2x (2) 2 . The determinant of this matrix is a polynomial of the weight w and the input data. Under some mild non-degenerate condition of the input data, we can choose some base point w′ that made this matrix invertible (note that if this condition is not satisfied, we can rescale/add a very small noise to the data - which is the common procedure in machine learning). Hence the system Qu = b always has a solution. Now we consider the following two initializations: 1. We choose to initialize the starting point at w(0) = 1√ ε w′ and note that Q(w) is a linear function of w and Q(w′) is independent of ε. Then the norm of matrix Q(w(0)) has the same scale with 1√ ε . 2. Instead of choosing m = 1, we consider an over-parameterized network where m = 1ε (recall that m is the number of neurons in the hidden layer). The hidden layer in this case is: z = z (i) 1 = w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 . . . z (i) m = w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 . The output layer is:{ y (i) 1 = z (i) 1 w (2) 1,1 + · · ·+ z (i) m w (2) m,1 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,1 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,1 y (i) 2 = z (i) 1 w (2) 1,2 + · · ·+ z (i) m w (2) m,2 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,2 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,2 with w = [w(1)1,1, . . . , w (1) 1,m, w (1) 2,1, . . . , w (1) 2,m, w (2) 1,1, w (2) 1,2, . . . , w (2) m,1, w (2) m,2] > ∈ R4m. Hence, Q(w) = w (2) 1,1x (1) 1 . . . w (2) m,1x (1) 1 w (2) 1,1x (1) 2 . . . w (2) m,1x (1) 2 z (1) 1 0 . . . z (1) m 0 w (2) 1,2x (1) 1 . . . w (2) m,2x (1) 1 w (2) 1,2x (1) 2 . . . w (2) m,2x (1) 2 0 z (1) 1 . . . 0 z (1) m w (2) 1,1x (2) 1 . . . w (2) m,1x (2) 1 w (2) 1,1x (2) 2 . . . w (2) m,1x (2) 2 z (2) 1 0 . . . z (2) m 0 w (2) 1,2x (2) 1 . . . w (2) m,2x (2) 1 w (2) 1,2x (2) 2 . . . w (2) m,2x (2) 2 0 z (2) 1 . . . 0 z (2) m . Hence, the number of (possibly) non-zero elements in each row is 3m = 3ε . For matrix A of rank r, we have ‖A‖2 ≤ ‖A‖F ≤ √ r‖A‖2. Since the rank of Q(w) is at most 4 (nc = 4, independent of ε), we only need to find the Frobenius norm of Q(w). We have ‖Q(w)‖F = √√√√ 4∑ i=1 4m∑ j=1 |qij |2. Let qmin and qmax be the element with smallest/largest magnitude of Q(w). 
Suppose that x(i) 6= (0, 0) and choose w 6= 0 such that z 6= 0, qmin > 0 and independent of ε. Hence, √ 8√ ε |qmin| ≤ ‖Q(w)‖F ≤ √ 12√ ε |qmax|. Hence, ‖Q(w)‖ = Θ ( 1√ ε ) . Therefore this simple network initialization supports the dependence on ε for our Assumption 3. We note that a similar setting is found in (Allen-Zhu et al., 2019), where the authors initialize the weights using a random Gaussian distribution with a variance depending on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the number of neurons m depends inversely on the tolerance ε. Lemma 7. For softmax cross-entropy loss, and x = h(w; i) ∈ Rc, for ∀w ∈ Rd and i ∈ [n], we have ∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 ≤ c. (31) Proof. By (24), we have for i = 1, . . . , n, • For j 6= I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1. • For j = I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = (∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) )2 = ( ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1 Hence, for i = 1, . . . , n,∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 = c∑ j=1 ( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 ≤ c. This completes the proof. D PROOFS OF LEMMAS AND COROLLARY 1 PROOF OF LEMMA 1 Proof. Since h(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs hj(·; i) where j ∈ [c] and i ∈ [n]: hj(w (t+1); i) = hj(w (t) − η(t)v(t); i) = hj(w (t); i)− Jwhj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (32) where Mi,j(w̃(t)) is the Hessian matrices of hj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. This leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖ (11) ≤ 1 2 (η(t))2‖v(t)‖2G, j ∈ [c]. PROOF OF LEMMA 2 Proof. From Assumption 3, we know that there exists v̂(t)∗ε so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2, and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. Hence, 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v̂(t)∗ε ‖2 ≤ ε2 + ε2 2 V = (1 + V 2 )ε2. Since v(t)∗ reg is the optimal solution of the problem in (15) for 0 ≤ t < T , we have 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v(t)∗ reg‖2 ≤ (1 + V 2 )ε2. Therefore, we have (17) and ‖v(t)∗ reg‖2 ≤ 2 + V for 0 ≤ t < T . PROOF OF LEMMA 3 Proof. 1. We want to show that for any α ∈ [0, 1] φ(αz1 + (1− α)z2) ≤ αφ(z1) + (1− α)φ(z2), ∀z1, z2 ∈ Rc, (33) in order to have the convexity of φ with respect to z (see (Nesterov, 2004)). For any α ∈ [0, 1], we have for ∀z1, z2 ∈ Rc, α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − ‖α(z1 − b) + (1− α)(z2 − b)‖2 = α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − α2‖z1 − b‖2 − (1− α)2‖z2 − b‖2 − 2α(1− α)〈z1 − b, z2 − b〉 ≥ α(1− α)‖z1 − b‖2 + (1− α)α‖z2 − b‖2 − 2α(1− α)‖z1 − b‖ · ‖z2 − b‖ = α(1− α) (‖z1 − b‖ − ‖z2 − b‖)2 ≥ 0, where the first inequality follows according to Cauchy-Schwarz inequality 〈a, b〉 ≤ ‖a‖·‖b‖. Hence, 1 2 ‖αz1 + (1− α)z2 − b‖2 ≤ α 2 ‖z1 − b‖2 + (1− α) 2 ‖z2 − b‖2. 
Therefore, (33) implies the convexity of φ with respect to z. 2. We want to show that ∃Lφ > 0 such that ‖∇φ(z1)−∇φ(z2)‖ ≤ Lφ‖z1 − z2‖, ∀z1, z2 ∈ Rc. (34) Notice that∇φ(z) = z − b, then clearly ∀z1, z2 ∈ Rc, ‖∇φ(z1)−∇φ(z2)‖ = ‖z1 − z2‖. Therefore, (34) implies the Lφ-smoothness of φ with respect to z with Lφ = 1. PROOF OF LEMMA 4 Proof. 1. For ∀z1, z2 ∈ Rc and 1 ≤ k ≤ c, denote uk,1 = exp(w>k z1) and uk,2 = exp(w>k z2) and using Holder inequality c∑ k=1 ak · bk ≤ ( c∑ k=1 |ak|p ) 1 p ( c∑ k=1 |bk|q ) 1 q , where 1 p + 1 q = 1, (35) we have φ(αz1 + (1− α)z2) = log [ c∑ k=1 exp(w>k (αz1 + (1− α)z2)) ] = log [ c∑ k=1 uαk,1 · u (1−α) k,2 ] (35) ≤ log ( c∑ k=1 u α· 1α k,1 )α( c∑ k=1 u (1−α)· 1 (1−α) k,2 )1−α = α log [ c∑ k=1 exp(w>k z1) ] + (1− α) log [ c∑ k=1 exp(w>k z2) ] = αφ(z1) + (1− α)φ(z2), where the first inequality since log(x) is an increasing function for ∀x > 0 and exp(v) > 0 for ∀v ∈ R. Therefore, (33) implies the convexity of φ with respect to z. 2. Note that ‖∇2φ(z)‖ ≤ Lφ if and only if φ(z) is Lφ-smooth (see (Nesterov, 2004)). First, we compute gradient of φ(z): • For i 6= a: ∂φ(z) ∂zi = exp(zi − za)∑c k=1 exp(zk − za) . • For i = a: ∂φ(z) ∂zi = − ∑ k 6=a exp(zk − za)∑c k=1 exp(zk − za) = − ∑c k=1 exp(zk − za) + 1∑c k=1 exp(zk − za) = −1 + 1∑c k=1 exp(zk − za) = −1 + exp(zi − za)∑c k=1 exp(zk − za) . We then calculate ∂ 2φ(z) ∂zj∂zi = ∂∂zj ( ∂φ(z) ∂zi ) • For i = j: ∂2φ(z) ∂zj∂zi = exp(zi − za)[ ∑c k=1 exp(zk − za)]− exp(zi − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 = exp(zi − za)[ ∑c k=1 exp(zk − za)− exp(zi − za)] [ ∑c k=1 exp(zk − za)]2 . • For i 6= j: ∂2φ(z) ∂zj∂zi = − exp(zj − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 . Denote that yi = exp(zi − za) ≥ 0, i ∈ [c], we have: • For i = j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = ∣∣∣∣yi(∑ck=1 yk − yi)(∑ck=1 yk)2 ∣∣∣∣ . • For i 6= j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = |yiyj |(∑ck=1 yk)2 . Recall that for matrix A = (aij) ∈ Rc×c: ‖A‖2 ≤ ‖A‖2F = ∑c i=1 ∑c j=1 |aij |2. We have: c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1(∑ck=1 yk)4 y2i ( c∑ k=1 yk − yi)2 + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 y2i ( c∑ k=1 yk) 2 − 2y2i c∑ k=1 yk.yi + y 4 i + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 [ y2i ( c∑ k=1 yk) 2 − 2y3i c∑ k=1 yk + y 2 i c∑ k=1 y2k ] Therefore, ‖∇2φ(z)‖2 ≤ c∑ i=1 c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1 ( ∑c k=1 yk) 4 [ ( c∑ i=1 y2i )( c∑ k=1 yk) 2 − 2( c∑ i=1 y3i )( c∑ k=1 yk) + ( c∑ i=1 y2i )( c∑ k=1 y2k) ] ≤ ( ∑c i=1 y 2 i )( ∑c k=1 yk) 2 ( ∑c k=1 yk) 4 ≤ ( ∑c k=1 yk) 4 ( ∑c k=1 yk) 4 = 1, where the last inequality holds since ( c∑ i=1 y2i )( c∑ k=1 y2k) ≤ ( c∑ i=1 y3i )( c∑ k=1 yk)⇔ ( c∑ k=1 y2k) ≤ √√√√( c∑ i=1 y3i )( c∑ k=1 yk), which follows by the application of Holder inequality (35) with p = 2, q = 2, ak = y 3/2 k , and bk = y 1/2 k (Note that yk ≥ 0, k ∈ [c]). Hence, ‖∇2φ(z)‖ ≤ Lφ with Lφ = 1 which is equivalent to Lφ-smoothness of φ. PROOF OF LEMMA 6 Proof. Since k(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs kj(·; i) where j ∈ [c] and i ∈ [n]: kj(w (t+1); i) = kj(w (t) − η(t)v(t); i) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (36) where Mi,j(w̃(t)) is the Hessian matrices of kj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. 
Shifting this back to the original function hj(·; i) we have: hj(w (t+1); i) = kj(w (t+1); i) + (hj(w (t+1); i)− kj(w(t+1); i)) (36) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)), = hj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), which leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ + |hj(w(t+1); i)− kj(w(t+1); i)|+ |kj(w(t); i)− hj(w(t); i)| (26) ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣+ 2ε, ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖+ 2ε (11) ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. PROOF OF COROLLARY 1 Proof. The proof of this corollary follows directly by the applications of Lemmas 3 and 4. E TECHNICAL PROOFS FOR THEOREM 1 Lemma 8. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0, and v(t) = v (t) ∗ reg. Consider η(t) = D √ ε for some D > 0 and ε > 0. For i ∈ [n] and 0 ≤ t < T , we have ‖ (t)i ‖ 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. (37) Proof. From (14), for i ∈ [n], j ∈ [c], and for 0 ≤ t < T , by Lemma 1 and Lemma 6 we have | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε ≤ 1 2 (V + 2)GD2ε+ 2ε = 1 2 ε(4 + (V + 2)GD2), where the last inequality follows by the fact ‖v(t)‖2 = ‖v(t)∗ reg‖2 ≤ 2 + V of Lemma 2 and η(t) = D √ ε. Hence, ‖ (t)i ‖ 2 = c∑ j=1 | (t)i,j | 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. Lemma 9. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D
1. What is the focus and contribution of the paper regarding gradient-based algorithms? 2. What are the strengths of the proposed approach, particularly in terms of its design and analysis? 3. What are the weaknesses of the paper, especially regarding its claims of novelty and the absence of experimental studies? 4. Do you have any concerns regarding the convergence rate of the algorithm, specifically when beta increases? 5. Why did the authors choose not to consider conjugate gradient descent to approximate the solution of the quadratic problem?
Summary Of The Paper Review
Summary Of The Paper
The paper provides a new gradient-based algorithm. The algorithm is based on the observation that the loss function for a single sample can be written as a composition of two functions (the logits and the actual loss function). It computes the direction by solving a quadratic MSE problem. The authors provide a convergence analysis of the algorithm (there is a version where the quadratic problem is solved explicitly through a closed-form expression and a version where gradient descent is applied to solve the problem approximately). There are no computational experiments. The main contributions are in the algorithms themselves and the accompanying analyses. It is unclear how novel the proof techniques are, since everything resembles second-order algorithms. The authors also claim the reformulation itself as a novel contribution; however, such a formulation is straightforward and used in many contexts (some of my lecture slides from years ago show the formulation used by the authors as a possible formulation of the overall loss function). Despite this, the design of the algorithm should get credit.
Review
Strengths: the design of the algorithms based on the particular formulation used; the underlying analysis.
Weaknesses: While this is a theory-oriented paper, the theoretical portion is not strong enough to compensate for the lack of an experimental study. I am unconvinced about the novelty of the analyses. The authors should provide more convincing arguments that the entire material is unrelated to derivative-free second-order algorithms and, for example, BFGS. Regarding Theorem 1, something is strange with respect to \beta: if beta is large (goes to infinity), it increases the number of iterations, yet the right-hand side in (18) goes to infinity. It should be the other way around. To approximately solve (15) or its original version, I wonder why the authors do not consider conjugate gradient descent; it works well for quadratic problems.
ICLR
Title New Perspective on the Global Convergence of Finite-Sum Optimization Abstract Deep neural networks (DNNs) have shown great success in many machine learning tasks. Their training is challenging since the loss surface of the network architecture is generally non-convex, or even non-smooth. How and under what assumptions is guaranteed convergence to a global minimum possible? We propose a reformulation of the minimization problem allowing for a new recursive algorithmic framework. By using bounded style assumptions, we prove convergence to an ε-(global) minimum using Õ(1/ε) gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. 1 INTRODUCTION In recent years, deep neural networks (DNNs) have shown a great success in many machine learning tasks. However, training these neural networks is challenging since the loss surface of network architecture is generally non-convex, or even non-smooth. Thus, there have been a long-standing question on how optimization algorithms may converge to a global minimum. Many previous work have investigated Gradient Descent algorithm and its stochastic version for over-parameterized setting (Arora et al., 2018; Soudry et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). Although these works have shown promising convergence results under certain assumptions, there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization. In this paper, we address this problem using a different perspective. Instead of analyzing the traditional finite-sum formulation, we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier. Representation. Let { (x(i), y(i)) }n i=1 be a given training set with x(i) ∈ Rm, y(i) ∈ Rc, we investigate the following novel representation for deep learning tasks: min w∈Rd { F (w) = 1 n n∑ i=1 φi(h(w; i)) } , (1) where h(·; i) : Rd → Rc, i ∈ [n] = {1, . . . , n}, is the classifier for each input data x(i); and φi : Rc → R, i ∈ [n], is the loss function corresponding to each output data y(i). Our composite formulation (1) is a special case of the finite-sum problem minw∈Rd { F (w) = 1n ∑n i=1 f(w; i) } where each individual function f(·; i) is a composition of the loss function φi and the classifier h(·; i). This problem covers various important applications in machine learning, including logistic regression and neural networks. The most common approach for the finite-sum problem is using first-order methods such as (stochastic) gradient algorithms and making assumptions on the component functions f(·; i). As an alternative, we further investigate the structure of the loss function φi and narrow our assumption on the classifier h(·; i). For the purpose of this work, we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex. Using this representation, we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem. Algorithmic Framework. Representation (1) admits a new perspective. Our key insight is to (A) define z(t)i = h(w (t); i), where t is an iteration count of the outer loop in our algorithmic framework. 
Next (B), we want to approximate the change z(t+1)i − z (t) i in terms of a step size times the gradient ∇φi(z(t)i ) = (∂φi(z)/∂za)a∈[c] ∣∣ z=z (t) i , and (C) we approximate the change h(w(t+1); i)− h(w(t); i) in terms of the first order derivative H (t) i = (∂ha(w; i)/∂wb)a∈[c],b∈[d] ∣∣ w=w(t) . Finally, we combine (A), (B), and (C) to equate the approximations of z(t+1)i − z (t) i and h(w(t+1); i) − h(w(t); i). This leads to a recurrence on w(t) of the form w(t+1) = w(t) − η(t)v(t), where η(t) is a step size and which involves computing v(t) by solving a convex quadratic subproblem, see the details in Section 4. We explain two methods for approximating a solution for the derived subproblem. We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer which can be solved in closed form. And we show how to use Gradient Descent (GD) on the subproblem to find an approximation v(t) of its solution. Convergence Analysis. Our analysis introduces non-standard bounded style assumptions. Intuitively, we assume that our convex and quadratic subproblem has a bounded solution. This allows us to prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)− F∗ ≤ ε, where F∗ is the global minimizer of F . Our analysis applies to a wide range of applications in machine learning: Our results hold for squared loss and softmax cross-entropy loss and applicable for a range of activation functions in DNN as we only assume that the h(·; i) are twice continuously differentiable and their Hessian matrices (second order derivatives) as well as their gradients (first order derivatives) are bounded. Contributions and Outline. Our contributions in this paper can be summarized as follows. • We propose a new representation (1) for analyzing the machine learning minimization problem. Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier. Related work in Section 2 shows how (1) is different from the classical finite-sum problem. • Based on the new representation we propose a novel algorithm framework. The algorithmic framework approximates a solution to a subproblem for which we show two distinct approaches. • For general DNNs and based on bounded style assumptions, we prove a total complexity of Õ( 1ε3 ) to find an ε-(global) solution that satisfies F (ŵ)−F∗ ≤ ε, where F∗ is the global minimizer of F . We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is for future work. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum. The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our setting and deep learning representation. Section 4 explains our key insight and derives our Framework 1. Section 5 presents our algorithms and their global convergence. All technical proofs are deferred to the Appendix. 2 RELATED WORK Formulation for Machine Learning Problems. The finite-sum problem is one of the most important and fundamental problems in machine learning. 
Analyzing this model is the most popular approach in the machine learning literature and it has been studied intensively throughout the years (Bottou et al., 2018; Reddi et al., 2016; Duchi et al., 2011b). Our new formulation (1) is a special case of the finite-sum problem, however, it is much more complicated than the previous model since it involves the data index i both inside the classifiers h(·; i) and the loss functions φi. For a comparison, previous works only consider a common loss function l(ŷ, y) for the predicted value ŷ and output data y (Zou et al., 2018; Soudry et al., 2018). Our modified version of loss function φi is a natural setting for machine learning. We note that when h(w; i) is the output produced by a model, our goal is to match this output with the corresponding target y(i). For that reason, the loss function for each output has a dependence on the output data y(i), and is denoted by φi. This fact reflects the natural setting of machine learning where the outputs are designed to fit different targets, and the optimization process depends on both outer function φi and inner functions h(·; i). This complication may potentially bring a challenge to theoretical analysis. However, with separate loss functions, we believe this model will help to exploit better the structure of machine learning problems and gain more insights on the neural network architecture. Other related composite optimization models are also investigated thoroughly in (Lewis & Wright, 2016; Zhang & Xiao, 2019; Tran-Dinh et al., 2020). Our model is different from these works as it does not have a common function wrapping outside the finite-sum term, as in (Lewis & Wright, 2016). Note that a broad class of variance reduction algorithms (e.g. SAG (Le Roux et al., 2012), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013), SARAH (Nguyen et al., 2017)) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent. In addition, the multilevel composite problem considered in (Zhang & Xiao, 2021) also covers empirical risk minimization problem. However our formulation does not match their work since our inner function h(w; i) is not an independent expectation over some data distribution, but a specific function that depends on the current data. Global Convergence for Neural Networks. A recent popular line of research is studying the dynamics of optimization methods on some specific neural network architectures. There are some early works that show the global convergence of Gradient Descent (GD) for simple linear network and two-layer network (Brutzkus et al., 2018; Soudry et al., 2018; Arora et al., 2019; Du et al., 2019b). Some further works extend these results to deep learning architectures (Allen-Zhu et al., 2019; Du et al., 2019a; Zou & Gu, 2019). These theoretical guarantees are generally proved for the case when the last output layer is fixed, which is not standard in practice. A recent work (Nguyen & Mondelli, 2020) prove the global convergence for GD when all layers are trained with some initial conditions. However, these results are for neural networks without bias neurons and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations. Our novel framework and algorithms do not exclude learning bias layers as in (Nguyen & Mondelli, 2020). Using a different algorithm, Brutzkus et al. 
(2018) investigate Stochastic Gradient Descent (SGD) for two-layer networks in a restricted, linearly separable data setting. This line of research continues with the works of Allen-Zhu et al. (2019); Zou et al. (2018) and later Zou & Gu (2019). They establish the global convergence of SGD for deep neural networks with some probability depending on the number of input data points and the initialization process. Over-Parameterized Settings and other Assumptions for Machine Learning. Most modern learning architectures are over-parameterized, which means that the number of parameters is very large and often far exceeds the number of input data points. Some recent works prove the global convergence of Gradient Descent when the number of neurons is extremely large, e.g. (Zou & Gu, 2019) requires Ω(n^8) neurons for every hidden layer, and (Nguyen & Mondelli, 2020) improves this number to Ω(n^3). If the initial point satisfies some special conditions, they can show a better dependence of Ω(n). In Allen-Zhu et al. (2019), the authors initialize the weights using a random Gaussian distribution whose variance depends on the dimension of the problem. In the non-convex setting, they prove the convergence of SGD under the assumption that the dimension depends inversely on the tolerance ε. We will discuss how these over-parameterized settings might be a necessary condition to develop our theory. Other standard assumptions for machine learning include the bounded gradient assumption (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016; Tran et al., 2021). It is also common to assume that all iterates of an algorithm stay in a bounded domain (Duchi et al., 2011a; Levy et al., 2018; Gürbüzbalaban et al., 2019; Reddi et al., 2018; Vaswani et al., 2021). Since we are analyzing a new composite formulation, it is understandable that our assumptions may also not be standard. However, we believe that there is a strong connection between our assumptions and the traditional setting of machine learning. We will discuss this point more clearly in Section 4. 3 BACKGROUND In this section, we discuss our formulation and notation in detail. Although this paper focuses on deep neural networks, our framework and theoretical analysis are general and applicable to other learning architectures. Deep Learning Representation. Let {(x^{(i)}, y^{(i)})}_{i=1}^n be a training data set where x^{(i)} ∈ R^m is a training input and y^{(i)} ∈ R^c is a training output. We consider a fully-connected neural network with L layers, where the l-th layer, l ∈ {0, 1, . . . , L}, has n_l neurons. We refer to the 0-th and L-th layers as the input and output layers, respectively, so that n_0 = m and n_L = c. For l ∈ {1, . . . , L}, let W^{(l)} ∈ R^{n_{l−1} × n_l} and b^{(l)} ∈ R^{n_l}, where {(W^{(l)}, b^{(l)})}_{l=1}^L are the parameters of the neural network. A classifier h(w; i) is formulated as h(w; i) = W^{(L)⊤} σ_{L−1}(W^{(L−1)⊤} σ_{L−2}(· · · σ_1(W^{(1)⊤} x^{(i)} + b^{(1)}) · · · ) + b^{(L−1)}) + b^{(L)}, where w = vec({W^{(1)}, b^{(1)}, . . . , W^{(L)}, b^{(L)}}) ∈ R^d is the vectorized weight and {σ_l}_{l=1}^{L−1} are activation functions. The most common choices in machine learning are ReLU, sigmoid, hyperbolic tangent and softplus. For j ∈ [c], h_j(·; i) : R^d → R denotes the j-th component function of the output h(·; i), for each data point i ∈ [n]. Moreover, we define h^*_i = arg min_{z∈R^c} φ_i(z), i ∈ [n]. Loss Functions.
The well-known loss functions in neural networks for solving classification and regression problems are softmax cross-entropy loss and square loss, respectively: (Softmax) Cross-Entropy Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = −y(i)> log(softmax(h(w; i))). (2) Squared Loss: F (w) = 1n ∑n i=1 f(w; i) with f(w; i) = 1 2 ‖h(w; i)− y(i)‖2. (3) We provide some basic definitions in optimization theory to support our theory. Definition 1 (L-smooth). Function φ : Rc → R is Lφ-smooth if there exists a constant Lφ > 0 such that, ∀x1, x2 ∈ Rc, ‖∇φ(x1)−∇φ(x2)‖ ≤ Lφ‖x1 − x2‖. (4) Definition 2 (Convex). Function φ : Rc → R is convex if ∀x1, x2 ∈ Rc, φ(x1)− φ(x2) ≥ 〈∇φ(x2), x1 − x2〉. (5) The following corollary shows the properties of softmax cross-entropy loss (2) and squared loss (3). Corollary 1. For softmax cross-entropy loss (2) and squared loss (3), there exist functions h(·; i) : Rd → Rc and φi : Rc → R such that, for i ∈ [n], φi(z) is convex and Lφ-smooth with Lφ = 1, and f(w; i) = φi(h(w; i)) = φi(z) ∣∣ z=h(w;i) . (6) 4 NEW ALGORITHM FRAMEWORK 4.1 KEY INSIGHT We assume f(w; i) = φi(h(w; i)) with φi convex and Lφ-smooth. Our goal is to utilize the convexity of the outer function φi. In order to simplify notation, we write ∇zφi(h(w(t); i)) instead of ∇zφi(z) ∣∣ z=h(w(t);i) and denote z(t)i = h(w (t); i). Starting from the current weight w(t), we would like to find the next point w(t+1) that satisfies the following approximation for all i ∈ [n]: h(w(t+1); i) = z (t+1) i ≈ z (t) i − α (t) i ∇zφi(z (t) i ) = h(w (t); i)− α(t)i ∇zφi(h(w (t); i)). (7) We can see that this approximation is a “noisy” version of a gradient descent update for every function φi, simultaneously for all i ∈ [n]. In order to do this, we use the following update w(t+1) = w(t) − η(t)v(t), (8) where η(t) > 0 is a learning rate and v(t) is a search direction that helps us approximate equation (7). If the update term η(t)v(t) is small enough, and if h(·; i) has some nice smooth properties, then from basic calculus we have the following approximation: h(w(t+1); i) = h(w(t) − η(t)v(t); i) ≈ h(w(t); i)−H(t)i ( η(t)v(t) ) , (9) where H(t)i is a matrix in Rc×d with first-order derivatives. Motivated by approximations (7) and (9), we consider the following optimization problem: v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖H(t)i ( η(t)v ) − α(t)i ∇zφi(h(w (t); i))‖2. (10) Hence, by solving for the solution v(t)∗ of problem (10) we are able to find a search direction for the key approximation (7). This yields our new algorithmic Framework 1, see below. Framework 1 New Algorithm Framework Initialization: Choose an initial point w(0) ∈ Rd; for t = 0, 1, · · · , T − 1 do Solve for an approximation v(t) of the solution v(t)∗ of the problem in (10) v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 Update w(t+1) = w(t) − η(t)v(t) end for 4.2 TECHNICAL ASSUMPTIONS Assumption 1. The loss function φi is convex and Lφ-smooth for i ∈ [n]. Moreover, we assume that it is lower bounded, i.e. infz∈Rc φi(z) > −∞ for i ∈ [n]. We have shown the convexity and smoothness of squared loss and softmax cross-entropy loss in Section 3. The bounded property of φi is required in any algorithm for the well-definedness of (1). Now, in order to use the Taylor series approximation, we need the following assumption on the neural network architecture h: Assumption 2. We assume that h(·; i) is twice continuously differentiable for all i ∈ [n] (i.e. 
the second-order partial derivatives of all scalars hj(·; i) are continuous for all j ∈ [c] and i ∈ [n]), and that their Hessian matrices are bounded, that is, there exists a G > 0 such that for all w ∈ Rd, i ∈ [n] and j ∈ [c], ‖Mi,j(w)‖ = ‖Jw (∇whj(w; i))‖ ≤ G, (11) where Jw denotes the Jacobian1. Remark 1 (Relation to second-order methods). Although our analysis requires an assumption on the Hessian matrices of h(w; i), our algorithms do not use any second order information or try to approximate this information. Our theoretical analysis focused on the approximation of the classifier and the gradient information, therefore is not related to the second order type algorithms. It is currently unclear how to apply second order methods into our problem, however, this is an interesting research question to expand the scope of this work. 1For a continuously differentiable function g(w) : Rd → Rc we define the Jacobian Jw(g(w)) as the matrix (∂ga(w)/∂wb)a∈[c],b∈[d]. Assumption 2 allows us to apply a Taylor approximation of each function hj(·; i) with which we prove the following Lemma that bounds the error in equation (9): Lemma 1. Suppose that Assumption 2 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (12) where H (t) i = Jw(h(w; i))|w=w(t) ∈ R c×d (13) is defined as the Jacobian matrix of h(w; i) at w(t) and entries (t)i,j , j ∈ [c], of vector (t) i satisfy | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G. (14) In order to approximate (7) combined with (9), that is, to make sure the right hand sides of (7) and (9) are close to one another, we consider the optimization problem (10): v (t) ∗ = arg min v∈Rd 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2. The optimal value of problem (10) is equal to 0 if there exists a vector v(t)∗ satisfying η(t)H (t) i v (t) ∗ = α (t) i ∇zφi(h(w(t); i)) for every i ∈ [n]. Since the solution v (t) ∗ is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n · c constraints and d variables. In the overparameterized setting where dimension d is sufficiently large (d n · c) and there are no identical data, there exists almost surely a vector v(t)∗ that interpolates all the training set, see the Appendix for details. Let us note that an approximation of v(t)∗ serves as the search direction for Framework 1. For this reason, the solution v(t)∗ of problem (10) plays a similar role as a gradient in the search direction of (stochastic) gradient descent method. It is standard to assume a bounded gradient in the machine learning literature (Nemirovski et al., 2009; Shalev-Shwartz et al., 2007; Reddi et al., 2016). Motivated by these facts, we assume the following Assumption 3, which implies the existence of a near-optimal bounded solution of (10): Assumption 3. We consider an over-parameterized setting where dimension d is sufficiently large enough to interpolate all the data and the tolerance ε. We assume that there exists a bound V > 0 such that for ε > 0 and 0 ≤ t < T as in Framework 1, there exists a vector v̂(t)∗ε with ‖v̂(t)∗ε ‖2 ≤ V so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(t)i and ∇zφi(h(w(t); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. This setting is similar to previous works, e.g. Allen-Zhu et al. (2019). 
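To make the framework above concrete, the following is a minimal NumPy sketch of Framework 1 in which the subproblem (10) is handled through its ridge-regularized closed form (the approach detailed in the next section). It assumes a toy linear classifier with squared loss; the constants D, β, the learning rates α_i^(t), and the data are illustrative placeholders rather than choices prescribed by the paper.

```python
# Minimal sketch of Framework 1 with the regularized least-squares subproblem
# solved in closed form. Model, data, and constants are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, m, c = 4, 5, 3                  # samples, input dim, output dim
d = c * m                          # weight dim: w = vec(W), W in R^{c x m}
X = rng.normal(size=(n, m))
Y = np.eye(c)[rng.integers(c, size=n)]

def h(w, i):                       # toy linear classifier h(w; i) = W x_i
    return w.reshape(c, m) @ X[i]

def jac_h(w, i):                   # Jacobian H_i in R^{c x d} (here constant in w)
    return np.kron(np.eye(c), X[i])

def grad_phi(z, i):                # squared loss: gradient of 0.5 * ||z - y_i||^2
    return z - Y[i]

eps = 1e-2                         # tolerance epsilon, also the regularizer scale
T = int(1.0 / eps)                 # T = beta / eps with beta = 1
eta = 0.5 * np.sqrt(eps)           # step size eta^(t) = D * sqrt(eps), D = 0.5 here
alpha = 0.1                        # alpha_i^(t); kept constant for simplicity,
                                   # the analysis uses alpha_i^(t) = (1+eps)^t alpha^(0)

w = rng.normal(size=d) * 0.1
for t in range(T):
    A = np.zeros((d, d))
    b = np.zeros(d)
    for i in range(n):
        Hi = jac_h(w, i)
        gi = grad_phi(h(w, i), i)
        A += (eta ** 2 / n) * Hi.T @ Hi
        b += (alpha * eta / n) * Hi.T @ gi
    v = np.linalg.solve(A + eps ** 2 * np.eye(d), b)   # closed-form regularized direction
    w = w - eta * v                                    # w^(t+1) = w^(t) - eta^(t) v^(t)

loss = np.mean([0.5 * np.sum((h(w, i) - Y[i]) ** 2) for i in range(n)])
print(f"final average squared loss: {loss:.4f}")
```

In a real over-parameterized network the Jacobians would come from automatic differentiation, and the ε²-regularization keeps the subproblem well-posed even when the stacked linear system is rank-deficient.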
5 NEW ALGORITHMS AND CONVERGENCE RESULTS 5.1 APPROXIMATING THE SOLUTION USING REGULARIZER Since problem (10) is convex and quadratic, we consider the following regularized problem: min v∈Rd { Ψ(v) = 1 2 1 n n∑ i=1 ‖η(t)H(t)i v − α (t) i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v‖2 } , (15) for some small ε > 0 and t ≥ 0. It is widely known that problem (15) is strongly convex, and has a unique minimizer v(t)∗ reg. The global minimizer satisfies∇vΨ(v(t)∗ reg) = 0. We have ∇vΨ(v) = 1 n n∑ i=1 [η(t)H (t) i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i))] + ε2 · v = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I ) v − ( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . Therefore, v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) . (16) If ε2 is small enough, then v(t)∗ reg is a close approximation of the solution v (t) ∗ for problem (10). Our first algorithm updates Framework 1 based on this approximation. Algorithm 1 Solve for the exact solution of the regularized problem Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Update the search direction v(t) as the solution v(t)∗ reg of problem in (15): v(t) = v (t) ∗ reg = ( 1 n n∑ i=1 η(t)H (t) i >H (t) i η (t) + ε2I )−1( 1 n n∑ i=1 α (t) i η (t)H (t) i >∇zφi(h(w(t); i)) ) Update w(t+1) = w(t) − η(t)v(t) end for The following Lemma shows the relation between the regularized solution v(t)∗ reg and the optimal solution of the original convex problem v̂(t)∗ε . Lemma 2. For given ε > 0, suppose that Assumption 3 holds for bound V > 0. Then, for iteration 0 ≤ t < T , the optimal solution v(t)∗ reg of problem (15) satisfies ‖v(t)∗ reg‖2 ≤ 2 + V and 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 ≤ (1 + V 2 )ε2. (17) Based on Lemma 2, we guarantee the global convergence of Algorithm 1 and prove our first theorem. Since it is currently expensive to solve for the exact solution of problem (15), our algorithm serves as a theoretical method to obtain the global convergence for the finite-sum minimization. Theorem 1. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 13 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 3α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(3ε+ 2) 8α(1− 3α) [ c(4 + (V + 2)GD2)2 + 8 + 4V ] · ε. (18) We note that β is a constant for the purpose of choosing the number of iterations T . The analysis can be simplified by choosing β = 1 with T = 1ε . Notice that the common convergence criteria for finding a stationary point for non-convex problems is 1T ∑T t=1 ||∇F (wt)||2 ≤ O(ε). This criteria has been widely used in the existing literature for non-convex optimization problems. Our convergence criteria 1T ∑T t=1[F (wt) − F∗] ≤ O(ε) is slightly different, in order to find a global solution for non-convex problems. Our proof for Theorem 1 is novel and insightful. It is originally motivated by the Gradient Descent update (7) and the convexity of the loss functions φi. 
For this reason it may not be a surprise that Algorithm 1 can find an ε-global solution after O ( 1 ε ) iterations. However, computing the exact solution in every iteration might be extremely challenging, especially when the number of samples n is large. Therefore, we present a different approach to this problem in the following section. 5.2 APPROXIMATION USING GRADIENT DESCENT In this section, we use Gradient Descent (GD) algorithm to solve the strongly convex problem (15). It is well-known that if ψ(x) − µ2 ‖x‖ 2 is convex for ∀x ∈ Rc, then ψ(x) is µ-strongly convex (see e.g. Nesterov (2004)). Hence Ψ(·) is ε2-strongly convex. For each iteration t, we use GD to find a search direction v(t) which is sufficiently close to the optimal solution v(t)∗ reg in that ‖v(t) − v(t)∗ reg‖ ≤ ε. (19) Our Algorithm 2 is described as follows. Algorithm 2 Solve the regularized problem using Gradient Descent Initialization: Choose an initial point w(0) ∈ Rd, tolerance ε > 0; for t = 0, 1, · · · , T − 1 do Use Gradient Descent algorithm to solve Problem (15) and find a solution v(t) that satisfies ‖v(t) − v(t)∗ reg‖ ≤ ε Update w(t+1) = w(t) − η(t)v(t) end for Since Algorithm 2 can only approximate a solution within some ε-preciseness, we need a supplemental assumption for the analysis of our next Theorem 2: Assumption 4. Let H(t)i be the Jacobian matrix defined in Lemma 1. We assume that there exists some constant H > 0 such that, for i ∈ [n], ε > 0, and 0 ≤ t < T as in Algorithm 2, ‖H(t)i ‖ ≤ H√ ε . (20) Assumption 4 requires a mild condition on the bounded Jacobian of h(w; i), and the upper bound may depend on ε. This flexibility allows us to accommodate a good dependence of ε for the theoretical analysis. We are now ready to present our convergence theorem for Algorithm 2. Theorem 2. Let w(t) be generated by Algorithm 2 where v(t) satisfies (19). We execute Algorithm 2 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0, Assumption 3 holds for V > 0 and Assumption 4 holds for H > 0. We set the step size equal to η(t) = D √ ε for some D > 0 and choose a learning rate α (t) i = (1 + ε)α (t−1) i = (1 + ε) tα (0) i . Based on β, we define α (0) i = α eβLφ with α ∈ (0, 14 ). Let F∗ be the global minimizer of F , and h∗i = arg minz∈Rc φi(z), i ∈ [n]. Then 1 T T−1∑ t=0 [F (w(t))− F∗] ≤ eβLφ(1 + ε) 2(1− 4α)αβ · 1 n n∑ i=1 ‖h(w(0); i)− h∗i ‖2 · ε + eβLφ(4ε+ 3) 2α(1− 4α) [ D2H2 + c(2 + (V + ε2 + 2)GD2)2 + 2 + V ] · ε. Theorem 2 implies Corollary 2 which provides the computational complexity for Algorithm 2. Note that for (Stochastic) Gradient Descent, we derive the complexity in terms of component gradient calculations for the finite-sum problem (1). As an alternative, for Algorithm 2 we compare the number of component gradients in problem (15). Such individual gradient has the following form: ∇vψi(v) = η(t)H(t)i >H (t) i η (t)v − α(t)i η (t)H (t) i >∇zφi(h(w(t); i)). In machine learning applications, the gradient of f(·; i) is calculated using automatic differentiation (i.e. backpropagation). Since f(·; i) is the composition of the network structure h(·; i) and loss function φi(·), this process also computes the Jacobian matrix H(t)i and the gradient∇zφi(h(w(t); i)) at a specific weight w(t). Since matrix-vector multiplication computation is not expensive, the cost for computing the component gradient of problem (15) is similar to problem (1). Corollary 2. 
Suppose that the conditions in Theorem 2 hold with η(t) = D √ ε̂√ N for some D > 0 and 0 < ε̂ ≤ N (that is, we set ε = ε̂/N ), where N = eβLφ ∑n i=1 ‖h(w (0);i)−h∗i ‖ 2 n(1−4α)αβ + 7eβLφ[D2H2+c(2+(V+3)GD2)2+2+V ] 2α(1−4α) . Then, the total complexity to guarantee min0≤t≤T−1[F (w(t))−F∗] ≤ 1T ∑T−1 t=0 [F (w (t))−F∗] ≤ ε̂ is O ( nN 3β ε̂3 (D 2H2 + (ε̂2/N)) log(Nε̂ ) ) . Remark 2. Corollary 2 shows that O (1/ε̂) outer loop iterations are needed in order to reach an ε̂-global solution, and it proves that each iteration needs the equivalent of O ( n ε̂2 log( 1 ε̂ ) ) gradient computations for computing an approximate solution. In total, Algorithm 2 has total complexity O ( n ε̂3 log( 1 ε̂ ) ) for finding an ε̂-global solution. For a comparison, Stochastic Gradient Descent uses a total of O( 1ε2 ) gradient computations to find a stationary point satisfying E[‖∇F (ŵ)‖2] ≤ ε for non-convex problems (Ghadimi & Lan, 2013). Gradient Descent has a better complexity in terms of ε, i.e. O(nε ) such that ‖∇F (ŵ)‖ 2 ≤ ε (Nesterov, 2004). However, both methods may not be able to reach a global solution of (1). In order to guarantee global convergence for nonconvex settings, one may resort to use Polyak-Lojasiewicz (PL) inequality (Karimi et al., 2016; Gower et al., 2021). This assumption is widely known to be strong, which implies that every stationary point is also a global minimizer. 6 FURTHER DISCUSSION AND CONCLUSIONS This paper presents an alternative composite formulation for solving the finite-sum optimization problem. Our formulation allows a new way of exploiting the structure of machine learning problems and the convexity of squared loss and softmax cross entropy loss, and leads to a novel algorithmic framework that guarantees global convergence (when the outer loss functions are convex and Lipschitz-smooth). Our analysis is general and can be applied to various different learning architectures, in particular, our analysis and assumptions match practical neural networks; in recent years, there has been a great interest in the structure of deep learning architectures for over-parameterized settings (Arora et al., 2018; Allen-Zhu et al., 2019; Nguyen & Mondelli, 2020). Algorithm 2 demonstrates a gradient method to solve the regularized problem, however, other methods can be applied to our framework (e.g. conjugate gradient descent). Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. Possible research directions include more practical algorithm designs based on our Framework 1, and different related methods to solve the regularized problem and approximate the solution. This potentially leads to a new class of efficient algorithms for machine learning problems. This paper presents a new perspective to the research community. ETHICS STATEMENT This paper does not contain ethics concerns. APPENDIX A TABLE OF NOTATIONS Notation Meaning F∗ Global minimization function of F in (1) F∗ = minw∈Rd F (w) h∗i h ∗ i = arg minz∈Rc φi(z), i ∈ [n] v (t) ∗ Solution of the convex problem in (10) minv∈Rd 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 v(t) An approximation of v(t)∗ which is used as the search direction in Framework 1 v̂ (t) ∗ε A vector that satisfies 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 ≤ ε2 for some ε > 0 and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. 
v (t) ∗ reg Solution of the strongly convex problem in (15) minv∈Rd { 1 2 1 n ∑n i=1 ‖η(t)H (t) i v − α (t) i ∇zφi(h(w(t); i))‖2 + ε 2 2 ‖v‖ 2 } B USEFUL RESULTS The following lemmas provide key tools for our results. Lemma 3 (Squared loss). Let b ∈ Rc and define φ(z) = 12‖z − b‖ 2 for z ∈ Rc. Then φ is convex and Lφ-smooth with Lφ = 1. Lemma 4 (Softmax cross-entropy loss). Let index a ∈ [c] and define φ(z) = log [ c∑ k=1 exp(zk − za) ] = log [ c∑ k=1 exp(w>k z) ] , for z = (z1, . . . , zc)> ∈ Rc, wherewk = ek−ea with ei representing the i-th unit vector (containing 1 at the i-th position and 0 elsewhere). Then φ is convex and Lφ-smooth with Lφ = 1. The following lemma is a standard result in (Nesterov, 2004). Lemma 5 ((Nesterov, 2004)). If φ is Lφ-smooth and convex, then for ∀z ∈ Rc, ‖∇φ(z)‖2 ≤ 2Lφ(φ(z)− φ(z∗)), (21) where z∗ = arg minz φ(z). The following useful derivations could be used later in our theoretical analysis. Since φi is convex, by Definition 2 we have φi(h(w; i)) ≥ φi(h(w′; i)) + 〈 ∇zφi(z) ∣∣∣ z=h(w′;i) , h(w; i)− h(w′; i) 〉 . (22) If φi is convex and Lφ-smooth, then by Lemma 5∥∥∥∥∇zφi(z)∣∣∣ z=h(w;i) ∥∥∥∥2 ≤ 2Lφ [φi(h(w; i))− φi(h∗i )] , (23) where h∗i = arg minz∈Rc φi(z). We compute gradients of f(w; i) in term of φi(h(w; i)). • Gradient of softmax cross-entropy loss: ∇φi(z) ∣∣ z=h(w;i) = ( ∂φi(z) ∂z1 ∣∣∣ z=h(w;i) , . . . , ∂φi(z) ∂zc ∣∣∣ z=h(w;i) )> , where for j ∈ [c], ∂φi(z) ∂zj ∣∣∣ z=h(w;i) = exp ( [h(w;i)]j−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j 6= I(y(i)) − ∑ k 6=I(y(i)) exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) ∑c k=1 exp ( [h(w;i)]k−[h(w;i)]I(y(i)) ) , j = I(y(i)) . (24) • Gradient of squared loss: ∇φi(z) ∣∣ z=h(w;i) = h(w; i)− y(i). (25) C ADDITIONAL DISCUSSION C.1 ABOUT ASSUMPTION 2 We make a formal assumption for the case h(·; i) is closely approximated by k(·; i). Assumption 5. We assume that for all i ∈ [n] there exists some approximations k(w; i) : Rd → Rc such that |kj(w; i)− hj(w; i)| ≤ ε, ∀w ∈ Rd, i ∈ [n] and j ∈ [c], (26) where k(·; i) are twice continuously differentiable (i.e. the second-order partial derivatives of all scalars kj(·; i) are continuous for all i ∈ [n]), and that their Hessian matrices are bounded: ‖Mi,j(w)‖ = ‖Jw (∇wkj(w; i))‖ ≤ G, ∀w ∈ Rd, i ∈ [n] and j ∈ [c]. (27) Assumption 5 allows us to prove the following Lemma that bound the error in equation (9): Lemma 6. Suppose that Assumption 5 holds for the classifier h. Then for all i ∈ [n] and 0 ≤ t < T , we have: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , (28) where H(t)i is defined to be the Jacobian matrix of the approximation k(w; i) at w (t): H (t) i := Jwk(w; i)|w=w(t) = ∂k1(w;i) ∂w1 . . . ∂k1(w;i)∂wd . . . . . . . . . ∂kc(w;i) ∂w1 . . . ∂kc(w;i)∂wd ∣∣∣∣∣ w=w(t) ∈ Rc×d. (29) Additionally we have, | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. (30) Note that these result recover the case when h(·; i) is itself smooth. Hence we analyze our algorithms using the result of Lemma 6, which generalizes the result from Lemma 1. C.2 ABOUT ASSUMPTION 3 In this section, we justify the existence of the search direction in Assumption 3 (almost surely). We argue that there exists a vector v̂(t)∗ε satisfying 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2. It is sufficient to find a vector v satisfying that η(t)H (t) i v = α (t) i ∇zφi(h(w (t); i)) for every i ∈ [n]. Since the solution v is in Rd and ∇zφi(h(w(t); i)) is in Rc, this condition is equivalent to a linear system with n ·c constraints and d variables. 
LetA and b be the following stacked matrix and vector: A = H (t) 1 η (t) . . . H (t) n η(t) ∈ Rn·c×d, and b = α (t) 1 ∇zφ1(h(w(t); i)) . . . α (t) n ∇zφn(h(w(t); i)) ∈ Rn·c, then the problem reduce to finding the solution of the equation Av = b. In the over-parameterized setting where dimension d is sufficiently large (d n · c), then rank A = n · c almost surely and there exists almost surely a vector v that interpolates all the training set. To demonstrate this fact easier, we consider a simple neural network where the classifier h(w; i) is formulated as h(w; i) = W (2)>σ(W (1)>x(i)), where c = 1, W (1) ∈ Rm×l and W (2) ∈ Rl×1, w = vec({W (1),W (2)}) ∈ Rd is the vectorized weight where d = l(m+ 1) and σ is sigmoid activation function. H (t) i is defined to be the Jacobian matrix of h(w; i) at w (t): H (t) i := Jwh(w; i)|w=w(t) = [ ∂h(w;i) ∂w1 . . . ∂h(w;i)∂wd ] ∣∣∣∣∣ w=w(t) ∈ R1×d, then A = η(t) H (t) 1 . . . H (t) n = η(t) ∂h(w;1) ∂w1 . . . ∂h(w;1)∂wd . . . . . . . . . ∂h(w;n) ∂w1 . . . ∂h(w;n)∂wd ∈ Rn×d. We want to show that A has full rank, almost surely. We consider the over-parameterized setting where the last layer has at least n neuron (i.e. l = n and the simple version when c = 1. We argue that rank of matrix A is greater than or equal to rank of the submatrix B created by the weights of the last layer W (2) ∈ Rn: B = ∂h(w;1) ∂W (2) 1 . . . ∂h(w;1) ∂W (2) n . . . . . . . . . ∂h(w;n) ∂W (2) 1 . . . ∂h1(w;n) ∂W (2) n ∈ Rn×n. Note that h(·, i) is a linear function of the last weight layers (in this simple case W (2) ∈ Rn and σ(W (1)>x(i)) ∈ Rn), we can compute the partial derivatives as follows: ∂h(w; i) ∂W (2) = σ(W (1)>x(i)); i ∈ [n]. Hence B = σ(W (1)>x(1)) . . . σ(W (1)>x(n)) ∈ Rn×n. Assuming that there are no identical data, and σ is the sigmoid activation, the set of weights W (1) that make matrix B degenerate has measure zero. Hence B has full rank almost surely, and we have the same conclusion for A. Therefore we are able to prove the almost surely existence of a solution v of the linear equation Av = b for simple two layers network. Using the same argument, this result can be generalized for larger neural networks where the dimension d is sufficiently large (d nc). C.3 INITIALIZATION EXAMPLE Our Assumption 3 requires a nice dependency on the tolerance ε for the gradient matrices H(0)i and ∇zφi(h(w(0); i)). We note that at the starting point t = 0, these matrices may depend on ε due to the initialization process and the dependence of d on ε. In order to accommodate the choice of learning rate η(0) = D √ ε in our theorems, in this section we describe a network initialization that satisfies ‖H(0)i ‖ = Θ ( 1√ ε ) where the gradient norm ‖∇zφi(h(w(0); i))‖ is at most constant order with respect to ε. To simplify the problem, we only consider small-dimension data and networks without activation. About the target vector: We choose φi to be the softmax cross-entropy loss. By Lemma 7 (see below), we have that the gradient norm is upper bounded by a constant c, where c is the output dimension of the problem and is not dependent on ε. Note that when we stack all gradients for n data points, then the size of new vector is still not dependent on ε. About the network architecture: For simplicity, we consider the following classification problem where • The input data is in R2. There are only two data points {x(1), x(2)}. Input data is bounded and non-degenerate (we will clarify this property later). • The output data is (categorical) in R2: {y(1) = (1, 0), y(2) = (0, 1)}. 
We want to have an over-parameterized setting where the dimension of weight vector is at least nc = 4. We consider a simple network with two layers, no biases and no activation functions. Let the number of neurons in the hidden layer bem. The flow of this network is (in) R2 → Rm → R2 (out). First, we consider the case where m = 1. • The first layer has 2 parameters (w1, w2) and only 1 neuron that outputs z(i) = w1x (i) 1 + w2x (i) 2 (the subscript is for the coordinate of input data x (i)). • The second layer has 2 parameters (w3, w4). The final output is h(w, i) = [w3(w1x (i) 1 + w2x (i) 2 ), w4(w1x (i) 1 + w2x (i) 2 )] > ∈ R2, with w = [w1, w2, w3, w4]> ∈ R4. This network satisfies that the Hessian matrices of h(w; i) are bounded. Let Q and b be the following stacked matrix and vector: Q = [ H (0) 1 H (0) 2 ] ∈ R4×4, and b = [ ∇zφ1(h(w(0); 1)) ∇zφ2(h(w(0); 2)) ] ∈ R4, Then we have the following: Q = Q(w) = [ H (0) 1 H (0) 2 ] = ∇w[w3(w1x(1)1 + w2x (1) 2 )] ∇w[w4(w1x(1)1 + w2x (1) 2 )] ∇w[w3(w1x(2)1 + w2x (2) 2 )] ∇w[w4(w1x(2)1 + w2x (2) 2 )] = w3x (1) 1 w3x (1) 2 w1x (1) 1 + w2x (1) 2 0 w4x (1) 1 w4x (1) 2 0 w1x (1) 1 + w2x (1) 2 w3x (2) 1 w3x (2) 2 w1x (2) 1 + w2x (2) 2 0 w4x (2) 1 w4x (2) 2 0 w1x (2) 1 + w2x (2) 2 . The determinant of this matrix is a polynomial of the weight w and the input data. Under some mild non-degenerate condition of the input data, we can choose some base point w′ that made this matrix invertible (note that if this condition is not satisfied, we can rescale/add a very small noise to the data - which is the common procedure in machine learning). Hence the system Qu = b always has a solution. Now we consider the following two initializations: 1. We choose to initialize the starting point at w(0) = 1√ ε w′ and note that Q(w) is a linear function of w and Q(w′) is independent of ε. Then the norm of matrix Q(w(0)) has the same scale with 1√ ε . 2. Instead of choosing m = 1, we consider an over-parameterized network where m = 1ε (recall that m is the number of neurons in the hidden layer). The hidden layer in this case is: z = z (i) 1 = w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 . . . z (i) m = w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 . The output layer is:{ y (i) 1 = z (i) 1 w (2) 1,1 + · · ·+ z (i) m w (2) m,1 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,1 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,1 y (i) 2 = z (i) 1 w (2) 1,2 + · · ·+ z (i) m w (2) m,2 = (w (1) 1,1x (i) 1 + w (1) 2,1x (i) 2 )w (2) 1,2 + · · ·+ (w (1) 1,mx (i) 1 + w (1) 2,mx (i) 2 )w (2) m,2 with w = [w(1)1,1, . . . , w (1) 1,m, w (1) 2,1, . . . , w (1) 2,m, w (2) 1,1, w (2) 1,2, . . . , w (2) m,1, w (2) m,2] > ∈ R4m. Hence, Q(w) = w (2) 1,1x (1) 1 . . . w (2) m,1x (1) 1 w (2) 1,1x (1) 2 . . . w (2) m,1x (1) 2 z (1) 1 0 . . . z (1) m 0 w (2) 1,2x (1) 1 . . . w (2) m,2x (1) 1 w (2) 1,2x (1) 2 . . . w (2) m,2x (1) 2 0 z (1) 1 . . . 0 z (1) m w (2) 1,1x (2) 1 . . . w (2) m,1x (2) 1 w (2) 1,1x (2) 2 . . . w (2) m,1x (2) 2 z (2) 1 0 . . . z (2) m 0 w (2) 1,2x (2) 1 . . . w (2) m,2x (2) 1 w (2) 1,2x (2) 2 . . . w (2) m,2x (2) 2 0 z (2) 1 . . . 0 z (2) m . Hence, the number of (possibly) non-zero elements in each row is 3m = 3ε . For matrix A of rank r, we have ‖A‖2 ≤ ‖A‖F ≤ √ r‖A‖2. Since the rank of Q(w) is at most 4 (nc = 4, independent of ε), we only need to find the Frobenius norm of Q(w). We have ‖Q(w)‖F = √√√√ 4∑ i=1 4m∑ j=1 |qij |2. Let qmin and qmax be the element with smallest/largest magnitude of Q(w). 
Suppose that x(i) 6= (0, 0) and choose w 6= 0 such that z 6= 0, qmin > 0 and independent of ε. Hence, √ 8√ ε |qmin| ≤ ‖Q(w)‖F ≤ √ 12√ ε |qmax|. Hence, ‖Q(w)‖ = Θ ( 1√ ε ) . Therefore this simple network initialization supports the dependence on ε for our Assumption 3. We note that a similar setting is found in (Allen-Zhu et al., 2019), where the authors initialize the weights using a random Gaussian distribution with a variance depending on the dimension of the problem. In non-convex setting, they prove the convergence of SGD using the assumption that the number of neurons m depends inversely on the tolerance ε. Lemma 7. For softmax cross-entropy loss, and x = h(w; i) ∈ Rc, for ∀w ∈ Rd and i ∈ [n], we have ∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 ≤ c. (31) Proof. By (24), we have for i = 1, . . . , n, • For j 6= I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 = ( exp ( [h(w; i)]j − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1. • For j = I(y(i)):( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 = (∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) )∑c k=1 exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) )2 = ( ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ) 1 + ∑ k 6=I(y(i)) exp ( [h(w; i)]k − [h(w; i)]I(y(i)) ))2 ≤ 1 Hence, for i = 1, . . . , n,∥∥∥∥∇zφi(x)∣∣∣ x=h(w;i) ∥∥∥∥2 = c∑ j=1 ( ∂φi(x) ∂xj ∣∣∣ x=h(w;i) )2 ≤ c. This completes the proof. D PROOFS OF LEMMAS AND COROLLARY 1 PROOF OF LEMMA 1 Proof. Since h(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs hj(·; i) where j ∈ [c] and i ∈ [n]: hj(w (t+1); i) = hj(w (t) − η(t)v(t); i) = hj(w (t); i)− Jwhj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (32) where Mi,j(w̃(t)) is the Hessian matrices of hj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. This leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖ (11) ≤ 1 2 (η(t))2‖v(t)‖2G, j ∈ [c]. PROOF OF LEMMA 2 Proof. From Assumption 3, we know that there exists v̂(t)∗ε so that 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 ≤ ε2, and ‖v̂(t)∗ε ‖2 ≤ V , for some V > 0. Hence, 1 2 1 n n∑ i=1 ‖η(t)H(t)i v̂ (t) ∗ε − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v̂(t)∗ε ‖2 ≤ ε2 + ε2 2 V = (1 + V 2 )ε2. Since v(t)∗ reg is the optimal solution of the problem in (15) for 0 ≤ t < T , we have 1 2 1 n n∑ i=1 ‖η(t)H(t)i v (t) ∗ reg − α(t)i ∇zφi(h(w (t); i))‖2 + ε 2 2 ‖v(t)∗ reg‖2 ≤ (1 + V 2 )ε2. Therefore, we have (17) and ‖v(t)∗ reg‖2 ≤ 2 + V for 0 ≤ t < T . PROOF OF LEMMA 3 Proof. 1. We want to show that for any α ∈ [0, 1] φ(αz1 + (1− α)z2) ≤ αφ(z1) + (1− α)φ(z2), ∀z1, z2 ∈ Rc, (33) in order to have the convexity of φ with respect to z (see (Nesterov, 2004)). For any α ∈ [0, 1], we have for ∀z1, z2 ∈ Rc, α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − ‖α(z1 − b) + (1− α)(z2 − b)‖2 = α‖z1 − b‖2 + (1− α)‖z2 − b‖2 − α2‖z1 − b‖2 − (1− α)2‖z2 − b‖2 − 2α(1− α)〈z1 − b, z2 − b〉 ≥ α(1− α)‖z1 − b‖2 + (1− α)α‖z2 − b‖2 − 2α(1− α)‖z1 − b‖ · ‖z2 − b‖ = α(1− α) (‖z1 − b‖ − ‖z2 − b‖)2 ≥ 0, where the first inequality follows according to Cauchy-Schwarz inequality 〈a, b〉 ≤ ‖a‖·‖b‖. Hence, 1 2 ‖αz1 + (1− α)z2 − b‖2 ≤ α 2 ‖z1 − b‖2 + (1− α) 2 ‖z2 − b‖2. 
Therefore, (33) implies the convexity of φ with respect to z. 2. We want to show that ∃Lφ > 0 such that ‖∇φ(z1)−∇φ(z2)‖ ≤ Lφ‖z1 − z2‖, ∀z1, z2 ∈ Rc. (34) Notice that∇φ(z) = z − b, then clearly ∀z1, z2 ∈ Rc, ‖∇φ(z1)−∇φ(z2)‖ = ‖z1 − z2‖. Therefore, (34) implies the Lφ-smoothness of φ with respect to z with Lφ = 1. PROOF OF LEMMA 4 Proof. 1. For ∀z1, z2 ∈ Rc and 1 ≤ k ≤ c, denote uk,1 = exp(w>k z1) and uk,2 = exp(w>k z2) and using Holder inequality c∑ k=1 ak · bk ≤ ( c∑ k=1 |ak|p ) 1 p ( c∑ k=1 |bk|q ) 1 q , where 1 p + 1 q = 1, (35) we have φ(αz1 + (1− α)z2) = log [ c∑ k=1 exp(w>k (αz1 + (1− α)z2)) ] = log [ c∑ k=1 uαk,1 · u (1−α) k,2 ] (35) ≤ log ( c∑ k=1 u α· 1α k,1 )α( c∑ k=1 u (1−α)· 1 (1−α) k,2 )1−α = α log [ c∑ k=1 exp(w>k z1) ] + (1− α) log [ c∑ k=1 exp(w>k z2) ] = αφ(z1) + (1− α)φ(z2), where the first inequality since log(x) is an increasing function for ∀x > 0 and exp(v) > 0 for ∀v ∈ R. Therefore, (33) implies the convexity of φ with respect to z. 2. Note that ‖∇2φ(z)‖ ≤ Lφ if and only if φ(z) is Lφ-smooth (see (Nesterov, 2004)). First, we compute gradient of φ(z): • For i 6= a: ∂φ(z) ∂zi = exp(zi − za)∑c k=1 exp(zk − za) . • For i = a: ∂φ(z) ∂zi = − ∑ k 6=a exp(zk − za)∑c k=1 exp(zk − za) = − ∑c k=1 exp(zk − za) + 1∑c k=1 exp(zk − za) = −1 + 1∑c k=1 exp(zk − za) = −1 + exp(zi − za)∑c k=1 exp(zk − za) . We then calculate ∂ 2φ(z) ∂zj∂zi = ∂∂zj ( ∂φ(z) ∂zi ) • For i = j: ∂2φ(z) ∂zj∂zi = exp(zi − za)[ ∑c k=1 exp(zk − za)]− exp(zi − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 = exp(zi − za)[ ∑c k=1 exp(zk − za)− exp(zi − za)] [ ∑c k=1 exp(zk − za)]2 . • For i 6= j: ∂2φ(z) ∂zj∂zi = − exp(zj − za) exp(zi − za) [ ∑c k=1 exp(zk − za)]2 . Denote that yi = exp(zi − za) ≥ 0, i ∈ [c], we have: • For i = j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = ∣∣∣∣yi(∑ck=1 yk − yi)(∑ck=1 yk)2 ∣∣∣∣ . • For i 6= j: ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣ = |yiyj |(∑ck=1 yk)2 . Recall that for matrix A = (aij) ∈ Rc×c: ‖A‖2 ≤ ‖A‖2F = ∑c i=1 ∑c j=1 |aij |2. We have: c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1(∑ck=1 yk)4 y2i ( c∑ k=1 yk − yi)2 + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 y2i ( c∑ k=1 yk) 2 − 2y2i c∑ k=1 yk.yi + y 4 i + ∑ j 6=i (yiyj) 2 = 1 ( ∑c k=1 yk) 4 [ y2i ( c∑ k=1 yk) 2 − 2y3i c∑ k=1 yk + y 2 i c∑ k=1 y2k ] Therefore, ‖∇2φ(z)‖2 ≤ c∑ i=1 c∑ j=1 ∣∣∣∣∂2φ(z)∂zj∂zi ∣∣∣∣2 ≤ 1 ( ∑c k=1 yk) 4 [ ( c∑ i=1 y2i )( c∑ k=1 yk) 2 − 2( c∑ i=1 y3i )( c∑ k=1 yk) + ( c∑ i=1 y2i )( c∑ k=1 y2k) ] ≤ ( ∑c i=1 y 2 i )( ∑c k=1 yk) 2 ( ∑c k=1 yk) 4 ≤ ( ∑c k=1 yk) 4 ( ∑c k=1 yk) 4 = 1, where the last inequality holds since ( c∑ i=1 y2i )( c∑ k=1 y2k) ≤ ( c∑ i=1 y3i )( c∑ k=1 yk)⇔ ( c∑ k=1 y2k) ≤ √√√√( c∑ i=1 y3i )( c∑ k=1 yk), which follows by the application of Holder inequality (35) with p = 2, q = 2, ak = y 3/2 k , and bk = y 1/2 k (Note that yk ≥ 0, k ∈ [c]). Hence, ‖∇2φ(z)‖ ≤ Lφ with Lφ = 1 which is equivalent to Lφ-smoothness of φ. PROOF OF LEMMA 6 Proof. Since k(·; i) are twice continuously differentiable for all i ∈ [n], we have the following Taylor approximation for each component outputs kj(·; i) where j ∈ [c] and i ∈ [n]: kj(w (t+1); i) = kj(w (t) − η(t)v(t); i) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)), (36) where Mi,j(w̃(t)) is the Hessian matrices of kj(·; i)at w̃(t) and w̃(t) = αw(t) + (1 − α)w(t+1) for some α ∈ [0, 1]. 
Shifting this back to the original function hj(·; i) we have: hj(w (t+1); i) = kj(w (t+1); i) + (hj(w (t+1); i)− kj(w(t+1); i)) (36) = kj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)), = hj(w (t); i)− Jwkj(w; i)|w=w(t)η(t)v(t) + 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), which leads to our desired statement: h(w(t+1); i) = h(w(t) − η(t)v(t); i) = h(w(t); i)− η(t)H(t)i v (t) + (t) i , where (t) i,j = 1 2 (η(t)v(t))>Mi,j(w̃ (t))(η(t)v(t)) + (hj(w (t+1); i)− kj(w(t+1); i)) + (kj(w(t); i)− hj(w(t); i)), j ∈ [c], Hence we get the final bound: | (t)i,j | ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣ + |hj(w(t+1); i)− kj(w(t+1); i)|+ |kj(w(t); i)− hj(w(t); i)| (26) ≤ 1 2 ∣∣∣(η(t)v(t))>Mi,j(w̃(t))(η(t)v(t))∣∣∣+ 2ε, ≤ 1 2 (η(t))2‖v(t)‖2 · ‖Mi,j(w̃(t))‖+ 2ε (11) ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε, j ∈ [c]. PROOF OF COROLLARY 1 Proof. The proof of this corollary follows directly by the applications of Lemmas 3 and 4. E TECHNICAL PROOFS FOR THEOREM 1 Lemma 8. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0, and v(t) = v (t) ∗ reg. Consider η(t) = D √ ε for some D > 0 and ε > 0. For i ∈ [n] and 0 ≤ t < T , we have ‖ (t)i ‖ 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. (37) Proof. From (14), for i ∈ [n], j ∈ [c], and for 0 ≤ t < T , by Lemma 1 and Lemma 6 we have | (t)i,j | ≤ 1 2 (η(t))2‖v(t)‖2G+ 2ε ≤ 1 2 (V + 2)GD2ε+ 2ε = 1 2 ε(4 + (V + 2)GD2), where the last inequality follows by the fact ‖v(t)‖2 = ‖v(t)∗ reg‖2 ≤ 2 + V of Lemma 2 and η(t) = D √ ε. Hence, ‖ (t)i ‖ 2 = c∑ j=1 | (t)i,j | 2 ≤ 1 4 c(4 + (V + 2)GD2)2ε2. Lemma 9. Let w(t) be generated by Algorithm 1 where we use the closed form solution for the search direction. We execute Algorithm 1 for T = βε outer loops for some constant β > 0. We assume Assumption 1 holds. Suppose that Assumption 2 holds for G > 0 and Assumption 3 holds for V > 0. We set the step size equal to η(t) = D
1. What is the main contribution of the paper regarding optimization methods for nonconvex finite-sum problems?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of mathematical errors and convergence analysis?
3. Do you have any questions or concerns regarding the proof of Theorems 1-2, Lemmas 1-2, Remark 1, and the use of "apparently" statements?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper presents a new optimization method for finding global minima of nonconvex finite-sum problems. In particular, the summands are functions of the form φ_i ∘ h, where φ_i is convex and Lipschitz smooth while h is nonconvex. Each iteration of the method consists of solving an auxiliary regularized least squares (RLS) problem, followed by a gradient step. Additional analysis is given for when the RLS problem is solved inexactly. Finally, a claimed Õ(ε^{-3}) complexity is established under strong boundedness assumptions on various solution sets.
Review
Strengths: Overall, the paper is well-written, the problem is well-motivated, and the background material is sufficient.
Weaknesses: Unfortunately, I will have to recommend rejecting this paper due to a critical mathematical error, namely, the constant V used in the proof of Theorems 1-2 does not have the proper dependence on the tolerance ε. To elaborate, notice that the stepsize η^{(t)} in Theorems 1-2 is Θ(ε) and that Assumption 3 (essentially) states that V is a scalar where ‖v^{(t)}‖ ≤ V and ‖η^{(t)} Q^{(t)} v^{(t)} − b^{(t)}‖ = O(ε) for t ≥ 1, for matrices Q^{(t)} and vectors v^{(t)} and b^{(t)}. Even under the generous assumption that Q^{(t)} and b^{(t)} are bounded, the assumption that η^{(t)} = Θ(ε) must imply that, at the very least, V = Ω(1/ε). It then follows that the right-hand side of the bound in equation (18), and similarly in Theorem 2, is no longer Θ(ε) but rather Ω(1) due to the V²ε term. As a consequence, the convergence analysis in the paper is no longer valid, as it can no longer be shown that min_{s≤T} [F(w^{(s)}) − F_*] = Θ(ε).
Besides the above major issue, I have a few minor issues with some of the other material in the paper:
[p. 4-5] Lemmas 1-2 are classical results and should be replaced by citations to avoid giving the impression that these are new contributions.
[p. 5, 6, 14] The use of "apparently" in several important statements in this paper implies that these statements have some degree of ambiguity to them. This kind of wording should be removed if the authors are making a formal argument, e.g. Appendix C.2, or the authors should be explicit about their lack of knowledge.
[p. 6] What is the definition of "smooth" in Remark 1? If it means continuously differentiable, then clearly ‖J_w(∇_w h_j(w^{(t)}; i))‖ can be unbounded even if w^{(t)} is bounded, e.g., consider the univariate function h_j(w; i) = w^{3/2} on [0, 1].
Minor typos:
[p. 3] ... this paper focuses on deep neural networks ...
[p. 9] ... for non-convex problems (Ghadimi & Lan, 2013) ...
[p. 14] ... this result can be generalized for larger ...
EDIT: The authors have uploaded a revised version of their manuscript that has addressed my main concerns. The only issues that prevent me from assigning a higher score to the paper are: (i) a lack of numerical experiments and (ii) the use of non-standard assumptions (in particular Assumption 3) that are difficult to verify in practice.
ICLR
Title
Effective Offline Reinforcement Learning via Conservative State Value Estimation
Abstract
Offline RL seeks to learn effective policies solely from historical data, which are expected to perform well in the online environment. However, it faces a major challenge of value over-estimation introduced by the distributional drift between the dataset and the current learned policy, leading to learning failure in practice. The common approach is adding a penalty term to the reward or value estimation in the Bellman iterations, which has given rise to a number of successful algorithms such as CQL. Meanwhile, to avoid extrapolation on unseen states and actions, existing methods focus on conservative Q-function estimation. In this paper, we propose CSVE, a new approach that learns a conservative V-function by directly imposing a penalty on out-of-distribution states. We prove that for the evaluated policy, our conservative state value estimation satisfies: (1) over the state distribution from which penalized states are sampled, it lower bounds the true values in expectation, and (2) over the marginal state distribution of the data, it is no more than the true values in expectation plus a constant determined by the sampling error. Further, we develop a practical actor-critic algorithm in which the critic does the conservative value estimation by additionally sampling and penalizing the states around the dataset, and the actor applies advantage weighted updates to improve the policy. We evaluate on classic continuous control tasks of D4RL, showing that our method performs better than conservative Q-function learning methods (e.g., CQL) and is strongly competitive among recent SOTA methods.
1 INTRODUCTION
Reinforcement Learning (RL), which learns to act by interacting with the environment, has achieved remarkable success in various tasks. However, in most real applications, it is impossible to learn online from scratch as exploration is often risky and unsafe.
Instead, offline RL((Fujimoto et al., 2019; Lange et al., 2012)) avoids this problem by learning the policy solely from historical data. However, the naive approach, which directly uses online RL algorithms to learn from a static dataset, suffers from the problems of value over-estimation and policy extrapolation on OOD (out-of-distribution) states or actions. Recently, conservative value estimation, being conservative on states and actions where there are no enough samples, has been put forward as a principle to effectively solve offline RL ((Shi et al., 2022; Kumar et al., 2020; Buckman et al., 2020). Prior methods, e.g., Conservative Q-Learning (CQL Kumar et al. (2020)), avoid the value over-estimation problem by systematically underestimating the Q values of OOD actions on the states in the dataset. In practice, it is often too pessimistic and thus leads to overly conservative algorithms. COMBO (Yu et al., 2021) leverages a learnt dynamic model to augment data in an interpolation way, and then learn a Q function that is less conservative than CQL and derives a better policy in potential. In this paper, we propose CSVE(Conservative State Value Estimation), a new offline RL approach. Unlike the above traditional methods that estimate conservative values by penalizing Q-function on OOD states or actions, CSVE directly penalizing the V-function on OOD states. We prove in theory that CSVE has tighter bounds on true state values than CQL, and same bounds as COMBO but under more general discounted state distributions which leads to more space for algorithm design. Our main contributions are as follows. • The conservative state value estimation with related theoretical analysis. We prove that it lower bounds the real state values in expectation over any state distribution that is used to sample OOD states, and is up-bounded by the real values in expectation over the marginal state distribution of the dataset plus a constant term depending on only sampling errors. Compared to prior work, it has several advantages to derive a better policy in potential. • A practical Actor-Critic implementation. It approximately estimates the conservative state values in the offline context and improves the policy via advantage weighting updates. In particular, we use a dynamics model to generalize over in-distribution space and sample OOD states that are directly reachable from the dataset. • Experimental evaluation on continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in D4RL (Fu et al., 2020) benchmarks, showing that CSVE performs better than prior methods based on conservative Q-value estimation, and is strongly competitive among main SOTA offline RL algorithms. 2 PRELIMINARIES Offline Reinforcement Learning. Consider the Markov Decision Process M := (S,A, P, r, ρ, γ), which consists of the state space S, the action spaceA, the transition model P : S×A → ∆(S), the reward function r : S × A → R, the initial state distribution ρ and the discount factor γ ∈ (0, 1]. A stochastic policy π : S → ∆(A) takes an action in probability given the current state. A transition is the tuple (st, at, rt, st+1) where at ∼ π(·|st), st+1 ∼ P (·|st, at) and rt = r(st, at). We assume that the reward values satisfy |r(s, a)| ≤ Rmax,∀s, a. A trajectory under π is the random sequence τ = (s0, a0, r0, s1, a1, r1, . . . , sT ) which consists of continuous transitions starting from s0 ∼ ρ. 
The standard RL is to learn a policy π ∈ Π that maximize the future cumulative rewards Jπ(M) = EM,π[ ∑∞ t=0 γ trt] via active interaction with the environment M . At any time t, for the policy π, the value function of state is defined as V π(s) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s], and the Q value function is Qπ(s, a) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s, at = a]. The Bellman operator is a function projection: BπQ(s, a) := r(s, a) + γEs′∼P (·|s,a),a′∼π(·|s′)[Q(s′, a′)], or BπV (s) := Ea∼π(·|s)[r(s, a) + γEs′∼P (·|s,a)[V (s′)]], which leads to iterative value updates. Bellman consistency implies that V π(s) = BπV π(s),∀s and Qπ(s) = BπQπ(s, a),∀s, a. In practice with function approximation, we use the empirical Bellman operator B̂π where the former expectations are estimated with data. The offline RL is to learn the policy π from a static dataset D = {(s, a, r, s′)} consisting of transitions collected by any behaviour policy, aiming to behave well in the online environment. Note that, unlike the standard online RL, offline RL cannot interact with the environment during learning. Conservative Value Estimation. One main challenge in offline RL is the over-estimation of values introduced by extrapolation on unseen states and actions, which may make the learned policy collapse. To address this issue, conservatism or pessimism are used in value estimation, e.g. CQL learns a conservative Q-value function by penalizing the value of unseen actions on states: Q̂k+1 ← argmin Q α (Es∼D,a∼µ(a|s)[Q(s, a)]− Es∼D,a∼π̂β(a|s)[Q(s, a)]) + 1 2 Es,a,s′∼D[(Q(s, a)− β̂πQ̂k(s, a))2] (1) where π̂β and π are the behaviour policy and learnt policy separately, µ is any arbitrary policy different from π̂β , and α the factor for trade-off of conservatism. Constrained Policy Optimization. To address the issues of distribution drift between learning policy and behaviour policy, one approach is to constrain the learning policy close to the behaviour policy (Bai et al., 2021; Wu et al., 2019; Nair et al., 2020; Levine et al., 2020; Fujimoto et al., 2019). Here we take Advantage Weighted Regression(Peng et al. (2019b); Nair et al. (2020)) which adopts an implicit KL divergence to constrain the distance of policies as example: πk+1 ← argmax π Es,a∼D [ log π(a|s) 1 Z(s) exp ( 1 λ Aπ k (s, a) )] (2) where Aπ k is the advantage of policy πk, and Z the normalization constant for s. Model-based Offline RL. In RL, the model is an approximation of the MDP M . We denote a model as M̂ := (S,A, P̂ , r̂, ρ, γ), where P̂ and r̂ are approximations of P and r respectively. In the setting of offline RL, the model is used to roll out and augment data (Yu et al., 2020; 2021) or act as a surrogate of real environment to interact with agent (Kidambi et al., 2020). In this paper, we use model to sample the next states that are approximately reachable from the dataset. 3 CONSERVATIVE STATE VALUE ESTIMATION In the offline setting, the value overestimation is a major problem resulting in failure of learning a reasonable policy (Levine et al., 2020; Fujimoto et al., 2019). In contrast to prior works(Kumar et al., 2020; Yu et al., 2021) that get conservative value estimation via penalizing Q function for OOD state-action pairs , we directly penalize V function for OOD states. Our approach provides several novel theoretic results that allow better trade-off of conservative value estimation and policy improvement. All proofs of our theorems can be found in Appendix A. 
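Since the practical algorithm developed in Section 4 relies on the advantage-weighted update of Eq. 2, the following is a hedged PyTorch sketch of that update. The network architecture, the temperature λ, the advantage estimates, and the weight clipping are illustrative assumptions; as is common, the per-state normalizer Z(s) is dropped.

```python
# Sketch of an advantage-weighted regression (AWR) policy step in the spirit of Eq. 2.
# Dataset actions are re-weighted by exp(A / lambda), keeping the learned policy
# close to the behaviour policy. All sizes and constants are illustrative.
import torch
import torch.nn as nn

state_dim, action_dim, lam = 17, 6, 1.0

policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                       nn.Linear(256, 2 * action_dim))    # outputs mean and log-std
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def awr_update(states, actions, advantages):
    mean, log_std = policy(states).chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
    log_prob = dist.log_prob(actions).sum(dim=-1)
    # exp(A / lambda), clipped for numerical stability; Z(s) dropped as usual
    weights = torch.clamp((advantages / lam).exp(), max=100.0).detach()
    loss = -(weights * log_prob).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy batch standing in for dataset transitions and advantage estimates
s = torch.randn(128, state_dim)
a = torch.randn(128, action_dim)
adv = torch.randn(128)
print(awr_update(s, a, adv))
```

Because the regression only re-weights the log-likelihoods of dataset actions, the resulting policy keeps its probability mass on actions supported by the behaviour policy.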
3.1 CONSERVATIVE OFF-POLICY EVALUATION Our approach is an alternative approach to CQL(Kumar et al., 2020). Instead of learning a conservative Q function, we aim to conservatively estimate the value V π(s) of a target policy π given a dataset D to avoid overestimation of out-of-distribution states. To achieve this, we penalize the V-values evaluated on states that is more likely to be out-of-distribution and pushing up the V-values on states that is in the distribution of the dataset, which is achieved through the following iteration: V̂ k+1 ← argmin V 1 2 Es∼du(s)[(B̂πV̂ k(s)− V (s))2] + α(Es′∼d(s)V (s′)− Es∼du(s)V (s)) (3) where du(s) is the discounted state distribution of D, d(s) is any state distribution, and B̂π is the empirical Bellman operator (see appendix for more details). Considering the setting without function approximation, by setting the derivative of Eq. 3 as zero, the V function found by approximate dynamic programming in iteration k can be obtained: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1], ∀s, k. (4) Denote the function projection on V̂ k in Eq. 4 as T π . We have Lemma 1, and thus V̂ k converges to a unique fixed point. Lemma 1. For any d with supp d ⊆ supp du, T π is a γ-contraction in L∞ norm. Theorem 1. For any d with supp d ⊆ supp du (d ̸= du), with a sufficiently large α (i.e., α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| /Es∼d(s)[ d(s)du(s) − 1])), the expected value of the estimation V̂ π(s) under d(s) is the lower bound of the true value, that is: Es∼d(s)[V̂ π(s)] ≤ Es∼d(s)[V π(s)]. V̂ π(s) = limk→∞ V̂ k(s) is the converged value estimation with the datasetD, and Cr,t,δRmax (1−γ) √ |D(s,a)| is related to sampling error introduced by the use empirical rather than Bellman operator. If the counts of each state-action pair is greater than zero, |D(s, a)| denotes a vector of size |S||A| containing counts for each state-action pair. If the counts of this state action pair is zero, the corresponding 1√ |D(s,a)| is large but finite value. We assume that with probability ≥ 1 − δ, the sampling error is less than Cr,t,δRmax (1−γ) √ |D(s,a)| , while Cr,t,δ is a constant (See appendix for more details.) Note that if the sampling error is ignorable, α > 0 can guarantee the lower bound results. Theorem 2. The expected value of the estimation V̂ π(s) under the state distribution of the original dataset is the lower bound of the true value plus the term of irreducible sampling error, that is: Es∼du(s)[V̂ π(s)] ≤ Es∼du(s)[V π(s)] + Es∼du(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| . , where Pπ refers to the transition matrix coupled with policy π (see Appendix for details). Now we show that, during iterations, the gap between the value of in-distribution state and out-ofdistribution state in the estimated V-function is higher than in the true V-functions. Theorem 3. At any iteration k, with a large enough α, our method expands the difference in expected V-values under the chosen state distribution and the dataset state distribution, that is: Es∼du(s)[V̂ k(s)]− Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)]− Es∼d(s)[V k(s)]. In the policy extraction part, this property enables our policy to take actions a in state s(s ∼ D) that remains in distribution instead of out of distribution, given that our estimated V-function does not overestimate the erroneous out-of-distribution states compared to the in-distribution states. Now we present four remarks to explain how the above theorems guide applications of Eq. 3 in offline RL algorithms. Remark 1. In Eq. 
Now we present four remarks to explain how the above theorems guide the application of Eq. 3 in offline RL algorithms.

Remark 1. In Eq. 3, if d = d_u, the penalty on out-of-distribution states degenerates, which means that the policy should not reach states with low support in the data, and consequently should never explore unseen actions at such states. Indeed, AWAC (Nair et al., 2020) adopts this setting. We show that with a proper choice of d different from d_u, our method performs better than AWAC in practice.

Remark 2. Theorem 2 implies that under d_u, the marginal state distribution of the data, the expected estimated value of π is either lower than the true value, or higher than the true value but within a threshold. This fact motivates our advantage weighted policy update method in Eq. 11.

Remark 3. Theorem 1 implies that under d, say the discounted state distribution of any policy, the expected estimated value of π lower bounds the true value. This fact motivates our policy improvement method of unifying the advantage weighted update with a bonus exploration term in Eq. 12.

Remark 4. Theorem 3 states that E_{s∼d(s)}[V^k(s)] − E_{s∼d(s)}[V̂^k(s)] > E_{s∼d_u(s)}[V^k(s)] − E_{s∼d_u(s)}[V̂^k(s)]. That is to say, under the distribution d, the amount of value under-estimation in expectation is larger than under the behaviour distribution d_u. With a proper choice of d, it is safe and effective to derive a new and potentially better policy with V̂^k. Our algorithm chooses the distribution of model-predicted next states as d, i.e., s′ ∼ d is implemented by s ∼ D, a ∼ π(·|s), s′ ∼ P̂(·|s, a), which effectively builds a soft 'river' of low values around the dataset.

Comparison with prior work: CQL (Eq. 1), which penalizes the Q-function of OOD actions on states in the historical data, guarantees a lower bound on the state-wise value estimation: V̂^π(s) = E_{π(a|s)}[Q̂^π(s, a)] ≤ E_{π(a|s)}[Q^π(s, a)] = V^π(s) for all s ∈ D. COMBO, which penalizes the Q-function on OOD states and actions of an interpolation of historical data and model-based roll-outs, guarantees a lower bound on the expectation of state values: E_{s∼µ0}[V̂^π(s)] ≤ E_{s∼µ0}[V^π(s)], where µ0 is the initial state distribution (Remark 1, section A.2 of COMBO, Yu et al. (2021)); this is a special case of our result in Theorem 1 when d = µ0. Although both CSVE and COMBO aim to get better performance by relaxing the conservative estimation guarantee from state-wise values to the expectation of state values, CSVE obtains the same lower bounds under a more general state distribution. This provides more flexible space for algorithm design, and it is also one main reason for penalizing V rather than Q. By controlling the distance of d to the behaviour policy's discounted state distribution d_β, CSVE has the potential for further performance improvement. Note that bounding E[V(s)], rather than the state-wise V(s), introduces a more adventurous policy, which achieves better performance on in-distribution states but riskier behaviour on OOD states. To deal with that limitation, we introduce a deep ensemble dynamics model to sample the OOD states for better estimation.

3.2 SAFE POLICY IMPROVEMENT GUARANTEES

Following prior works (Laroche et al. (2019); Kumar et al. (2020); Yu et al. (2021)), we show that our method has safe policy improvement guarantees against the data-implied behaviour policy. We first show that our method optimizes a penalized empirical RL objective:
Theorem 4. Let V̂^π be the fixed point of Equation 3. Then π*(a|s) = argmax_π V̂^π(s) is equivalently obtained by solving:

π*(a|s) ← argmax_π J(π, M̂) − α (1/(1−γ)) E_{s∼d^π_{M̂}(s)}[d(s)/d_u(s) − 1]   (5)

Building upon Theorem 4, we show that our method provides a ζ-safe policy improvement over π_β.

Theorem 5. Let π*(a|s) be the policy obtained in Equation 5. Then it is a ζ-safe policy improvement over π̂_β in the actual MDP M, i.e., J(π*, M) ≥ J(π̂_β, M) − ζ with high probability 1 − δ, where ζ is given by:

ζ = 2 (C_{r,δ}/(1−γ) + γ R_max C_{T,δ}/(1−γ)²) E_{s∼d^π_{M̂}(s)}[(√|A| / √|D(s)|) √(E_{a∼π(a|s)}[π(a|s)/π_β(a|s)])] − (J(π*, M̂) − J(π̂_β, M̂)),   (6)

where the last term satisfies J(π*, M̂) − J(π̂_β, M̂) ≥ α (1/(1−γ)) E_{s∼d^π_{M̂}(s)}[d(s)/d_u(s) − 1].

4 METHODOLOGY

In this section, we propose a practical Actor-Critic method that computes the conservative value estimation function by approximately solving Equation 3 and performs advantage weighted policy updates. It is mainly motivated by the theoretical results, as explained by the four remarks in Section 3.1. The full deep learning implementation of the algorithm is presented in Appendix B.

4.1 CONSERVATIVE VALUE ESTIMATION

Given access to a dataset D collected by some behaviour policy π_β, our aim is to estimate the value function V^π of a target policy π. As stated in Section 3, to prevent value overestimation, we instead learn a conservative value function V̂^π that lower bounds the real values of π by adding a penalty on out-of-distribution states into the flow of Bellman projections. Our method consists of the iterative updates in Equations 7-9, where Q̄^k denotes the target network of Q̂^k.

V̂^{k+1} ← argmin_V L^π_V(V; Q̄^k) = α (E_{s∼D, a∼π(·|s), s′∼P̂(s,a)}[V(s′)] − E_{s∼D}[V(s)]) + E_{s∼D}[(E_{a∼π(·|s)}[Q̄^k(s, a)] − V(s))²]   (7)

Q̂^{k+1} ← argmin_Q L^π_Q(Q; V̂^{k+1}) = E_{s,a,s′∼D}[(r(s, a) + γ V̂^{k+1}(s′) − Q(s, a))²]   (8)

Q̄^{k+1} ← ω Q̄^k + (1−ω) Q̂^{k+1}   (9)

The RHS of Eq. 7 is an approximation of Eq. 3, where the first term gives out-of-distribution states a penalty, and the second term follows the definition of V values and Q values. The RHS of Eq. 8 is the TD error estimated on transitions in the dataset D. Note that the target term here uses the sum of the immediate reward r(s, a) and the value of the next state, V̂^{k+1}(s′). In Eq. 9, the target Q values are updated with a soft interpolation factor ω ∈ (0, 1). Q̄^k changes more slowly than Q̂^k, which makes the TD error estimation in Eq. 7 more stable.

Constrained policy. Note that on the RHS of Eq. 7, we take the expectation over a ∼ π(·|s). Safely estimating the target value of V(s) by E_{a∼π(·|s)}[Q̂(s, a)] almost always requires supp(π(·|s)) ⊂ supp(π_β(·|s)). We achieve this by the advantage weighted policy update, which forces π(·|s) to have significant probability mass on actions taken by π_β in the data, as detailed in Section 4.2.

Model-based OOD state sampling. In Eq. 7, we implement the state sampling process s′ ∼ d of Eq. 3 as the flow {s ∼ D; a ∼ π(a|s); s′ ∼ P̂(s′|s, a)}, i.e., the distribution of predicted next states reached from D by following π. This is beneficial in practice. On the one hand, it is efficient to sample only the states that are approximately reachable from D by one step, rather than the whole state space. On the other hand, we only need the model to make one-step predictions, so no bootstrapped errors due to long horizons are introduced. Following previous work (Janner et al., 2019; Yu et al., 2020; 2021), we implement the probabilistic dynamics model using an ensemble of deep neural networks {p_θ1, . . . , p_θB}. Each neural network produces a Gaussian distribution over the next state and reward: P^i_θ(s_{t+1}, r | s_t, a_t) = N(µ^i_θ(s_t, a_t), σ^i_θ(s_t, a_t)).
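Putting Eqs. 7-9 together, the snippet below is a minimal PyTorch-style sketch of one critic update; the value-network, policy, and dynamics-model interfaces and the batch layout are hypothetical placeholders, and only the loss structure follows the equations.

```python
import torch
import torch.nn.functional as F

def critic_losses(batch, v_net, q_net, q_target, policy, dynamics, alpha=0.5, gamma=0.99):
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Eq. 7: conservative V loss with a penalty on model-predicted (possibly OOD) next states.
    a_pi = policy.sample(s)                                  # a ~ pi(.|s)
    s_ood = dynamics.sample(s, a_pi)                         # s' ~ P_hat(.|s, a)
    penalty = v_net(s_ood).mean() - v_net(s).mean()          # push down OOD values, push up in-data values
    with torch.no_grad():
        target_v = q_target(s, policy.sample(s))             # one-sample estimate of E_{a~pi}[Q_bar(s, a)]
    v_loss = alpha * penalty + F.mse_loss(v_net(s), target_v)

    # Eq. 8: Q regression toward r + gamma * V(s') on dataset transitions.
    with torch.no_grad():
        target_q = r + gamma * v_net(s_next)
    q_loss = F.mse_loss(q_net(s, a), target_q)
    return v_loss, q_loss

# Eq. 9: soft update of the target Q network, e.g.
#   for p, p_t in zip(q_net.parameters(), q_target.parameters()):
#       p_t.data.mul_(omega).add_((1 - omega) * p.data)
```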
Adaptive penalty factor α. The pessimism level is controlled by the parameter α ≥ 0. In practice, we set α adaptively during training as follows, similar to CQL (Kumar et al. (2020)):

max_{α≥0} [α (E_{s′∼d}[V_ψ(s′)] − E_{s∼D}[V_ψ(s)] − τ)]   (10)

where τ is a budget parameter. If the expected difference in V-values is less than τ, α will decrease; otherwise, α will increase and penalize out-of-distribution state values more aggressively.

Discussion: As stated in the former sections, our method focuses on estimating conservative state values for learning a policy. The effectiveness of adding conservatism to the V function is twofold. First, penalizing V values involves a smaller hypothesis space than penalizing Q, which reduces the computational complexity. Second, penalizing V values achieves a more relaxed lower bound than penalizing Q, by ignoring the explicit marginalization over Q values. A more relaxed lower bound leaves more opportunities for achieving a better policy.

4.2 ADVANTAGE WEIGHTED POLICY UPDATES

After learning the conservative V̂^{k+1} and Q̂^{k+1} (or V̂^π and Q̂^π when converged), we improve the policy by the following advantage weighted policy update (Nair et al., 2020):

π ← argmin_{π′} L_π(π′) = −E_{s,a∼D}[log π′(a|s) exp(β Â^{k+1}(s, a))], where Â^{k+1}(s, a) = Q̂^{k+1}(s, a) − V̂^{k+1}(s).   (11)

Eq. 11 updates the policy π by weighted maximum likelihood, computed by re-weighting state-action samples in D with the estimated advantage Â^{k+1}. As discussed in AWAC (Nair et al., 2020), this method avoids explicitly estimating the behaviour policy and the resulting sampling errors, which is an important issue in the offline RL setting (Kumar et al., 2020).

Implicit policy constraints. We adopt the advantage weighted policy update, which imposes an implicit KL divergence constraint between π and π_β. This policy constraint is necessary to guarantee that the next state s′ in Equation 7 can be safely generated through policy π. As derived in Nair et al. (2020) (Appendix A), Eq. 11 is a parametric solution of the following problem: max_{π′} E_{a∼π′(·|s)}[Â^{k+1}(s, a)] s.t. D_KL(π′(·|s) ∥ π_β(·|s)) ≤ ϵ, ∫_a π′(a|s) da = 1. Note that D_KL(π′ ∥ π_β) is a reverse KL divergence with respect to π′, which is mode-seeking (Shlens, 2014). When treated as a Lagrangian, it forces π′ to allocate its probability mass to the maximum-likelihood support of π_β, re-weighted by the estimated advantage. In other words, wherever π_β(·|s) has no samples in the action space A, π′(·|s) has almost zero probability mass too.

Bonus of Exploration on Near States. As suggested by the remarks in Section 3.1, in practice allowing the policy to explore the predicted next states (s′ ∼ P̂(·|s, a) with s ∼ D and a ∼ π′(·|s)) leads to better test performance. With this kind of exploration, the policy is updated as follows:

π ← argmin_{π′} L+_π(π′) = L_π(π′) − λ E_{s∼D, a∼π′(s), s′∼P̂(s,a)}[r(s, a) + V̂^{k+1}(s′)]   (12)

The second term is an approximation of E_{s∼d^π(s)}[V^π(s)], while the first term approximates E_{s∼d_u(s)}[V^π(s)]. While the choice of λ is ultimately just a hyper-parameter, we balance between optimistic policy optimization (maximizing V) and the constrained policy update (the first term) by adjusting λ.
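A minimal PyTorch-style sketch of the resulting actor update (Eqs. 11-12) and the dual update of the adaptive penalty factor (Eq. 10) follows; the policy, value-network, and dynamics-model interfaces are hypothetical placeholders, and only the loss structure mirrors the equations.

```python
import torch

def policy_loss(batch, policy, v_net, q_net, dynamics, beta=3.0, lam=0.5):
    s, a = batch["s"], batch["a"]

    # Eq. 11: advantage weighted maximum likelihood on dataset actions.
    with torch.no_grad():
        adv = q_net(s, a) - v_net(s)                        # A(s, a) = Q(s, a) - V(s)
        weight = torch.exp(beta * adv).clamp(max=100.0)     # clipping is a common stabilisation trick
    awr_term = -(policy.log_prob(s, a) * weight).mean()

    # Eq. 12: bonus for exploring model-predicted next states with high conservative value.
    a_pi = policy.rsample(s)                                # reparameterised a ~ pi(.|s)
    s_next, r_hat = dynamics.sample(s, a_pi)                # hypothetical one-step model rollout
    bonus = (r_hat + v_net(s_next)).mean()
    return awr_term - lam * bonus

def update_alpha(alpha, v_gap, tau=10.0, lr=1e-3):
    # Eq. 10: projected dual ascent on alpha, where v_gap is the scalar
    # E_{s'~d}[V(s')] - E_{s~D}[V(s)] estimated on the current batch.
    return max(0.0, alpha + lr * (v_gap - tau))             # alpha grows when the gap exceeds the budget tau
```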
5 EXPERIMENTS

The primary goal of this section is to investigate whether the proposed tighter conservative value estimation leads to performance improvement. Besides, we would like to ascertain when further exploration has benefits and how well CSVE performs compared with SOTA algorithms. We evaluate our method on the classic continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in the standard D4RL (Fu et al. (2020)) benchmark. The Gym control tasks include HalfCheetah, Hopper and Walker2D, each with 5 datasets collected by following different types of policies (random, medium, medium-replay, medium-expert, and expert). The Adroit tasks include Pen, Hammer, Door and Relocate, each with 3 datasets collected by different policies (human, cloned, and expert). Our method, namely CSVE, is compared against the baselines CQL (Kumar et al., 2020), COMBO (Yu et al., 2021), AWAC (Nair et al., 2020), PBRL (Bai et al., 2021) and other SOTA algorithms, TD3BC (Fujimoto & Gu, 2021), UWAC (Wu et al., 2021), IQL (Kostrikov et al., 2021b) and BEAR (Kumar et al., 2019), whose performance results are public or which have high-quality open implementations. CQL, which estimates conservative Q values on state-action pairs rather than states, is the most direct point of comparison to ours. COMBO also lower bounds the estimated V function. AWAC is a special case of our Eq. 3 when d = d_u. PBRL is a very strong performer in offline RL, but is quite costly in computation since it uses an ensemble of hundreds of sub-models.

5.1 OVERALL PERFORMANCE

We first test on the Gym control tasks. We train our method for 1 million steps and report the final evaluation performance. The overall results are shown in Table 1. Compared to CQL, our method has better performance on 11 of 15 tasks and similar performance on the others. In particular, our method shows a consistent advantage on the datasets generated by following random or suboptimal policies (random and medium). Compared to AWAC, our method has better performance on 9 of 15 tasks and comparable performance on the others, which demonstrates the effect of our further exploration beyond cloning the behaviour policy. In particular, our method shows an obvious advantage in extracting the best policy from data of mixed policies (medium-expert), while AWAC cannot. Compared to COMBO, our method has better performance on 6 out of 12 tasks and comparable or slightly worse performance on the others, which demonstrates the effect of our better bounds on V. In particular, our method shows an obvious advantage in extracting the best policy on the medium and medium-expert tasks. In the 9 tasks evaluated, our method gets a higher score than IQL in 7 of them, and has similar performance in the other tasks. Finally, our method performs close to PBRL, even though PBRL has orders of magnitude more model capacity and computation cost.

We now evaluate our method on the Adroit tasks. For CSVE, we report the final evaluation results after training for 0.1 million steps. The full results are reported in Table 2. Compared to IQL, our method performs better in 8 out of 12 tasks, and performs similarly in the other 4 tasks. For the expert datasets, all methods including simple BC (behaviour cloning) can perform well, among which ours is the most competitive on all four tasks. For the human and cloned datasets, almost all methods fail to learn effective policies on three of the tasks, the exception being the Pen task. For the Pen task, CSVE is the only one that succeeds in learning a good policy on the human dataset, while it can learn a medium policy on the cloned dataset, as BC and PBRL do.
5.2 SENSITIVITY TO HYPER-PARAMETERS

We analyze the hyper-parameter β, which trades off between behaviour cloning and policy optimization. For smaller values, the objective behaves similarly to behaviour cloning (the weights are close for all actions), while for larger values, it attempts to recover the maximum of the Q-function. To quantitatively analyze its effect, we test β values from {0.1, 3, 10} on the mujoco tasks with the medium-type datasets; the results are shown in Fig. 1. We can see that β has an effect on the policy performance during training. Empirically, we found that in general β = 3.0 is suitable for such medium-type datasets. In practice, by default we use β = 3.0 for the random and medium tasks and 0.1 for the medium-replay, medium-expert and expert datasets.

6 RELATED WORK

Offline RL (Fujimoto et al., 2019; Levine et al., 2020) aims to learn a reasonable policy from a static dataset collected by arbitrary policies, without further interaction with the environment. Compared to interactive RL, offline RL suffers from two critical inherent issues, i.e., the distribution drift introduced by off-policy learning and the out-of-distribution extrapolation in value estimation (Ostrovski et al., 2021; Levine et al., 2020). The common idea of offline RL algorithms is to incorporate conservatism or regularization into online RL algorithms. Here we briefly review the prior work and compare it to ours.

Conservative value estimation: Prior offline RL algorithms regularize the learning policy to stay close to the data (or to an explicitly estimated behaviour policy) and penalize exploration into the out-of-distribution region, via distribution correction estimation (Dai et al., 2020; Yang et al., 2020), policy constraints with support matching (Wu et al., 2019) and distributional matching (Fujimoto et al. (2019); Kumar et al. (2019)), policy-divergence-based penalties on Q-functions (Kostrikov et al., 2021a; Wang et al., 2020) or uncertainty-based penalties (Agarwal et al., 2020) on Q-functions, and conservative Q-function estimation (Kumar et al., 2020). Besides, model-based algorithms (Yu et al., 2020) directly estimate dynamics uncertainty and translate it into a reward penalty. Different from these prior works that impose conservatism on state-action pairs or actions, ours directly performs such conservative estimation on states and requires no explicit uncertainty quantification. With a learned conservative value estimate, an offline policy can be learned via implicit derivation from a state-action joint distribution or within a Q-learning or actor-critic framework. In this paper, our implementation adopts the method proposed in AWAC (Nair et al., 2020; Peng et al., 2019a).

Model-based algorithms: Model-based offline RL learns the dynamics model from the static dataset and uses it to quantify uncertainty (Yu et al., 2020), to augment data with roll-outs (Yu et al., 2021), or for planning (Kidambi et al., 2020; Chen et al., 2021). Such methods typically rely on wide data coverage when planning and augmenting data with roll-outs, and on low model estimation error when estimating uncertainty, which is often difficult to satisfy in reality and leads to policy instability. Instead, we use the model only to sample next-step states reachable from the data, which imposes no such strict requirements on data coverage or model bias.

Theoretical results: Our theoretical results are derived from conservative Q-value estimation (CQL) and safe policy improvement (Laroche et al., 2019).
Besides, COMBO (Yu et al., 2021) gives a result of conservative but tighter value estimation than CQL, when dataset is augmented with model-based roll-outs. Compared to our result, COMBO’s lower bounds additionally assume same initial state distribution which may not always satisfy in continuous control. 7 DISCUSSION In this paper, we propose a new approach for offline RL based on conservative value estimation on states and discussed how the theoretical results could lead to the new RL algorithms. In particular, we developed a practical actor-critic algorithm, in which the critic does conservative state value estimation by incorporating the penalty of the model predictive next-states into Bellman iterations, and the actor does the advantage weighted policy updates with a bonus of exploring states with conservative values. Experimental evaluation shows that our method performs better than alternative methods based on conservative Q-function estimation and is competitive among the SOTA methods, confirming our theoretical analysis well. Moving forward, we hope to explore the design of more powerful algorithms based on conservative state value estimation. A PROOFS We first redefine notation for clarity and then provide the proofs of the results in the main paper. Notation. Let k ∈ N denote an iteration of policy evaluation(in Section 3.2). V k denotes the true, tabular (or functional) V-function iterate in the MDP, without any correction. V̂ k denotes the approximate tabular (or functional) V-function iterate. The empirical Bellman operator can be expressed as follows: (B̂πV̂ k)(s) = Ea∼π(a|s)r̂(s, a) + γ ∑ s′ Ea∼π(a|s)P̂ (s ′|s, a)[V̂ k(s′)] (13) where r̂(s, a) is the empirical average reward obtained in the dataset when performing action a at state s . The true Bellman operator can be expressed as follows: (BπV k)(s) = Ea∼π(a|s)r(s, a) + γ ∑ s′ Ea∼π(a|s)P (s ′|s, a)[V k(s′)] (14) Now we first prove that the iteration in Eq.3 has a fixed point. Assume state value function is lower bounded, i.e., V (s) ≥ C ∀s ∈ S, then Eq.3 can always be solved with Eq.4. Thus, we only need to investigate the iteration in Eq.4. Denote the iteration as a function operator T π on V . Suppose supp d ⊆ supp du, then the operator T π is a γ-contraction in L∞ norm where γ is the discounting factor. Proof of Lemma 1: Let V and V ′ are any two state value functions with the same support, i.e., suppV = suppV ′. |(T πV − T πV ′)(s)| = ∣∣∣∣(B̂πV (s)− α[ d(s)du(s) − 1])− (B̂πV ′(s)− α[ d(s)du(s) − 1]) ∣∣∣∣ = ∣∣∣B̂πV (s)− B̂πV ′(s)∣∣∣ =|(Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V (s′)) − (Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V ′(s′))| =γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ||T πV − T πV ′||∞ =max s |(T πV − T πV ′)(s)| =max s γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ≤γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)max s′′ |V (s′′)− V ′(s′′)| =γmax s′′ |V (s′′)− V ′(s′′)| =γ||(V − V ′)||∞ We present the bound on using empirical Bellman operator compared to the true Bellman operator. Following previous work Kumar et al. (2020), we make the following assumptions that: Pπ is the transition matrix coupled with policy, specifically, PπV (s) = Ea′∼π(a′|s′),s′∼P (s′|s,a′)[V (s′)] Assumption 1. 
∀s, a ∈ M, the following relationships hold with at least (1 − δ) (δ ∈ (0, 1)) probability, |r − r(s, a)| ≤ Cr,δ√ |D(s, a)| , ||P̂ (s′|s, a)− P (s′|s, a)||1 ≤ CP,δ√ |D(s, a)| (15) Under this assumption, the absolute difference between the empirical Bellman operator and the actual one can be calculated as follows: |(B̂π)V̂ k − (Bπ)V̂ k)| = Ea∼π(a|s)|r − r(s, a) + γ ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a)− P (s′|s, a))[V̂ k(s′)]| (16) ≤ Ea∼π(a|s)|r − r(s, a)|+ γ| ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a′)− P (s′|s, a′))[V̂ k(s′)]| (17) ≤ Ea∼π(a|s) Cr,δ + γCP,δ2Rmax/(1− γ)√ |D(s, a)| (18) Thus, the estimation error due to sampling error can be bounded by a constant as a function of Cr,δ and Ct,δ . We define this constant as Cr,T,δ . Thus we obtain: ∀V, s ∈ D, |B̂πV (s)− BπV (s)| ≤ Ea∼π(a|s) Cr,t,δ (1− γ) √ |D(s, a)| (19) Next we provide an important lemma. Lemma 2. (Interpolation Lemma) For any f ∈ [0, 1], and any given distribution ρ(s), let df be an f-interpolation of ρ and D, i.e.,df (s) := fd(s) + (1 − f)ρ(s), let v(ρ, f) := Es∼ρ(s)[ρ(s)−d(s)df (s) ], then v(ρ, f) satisfies v(ρ, f) ≥ 0. The proof can be found in Yu et al. (2021). By setting f as 1, we have Es∼ρ(s)[ ρ(s)−d(s) d(s) ] > 0. Proof of Theorem 1: The V function of approximate dynamic programming in iteration k can be obtained as: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1] ∀s, k (20) The fixed point: V̂ π(s) = B̂πV̂ π(s)− α[ d(s) du(s) − 1] ≤ BπV̂ π(s) + Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α[ d(s) du(s) − 1] (21) Thus we obtain: V̂ π(s) ≤ V π(s) + (I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α(I − γPπ)−1[ d(s) du(s) − 1] (22) , where Pπ is the transition matrix coupled with the policy π and PπV (s) = Ea′∼π(a′|s′)s′∼P (s′|s,a′)[V (s ′)]. Then the expectation of V π(s) under distribution d(s) satisfies: Es∼d(s)V̂ π(s) ≤Es∼d(s)(V π(s)) + Es∼d(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| −αEs∼d(s)(I − γPπ)−1[ d(s) du(s) − 1])︸ ︷︷ ︸ >0 (23) When α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| Es∼d(s)[ d(s) du(s) −1]) , Es∼d(s)V̂ π(s) ≤ Es∼d(s)(V π(s)). Proof of Theorem 2: The expectation of V π(s) under distribution d(s) satisfies: Es∼du(s)V̂ π(s) ≤Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − αEs∼du(s)(I − γP π)−1[ d(s) du(s) − 1]) (24) Noticed that the last term:∑ s∼du(s) ( df (s) du(s) − 1) = ∑ s du(s)( df (s) du(s) − 1) = ∑ s df (s)− ∑ s du(s) = 0 (25) We obtain that: Es∼du(s)V̂ π(s) ≤ Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| (26) Proof of Theorem 3: Recall that the expression of the V-function iterate is given by: V̂ k+1(s) = Bπ k V̂ k(s)− α[ d(s) du(s) − 1]∀s, k (27) Now the expectation of V π(s) under distribution du(s) is given by: Es∼du(s)V̂ k+1(s) = Es∼du(s) [ Bπ k V̂ k(s)− α[ d(s) du(s) − 1] ] = Es∼du(s)B πk V̂ k(s) (28) The expectation of V π(s) under distribution d(s) is given by: Es∼d(s)V̂ k+1(s) = Es∼d(s)Bπ k V̂ k(s)−α[ d(s) du(s) −1] = Es∼d(s)Bπ k V̂ k(s)−αEs∼d(s)[ d(s) du(s) −1] (29) Thus we can show that: Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) = Es∼du(s)B πk V̂ k(s)− Es∼d(s)Bπ k V̂ k(s) + αEs∼d(s)[ d(s) du(s) − 1] = Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s)− Es∼d(s)[Bπ k (V̂ k − V k)] + Es∼du(s)[B πk(V̂ k − V k)] + αEs∼d(s)[ d(s) du(s) − 1] (30) By choosing α: α > Es∼d(s)[Bπ k (V̂ k − V k)]− Es∼du(s)[Bπ k (V̂ k − V k)] Es∼d(s)[ d(s) du(s) − 1] (31) We have Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) > Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s) hold. 
Proof of Theorem 4: V̂ is obtained by solving the recursive Bellman fixed point equation in the empirical MDP, with an altered reward, r(s, a) − α[ d(s)du(s) − 1], hence the optimal policy π ∗(a|s) obtained by optimizing the value under Eq. 4. Proof of Theorem 5: The proof of this statement is divided into two parts. We first relates the return of π∗ in the empirical MDP M̂ with the return of πβ , we can get: J(π∗, M̂)− α 1 1− γ Es∼dπ∗ M̂ (s)[ d(s) du(s) − 1] ≥ J(πβ , M̂)− 0 = J(πβ , M̂) (32) The next step is to bound the difference between J(πβ , M̂) and J(πβ ,M) and the difference between J(π∗, M̂) and J(π∗,M). We quote a useful lemma from Kumar et al. (2020) (Lemma D.4.1): Lemma 3. For any MDPM , an empirical MDP M̂ generated by sampling actions according to the behavior policy πβ and a given policy π, |J(π, M̂)−J(π,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s) πβ(a|s) )] (33) Setting π in the above lemma as πβ , we get: |J(πβ , M̂)− J(πβ ,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] (34) , given that √ Ea∼π∗(a|s)[ π∗(a|s) πβ(a|s) ] is a pointwise upper bound of √ Ea∼πβ(a|s)[ πβ(a|s) πβ(a|s) ](Kumar et al. (2020)). Thus we get, J(π∗, M̂) ≥ J(πβ , M̂)− 2( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] + α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (35) , which completes the proof. Here, the second term is sampling error which occurs due to mismatch of M̂ and M ; the third term denotes the increase in policy performance due to CSVE in M̂ . Note that when the first term is small, the smaller value of α are able to provide an improvement compared to the behavior policy. B CSVE ALGORITHM Now we put all in section 4 together and describe the practical deep offline reinforcement learning algorithm. In particular, the dynamic model model, value functions and policy are all parameterized with deep neural networks and trained via stochastic gradient decent methods. The pseudo code is given in Alg. 1. Algorithm 1: CSVE Algorithm Input : Data D = {(s, a, r, s′)} Parameters: Qθ, Vψ , πϕ, Qθ, Mν Hyperparameters: α, λ, learning rates ηθ, ηψ, ηϕ, ω begin // Train transition model with the static dataset D 1 Mν ← train(D); // Train the conservative value and policy functions 2 Initialize function parameters θ0, ψ0, ϕ0, θ0 = θ0; 3 foreach step k = 1→ N do 4 ψk ← ψk−1 − ηψ∇ψLπV (Vψ; Q̂θk); 5 θk ← θk−1 − ηθ∇θLπQ(Qθ; V̂ψk); 6 ϕk ← ϕk−1 − ηϕ∇ϕL+π (πϕ); 7 θk ← ωθk−1 + (1− ω)θk; C IMPLEMENTATION DETAIL We implement our method based on an offline deep reinforcement learning library d3rlpy (Seno & Imai, 2021). The code is available at https://github.com/iclr20234089/code4098. The detailed hyper-parameters are provided in Table 3 D EXTENDED EXPERIMENTAL RESULTS D.1 MORE EXPERIMENTS ON HYPER-PARAMETERS EFFECT We also investigated λ values of {0.0, 0.1, 0.5, 1.0} in the medium tasks. The results are shown in Fig. 4. D.2 COMPARISON WITH PESSIMISM ON Q We implement an ablation version of our method–penalty-Q, which directlly penalize the value of state action pairs. Specifically, we change the critic loss function into : Q̂k+1 ← argmin Q LπQ(Q; Q̂ k) = α ( Es∼D,a′∼π(·|s)[Q(s, a′)]− Es,a∼D[Q(s, a)] ) + Es,a,s′∼D [( r(s, a) + γQ̂k+1(s′, a′)−Q(s, a) )2] (36) We use the same policy extraction method and test this method on the medium-task, in which the data is collected using a medium-performed policy. 
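For concreteness, the snippet below is a minimal PyTorch-style sketch of the ablated penalty-Q critic loss of Eq. 36, mirroring the earlier critic sketch; the network, policy, and batch interfaces are again hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def penalty_q_loss(batch, q_net, q_target, policy, alpha=0.5, gamma=0.99):
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    a_pi = policy.sample(s)                                   # a' ~ pi(.|s)
    penalty = q_net(s, a_pi).mean() - q_net(s, a).mean()      # penalise policy actions, push up dataset actions
    with torch.no_grad():
        target = r + gamma * q_target(s_next, policy.sample(s_next))
    return alpha * penalty + F.mse_loss(q_net(s, a), target)
```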
In all three tasks, the performance of penalty-Q is worse than that of the original implementation, its penalty-V counterpart. When the penalty is on state-action pairs, as illustrated by our theoretical discussion, the evaluated Q value tends to pointwise lower-bound the true Q value, which results in a more conservative and thus worse policy. In contrast, when we penalize V, the estimated value function only bounds the expectation of the true V function, which results in a more flexible and better-performing policy.

D.3 RELATIONSHIP BETWEEN MODEL BIAS AND FINAL PERFORMANCE

As stated in the main paper, compared to typical model-based offline RL algorithms, CSVE is insensitive to model biases. To understand this quantitatively, we investigate the effect of model biases on performance. We use the dynamics model's average L2 error on transition prediction as a surrogate for model bias. As shown in Fig. 4, in CSVE the model bias has very little effect on RL performance. In particular, for halfcheetah there is no observed effect of model errors on scores, while in hopper and walker2d the scores have a slight downward trend with increasing errors, where the decrease is relatively very small.

D.4 REPRODUCTION OF COMBO

In the main body of this paper, our results for COMBO adopt the results presented in the literature (Rigter et al., 2022). Our goal here is to look into more detail of COMBO's asymptotic performance evaluated during training. For fairness of comparison, we adopt the official COMBO code provided by the authors, and rerun the evaluation on the medium datasets of D4RL mujoco v2. Fig. 5 shows the asymptotic performance over 1000 epochs, in which the scores have been normalized by the corresponding SAC performance. We found that in both hopper and walker2d, the scores show dramatic fluctuations. The average scores of the last 10 epochs for halfcheetah, hopper and walker2d are 71.7, 65.3 and -0.26 respectively. Besides, we found that even on the D4RL v0 datasets, COMBO behaves similarly.

Figure 5: Return of COMBO on D4RL mujoco v2 tasks (panels: halfcheetah_v2, hopper_v2, walker2d_v2; score of return average over training epochs).

D.5 EFFECT OF EXPLORATION NEAR DATASET DISTRIBUTIONS

As discussed in Sections 3.1 and 4.2, a proper choice of exploration on the distribution d beyond the data distribution d_u should help policy improvement. The factor λ in Eq. 12 controls the trade-off between such 'bonus' exploration and complying with the data-implied behaviour policy. Let us take the medium-replay datasets to analyze its effect. In the experiments, with fixed β = 0.1, we investigate λ values of {0.0, 0.5, 1.0, 3.0}. As shown in the upper figures of Fig. 6, λ has an obvious effect on policy performance and variance during training. In general, there is a value below which increasing λ leads to performance improvement, while above it further increasing λ hurts performance. For example, with λ = 3.0 on the hopper-medium-replay and walker2d-medium-replay tasks, the performance gets worse than with smaller λ values. The value of λ is task-specific, and we find that its effect is highly related to the loss in Eq. 11, which can be observed by comparing the bottom and upper figures in Fig. 6. Thus, in practice, we can choose a proper λ according to this loss without online interaction.
1. What is the focus and contribution of the paper regarding offline RL?
2. What are the strengths and weaknesses of the proposed method, particularly in its design decisions and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the method's ability to penalize out-of-distribution states?
5. Can the author provide more explanations or ablation studies to justify their design choices?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a CQL-like method for penalizing out-of-distribution states in offline RL. The method first describes an approach for off-policy evaluation with OOD state penalization, then incorporates it into the training pipeline for Q-functions, and then uses it for offline RL.

Strengths And Weaknesses
Strength: The method is clearly intuitive and a good idea to do, so that's a plus.
Weaknesses:
Many design decisions, no clear explanation: There are many design decisions in the paper that have not been ablated or explained in more detail -- for example, why AWR, and not just standard SAC/CQL-like policy extraction? Why learn separate V(s) and Q(s, a), and not just directly penalize Q(s, a) on OOD states?
Missing comparisons: the method uses a dynamics model to obtain new states, yet the method doesn't compare to COMBO? Seeing the results, it seems like COMBO would perform better than this, which means that the results are not significant...
Novelty: It seems like if the Q(s, a) were directly penalized on OOD states and policy actions, the method in this paper would just be COMBO identically, and indeed COMBO is just a more general version of this approach. So, unless the choice of V(s) can be justified rigorously, I don't think this paper provides a novel method.

Clarity, Quality, Novelty And Reproducibility
Overall, the paper is clear, but there are quite a few typos, etc, which can be fixed.
Novelty: I don't think the paper is novel. I think that the method in the paper is of high quality, but it already exists in prior work if I am understanding properly.
ICLR
Title
Effective Offline Reinforcement Learning via Conservative State Value Estimation

Abstract
Offline RL seeks to learn effective policies solely from historical data, and the learned policy is expected to perform well in the online environment. However, it faces a major challenge of value over-estimation introduced by the distributional drift between the dataset and the current learned policy, leading to learning failure in practice. The common approach is to add a penalty term to the reward or the value estimation in the Bellman iterations, which has given rise to a number of successful algorithms such as CQL. Meanwhile, to avoid extrapolation on unseen states and actions, existing methods focus on conservative Q-function estimation. In this paper, we propose CSVE, a new approach that learns a conservative V-function by directly imposing a penalty on out-of-distribution states. We prove that for the evaluated policy, our conservative state value estimation satisfies: (1) over the state distribution from which the penalized states are sampled, it lower bounds the true values in expectation, and (2) over the marginal state distribution of the data, it is no more than the true values in expectation plus a constant decided by the sampling error. Further, we develop a practical actor-critic algorithm in which the critic does the conservative value estimation by additionally sampling and penalizing the states around the dataset, and the actor applies advantage weighted updates to improve the policy. We evaluate on the classic continuous control tasks of D4RL, showing that our method performs better than conservative Q-function learning methods (e.g., CQL) and is strongly competitive among recent SOTA methods.

1 INTRODUCTION

Reinforcement Learning (RL), which learns to act by interacting with the environment, has achieved remarkable success in various tasks. However, in most real applications, it is impossible to learn online from scratch as exploration is often risky and unsafe.
Instead, offline RL (Fujimoto et al., 2019; Lange et al., 2012) avoids this problem by learning the policy solely from historical data. However, the naive approach, which directly uses online RL algorithms to learn from a static dataset, suffers from the problems of value over-estimation and policy extrapolation on OOD (out-of-distribution) states or actions. Recently, conservative value estimation, i.e., being conservative on states and actions for which there are not enough samples, has been put forward as a principle for effectively solving offline RL (Shi et al., 2022; Kumar et al., 2020; Buckman et al., 2020). Prior methods, e.g., Conservative Q-Learning (CQL; Kumar et al. (2020)), avoid the value over-estimation problem by systematically underestimating the Q values of OOD actions on the states in the dataset. In practice, this is often too pessimistic and thus leads to overly conservative algorithms. COMBO (Yu et al., 2021) leverages a learnt dynamics model to augment data by interpolation, and then learns a Q function that is less conservative than CQL and potentially derives a better policy.

In this paper, we propose CSVE (Conservative State Value Estimation), a new offline RL approach. Unlike the above traditional methods that estimate conservative values by penalizing the Q-function on OOD states or actions, CSVE directly penalizes the V-function on OOD states. We prove in theory that CSVE has tighter bounds on the true state values than CQL, and the same bounds as COMBO but under more general discounted state distributions, which leaves more space for algorithm design. Our main contributions are as follows.

• The conservative state value estimation with related theoretical analysis. We prove that it lower bounds the real state values in expectation over any state distribution that is used to sample OOD states, and is upper bounded by the real values in expectation over the marginal state distribution of the dataset plus a constant term depending only on the sampling error. Compared to prior work, it has several advantages for potentially deriving a better policy.

• A practical Actor-Critic implementation. It approximately estimates the conservative state values in the offline context and improves the policy via advantage weighted updates. In particular, we use a dynamics model to generalize over the in-distribution space and sample OOD states that are directly reachable from the dataset.

• Experimental evaluation on continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in the D4RL (Fu et al., 2020) benchmarks, showing that CSVE performs better than prior methods based on conservative Q-value estimation, and is strongly competitive among the main SOTA offline RL algorithms.

2 PRELIMINARIES

Offline Reinforcement Learning. Consider the Markov Decision Process M := (S, A, P, r, ρ, γ), which consists of the state space S, the action space A, the transition model P : S × A → ∆(S), the reward function r : S × A → R, the initial state distribution ρ and the discount factor γ ∈ (0, 1]. A stochastic policy π : S → ∆(A) selects an action probabilistically given the current state. A transition is the tuple (s_t, a_t, r_t, s_{t+1}) where a_t ∼ π(·|s_t), s_{t+1} ∼ P(·|s_t, a_t) and r_t = r(s_t, a_t). We assume that the reward values satisfy |r(s, a)| ≤ R_max for all s, a. A trajectory under π is the random sequence τ = (s_0, a_0, r_0, s_1, a_1, r_1, . . . , s_T), which consists of consecutive transitions starting from s_0 ∼ ρ.
The standard RL is to learn a policy π ∈ Π that maximize the future cumulative rewards Jπ(M) = EM,π[ ∑∞ t=0 γ trt] via active interaction with the environment M . At any time t, for the policy π, the value function of state is defined as V π(s) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s], and the Q value function is Qπ(s, a) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s, at = a]. The Bellman operator is a function projection: BπQ(s, a) := r(s, a) + γEs′∼P (·|s,a),a′∼π(·|s′)[Q(s′, a′)], or BπV (s) := Ea∼π(·|s)[r(s, a) + γEs′∼P (·|s,a)[V (s′)]], which leads to iterative value updates. Bellman consistency implies that V π(s) = BπV π(s),∀s and Qπ(s) = BπQπ(s, a),∀s, a. In practice with function approximation, we use the empirical Bellman operator B̂π where the former expectations are estimated with data. The offline RL is to learn the policy π from a static dataset D = {(s, a, r, s′)} consisting of transitions collected by any behaviour policy, aiming to behave well in the online environment. Note that, unlike the standard online RL, offline RL cannot interact with the environment during learning. Conservative Value Estimation. One main challenge in offline RL is the over-estimation of values introduced by extrapolation on unseen states and actions, which may make the learned policy collapse. To address this issue, conservatism or pessimism are used in value estimation, e.g. CQL learns a conservative Q-value function by penalizing the value of unseen actions on states: Q̂k+1 ← argmin Q α (Es∼D,a∼µ(a|s)[Q(s, a)]− Es∼D,a∼π̂β(a|s)[Q(s, a)]) + 1 2 Es,a,s′∼D[(Q(s, a)− β̂πQ̂k(s, a))2] (1) where π̂β and π are the behaviour policy and learnt policy separately, µ is any arbitrary policy different from π̂β , and α the factor for trade-off of conservatism. Constrained Policy Optimization. To address the issues of distribution drift between learning policy and behaviour policy, one approach is to constrain the learning policy close to the behaviour policy (Bai et al., 2021; Wu et al., 2019; Nair et al., 2020; Levine et al., 2020; Fujimoto et al., 2019). Here we take Advantage Weighted Regression(Peng et al. (2019b); Nair et al. (2020)) which adopts an implicit KL divergence to constrain the distance of policies as example: πk+1 ← argmax π Es,a∼D [ log π(a|s) 1 Z(s) exp ( 1 λ Aπ k (s, a) )] (2) where Aπ k is the advantage of policy πk, and Z the normalization constant for s. Model-based Offline RL. In RL, the model is an approximation of the MDP M . We denote a model as M̂ := (S,A, P̂ , r̂, ρ, γ), where P̂ and r̂ are approximations of P and r respectively. In the setting of offline RL, the model is used to roll out and augment data (Yu et al., 2020; 2021) or act as a surrogate of real environment to interact with agent (Kidambi et al., 2020). In this paper, we use model to sample the next states that are approximately reachable from the dataset. 3 CONSERVATIVE STATE VALUE ESTIMATION In the offline setting, the value overestimation is a major problem resulting in failure of learning a reasonable policy (Levine et al., 2020; Fujimoto et al., 2019). In contrast to prior works(Kumar et al., 2020; Yu et al., 2021) that get conservative value estimation via penalizing Q function for OOD state-action pairs , we directly penalize V function for OOD states. Our approach provides several novel theoretic results that allow better trade-off of conservative value estimation and policy improvement. All proofs of our theorems can be found in Appendix A. 
3.1 CONSERVATIVE OFF-POLICY EVALUATION Our approach is an alternative approach to CQL(Kumar et al., 2020). Instead of learning a conservative Q function, we aim to conservatively estimate the value V π(s) of a target policy π given a dataset D to avoid overestimation of out-of-distribution states. To achieve this, we penalize the V-values evaluated on states that is more likely to be out-of-distribution and pushing up the V-values on states that is in the distribution of the dataset, which is achieved through the following iteration: V̂ k+1 ← argmin V 1 2 Es∼du(s)[(B̂πV̂ k(s)− V (s))2] + α(Es′∼d(s)V (s′)− Es∼du(s)V (s)) (3) where du(s) is the discounted state distribution of D, d(s) is any state distribution, and B̂π is the empirical Bellman operator (see appendix for more details). Considering the setting without function approximation, by setting the derivative of Eq. 3 as zero, the V function found by approximate dynamic programming in iteration k can be obtained: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1], ∀s, k. (4) Denote the function projection on V̂ k in Eq. 4 as T π . We have Lemma 1, and thus V̂ k converges to a unique fixed point. Lemma 1. For any d with supp d ⊆ supp du, T π is a γ-contraction in L∞ norm. Theorem 1. For any d with supp d ⊆ supp du (d ̸= du), with a sufficiently large α (i.e., α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| /Es∼d(s)[ d(s)du(s) − 1])), the expected value of the estimation V̂ π(s) under d(s) is the lower bound of the true value, that is: Es∼d(s)[V̂ π(s)] ≤ Es∼d(s)[V π(s)]. V̂ π(s) = limk→∞ V̂ k(s) is the converged value estimation with the datasetD, and Cr,t,δRmax (1−γ) √ |D(s,a)| is related to sampling error introduced by the use empirical rather than Bellman operator. If the counts of each state-action pair is greater than zero, |D(s, a)| denotes a vector of size |S||A| containing counts for each state-action pair. If the counts of this state action pair is zero, the corresponding 1√ |D(s,a)| is large but finite value. We assume that with probability ≥ 1 − δ, the sampling error is less than Cr,t,δRmax (1−γ) √ |D(s,a)| , while Cr,t,δ is a constant (See appendix for more details.) Note that if the sampling error is ignorable, α > 0 can guarantee the lower bound results. Theorem 2. The expected value of the estimation V̂ π(s) under the state distribution of the original dataset is the lower bound of the true value plus the term of irreducible sampling error, that is: Es∼du(s)[V̂ π(s)] ≤ Es∼du(s)[V π(s)] + Es∼du(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| . , where Pπ refers to the transition matrix coupled with policy π (see Appendix for details). Now we show that, during iterations, the gap between the value of in-distribution state and out-ofdistribution state in the estimated V-function is higher than in the true V-functions. Theorem 3. At any iteration k, with a large enough α, our method expands the difference in expected V-values under the chosen state distribution and the dataset state distribution, that is: Es∼du(s)[V̂ k(s)]− Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)]− Es∼d(s)[V k(s)]. In the policy extraction part, this property enables our policy to take actions a in state s(s ∼ D) that remains in distribution instead of out of distribution, given that our estimated V-function does not overestimate the erroneous out-of-distribution states compared to the in-distribution states. Now we present four remarks to explain how the above theorems guide applications of Eq. 3 in offline RL algorithms. Remark 1. In Eq. 
3, if d = du, the penalty on out-of-distribution states degenerates, which means that the policy should not reach states with low support in data, and consequently never explore the unseen actions at the state. Indeed, AWAC Nair et al. (2020) adopts this setting. We show that with proper choice of d different from du, our method performs better than AWAC in practice. Remark 2. Theorem 2 implies that under du, the marginal state distribution of data, the expectation estimated value of π should either be lower than the true value, or higher than the true value but within a threshold. This fact motivates our advantage weighted policy update method in Eq. 11. Remark 3. Theorem 1implies that under d, say the discounted state distribution of any policy, the expectation estimated value of π should lower bounds the true value. This fact motivates our policy improvement method of unifying advantage weighted update with a bonus exploration in Eq. 12. Remark 4. Theorem 3 states Es∼d(s)[V k(s)] − Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)] − Es∼du(s)[V̂ k(s)]. That is to say, under the distribution d, the amount of value under-estimation in expectation is larger than that of the behaviour policy du. With proper choice of d, it is safe and effective to derive a new and potentially better policy with V̂ k. Our algorithm choose the distribution of model predictive next-states as d, i.e., s′ ∼ d is implemented by s ∼ D, a ∼ π(·|s), s′ ∼ P̂ (·|s, a), which effectively builds a soft ’river’ with low values around the dataset. Comparison with prior work: CQL (Eq.1), which penalizes Q-function of OOD actions on states in history data, guarantees the lower bounds on state-wise value estimation: V̂ π(s) = Eπ(a|s)(Q̂ π(s, a)) ≤ Eπ(a|s)(Qπ(s, a)) = V π(s) for all s ∈ D. COMBO, which penalizes Qfunction of OOD states and actions of an interpolation of history data and model-based roll-outs, guarantees the lower bound of state value expectation: Es∼µ0 [V̂ π(s)] ≤ Es∼µ0 [V π(s)] where µ0 is the initial state distribution (Remark 1, section A.2 of COMBO Yu et al. (2021)); which is a special case of our result in Theorem 1 when d = µ0. Although both CSVE and COMBO intend to get better performance by releasing conservative estimation guarantee from the state-wise values to expectation of state values, CSVE get the same lower bounds but under more general state distribution. This provide more flexible space for algorithm design, and it is also one main reason of penalizing on V rather than Q. By controlling distance of d to the behaviour policy’s discounted state distribution dβ , CSVE has the potential of more performance improvement. Note that bounding E[V [s]], rather than state-wise V (s), would introduce a more adventurous policy, which would achieves better performance in in-distribution states and have more risk behaviors in OOD states. To deal with that limitation, we introduce a deep ensemble dynamic model to sample the OOD states for better estimation. 3.2 SAFE POLICY IMPROVEMENT GUARANTEES Following prior works (Laroche et al. (2019); Kumar et al. (2020); Yu et al. (2021)), we show that our method has the safe policy improvement guarantees against the data-implied behaviour policy. We first show that our method optimizes a penalized RL empirical objective: Theorem 4. 
Let V̂ π be the fixed point of Equation 3, then π∗(a|s) = argmaxπ V̂ π(s) is equivalently obtained by solving: π∗(a|s)← argmax π J(π, M̂)− α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (5) Building upon Theorem 4, we show that our method provides a ζ-safe policy improvement over πβ Theorem 5. Let π∗(a|s) be the policy obtained in Equation 5. Then, it is a ζ-safe policy improvement over π̂β in the actual MDP M, i.e., J(π∗,M) ≥ J(π̂β ,M) − ζ with high probability 1- δ, where ζ is given by: ζ = 2( Cr,δ 1−γ + γRmaxCT,δ (1−γ)2 )Es∼dπM̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s)πβ(a|s) )]− (J(π ∗, M̂)− J(π̂β , M̂))︸ ︷︷ ︸ ≥α 11−γ Es∼dπ M̂ (s)[ d(s) du(s) −1] . (6) 4 METHODOLOGY In this section, we propose a practical Actor-Critic method for computing conservative value estimation function by approximately solving Equation 3 and taking advantage weighted policy updates. It is mainly motivated by the theoretic results, as explained by the four remarks in section 3.1. Besides, the full algorithm of deep learning implementation is presented in Appendix B. 4.1 CONSERVATIVE VALUE ESTIMATION Given the access to a dataset D collected by some behaviour policy πβ , our aim is to estimate the value function V π for a target policy π. As stated in section 3, to prevent the value overestimation, we instead learn a conservative value function V̂ π that lower bounds the real values of π by adding a penalty on out-of-distribution states into the flow of Bellman projections. Our method consists of the following iterative updates of Equations 7- 9, where Q̂k is the target network of Q̂k. V̂ k+1 ← argmin V LπV (V ; Q̂ k) = α ( Es∼D,a∼π(·|s),s′∼P̂ (s,a)[V (s ′)]− Es∼D[V (s)] ) + Es∼D [ (Ea∼π(·|s)[Q̂k(s, a)]− V (s))2 ] (7) Q̂k+1 ← argmin Q LπQ(Q; V̂ k+1) = Es,a,s′∼D [( r(s, a) + γV̂ k+1(s′)−Q(s, a) )2] (8) Q̂k+1 ← ωQ̂k + (1− ω)Q̂k+1 (9) The RHS of Eq. 7 is an approximation of Eq. 3, where the first term gives out-of-distribution states a penalty, and the second term follows the definition of V values and Q values. In Eq. 8, the RHS is TD errors estimated on transitions in the dataset D. Note that the target term here uses the sum of the immediate reward r(s, a) and the next step state’s value V̂ k+1(s′). In Eq. 9, the target Q values are updated with a soft interpolation factor ω ∈ (0, 1). Q̂k changes slower than Q̂k, which makes the TD error estimation in Eq. 7 more stable. Constrained policy. Note that in RHS of Eq. 7, we use a ∼ π(·|s) in expectation. To safely estimate the target value of V (s) by Ea∼π(·|s)[Q̂(s, a)], almost always requires supp(π(·|s)) ⊂ supp(πβ(·|s)). We achieves this by the advantage weighted policy update, which forces π(·|s) have significant probability mass on actions taken by πβ in data, as detailed in section 3.2. Model-based OOD state sampling. In Eq. 7, we implement the state sampling process s′ ∼ d in Eq. 3 as a flow of {s ∼ D; a ∼ π(a|s); s′ ∼ P̂ (s′|s, a)}, that is the distribution of the predictive next-states from D by following π. It is beneficial in practice. On one hand, this method is efficient to sample only the states that are approximately reachable fromD by one step, rather than to sample the whole state space. On the other hand, we only need the model to do one-step prediction such that no bootstrapped errors due to long horizon are introduced. Following previous work (Janner et al., 2019; Yu et al., 2020; 2021), we implement the probabilistic dynamics model using an ensemble of deep neural networks {pθ1, . . . , pθB}. 
Each neural network produces a Gaussian distribution over the next state and reward: P iθ(st+1, r|st, at) = N (uiθ(st, at), σiθ(st, at)). Adaptive penalty factor α. The pessimism level is controlled by the parameter α ≥ 0. In practice, we set α adaptive during training as follows, which is similar as that in CQL(Kumar et al. (2020)) max α≥0 [α(Es′∼d[Vψ(s′)]− Es∼D[Vψ(s)]− τ)] (10) , where τ is a budget parameter. If the expected difference in V-values is less than τ , α will decrease. Otherwise, α will increase and penalize the out of distribution state values more aggressively. Discussion: As stated in former sections, our method focuses on estimating conservative state value for learning a policy. The effectiveness of adding conservatism on V function are two folds. First, penalizing V values is with a smaller hypothesis space than penalizing Q, which would reduce the computation complexity. Second, penalizing V values can achieve a more relaxed lower bound than penalizing Q with ignoring the explicitly marginalization on Q values. A more relaxed lower bound guarantees more opportunities on achieving better policy. 4.2 ADVANTAGE WEIGHTED POLICY UPDATES After learning the conservative V̂ k+1 and Q̂k+1 (or V̂ π and Q̂π when converged), we improve the policy by the following advantage weighted policy update (Nair et al., 2020). π ← argmin π′ Lπ(π ′) = −Es,a∼D [ log π′(a|s) exp ( βÂk+1(s, a) )] where Âk+1(s, a) = Q̂k+1(s, a)− V̂ k+1(s). (11) Eq. 11 updates the policy π to amounts of weighted maximum likelihood which are computed by re-weighting state-action samples in D with estimated advantage Âk+1. As discussed in the AWAC (Nair et al., 2020), this method avoids explicitly estimating the behaviour policy and its resulted sampling errors which is an import issue in the offline RL setting (Kumar et al., 2020). Implicit policy constraints. We adopt the advantage weighted policy updates which imposes an implicit KL divergence constraints between π and πβ . This policy constraint is necessary to guarantee that the next state s′ in Equation 7 can be safely generated through policy π. As derived in Nair et al. (2020) (Appendix A), the Eq. 11 is an parametric solution of the following problem: max π′ Ea∼π′(·|s)[Âk+1(s, a)] s.t. DKL(π′(·|s) ∥ πβ(·|s)) ≤ ϵ, ∫ a π′(a|s)da = 1. Note that DKL(π′ ∥ πβ) is an reserve KL divergence with respect to π′, which is mode-seeking ((Shlens, 2014)). When treated as Lagrangian it forces π′ allocate its probability mass to the maximum likelihood supports of πβ , re-weighted by the estimated advantage. In other words, for the space of A where πβ(·|s) has no samples, π′(·|s) has almost zero probability mass too. Bonus of Exploration on Near States. As suggested by remarks in Section 3.1, in practice allowing the policy explore the predicated next states transition (s ∼ D) following a ∼ π′(·|s)) leads to better test performance. With this kind of exploration, the policy is updated as follows. π ← argmin π′ L+π (π ′) = Lπ(π ′)− λEs∼D,a∼π′(s),s′∼P̂ (s,a) [ r(s, a) + V̂ k+1(s′) ] (12) The second term is an approximation to Es∼dπ(s)[V π(s)], while the first term is the approximation ofEs∼du(s)[V π(s)]. While the choice of λ is ultimately just a hyper-parameter, we balance between optimistic policy optimization (in maximizing V) and constrained policy update (the first term) by adjusting λ. 5 EXPERIMENTS The primary goal of this section is to investigate whether the proposed tighter conservative value estimation leads to performance improvement. 
Besides, we would like to ascertain when further exploration has benefits and how well CSVE performs compared with SOTA algorithms. We evaluate our method on classical continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in the standard D4RL (Fu et al., 2020) benchmark. The Gym control tasks include HalfCheetah, Hopper and Walker2D, each with 5 datasets collected by different types of policies (random, medium, medium-replay, medium-expert, and expert). The Adroit tasks include Pen, Hammer, Door and Relocate, each with 3 datasets collected by different policies (human, cloned, and expert). Our method, namely CSVE, is compared against CQL (Kumar et al., 2020), COMBO (Yu et al., 2021), AWAC (Nair et al., 2020), PBRL (Bai et al., 2021) and other SOTA algorithms (TD3BC (Fujimoto & Gu, 2021), UWAC (Wu et al., 2021), IQL (Kostrikov et al., 2021b), BEAR (Kumar et al., 2019)) whose performance results are public or which have high-quality open implementations. CQL, which estimates conservative Q-values on state-action pairs rather than states, is the most direct point of comparison to ours. COMBO also lower bounds the estimated V-function. AWAC is a special case of our Eq. 3 when d = du. PBRL is a very strong performer in offline RL, but is quite costly in computation since it uses an ensemble of hundreds of sub-models.

5.1 OVERALL PERFORMANCE

We first test on the Gym control tasks. We train our method for 1 million steps and report the final evaluation performance. The overall results are shown in Table 1 (cf. Table 4 in Bai et al. (2021)). Compared to CQL, our method has better performance on 11 of 15 tasks and similar performance on the others. In particular, our method shows a consistent advantage on the datasets generated by random or suboptimal policies (random and medium). Compared to AWAC, our method has better performance on 9 of 15 tasks and comparable performance on the others, which demonstrates the effect of our further exploration beyond cloning the behaviour policy. In particular, our method shows an obvious advantage in extracting the best policy from data of mixed policies (medium-expert), while AWAC cannot. Compared to COMBO, our method has better performance on 6 out of 12 tasks and comparable or slightly worse performance on the others, which demonstrates the effect of our better bounds on V. In particular, our method shows an obvious advantage in extracting the best policy on the medium and medium-expert tasks. Of the 9 tasks evaluated, our method gets a higher score than IQL in 7, and has similar performance in the others. Finally, our method performs close to PBRL, even though PBRL has orders of magnitude more model capacity and computation cost.

We now evaluate our method on the Adroit tasks. For CSVE, we report the final evaluation results after training for 0.1 million steps. The full results are reported in Table 2. Compared to IQL, our method performs better in 8 out of 12 tasks, and performs similarly in the other 4 tasks. For the expert datasets, all methods including simple BC (behaviour cloning) perform well, among which ours is the most competitive on all four tasks. For the human and cloned datasets, almost all methods fail to learn effective policies on the three tasks other than Pen. For the Pen task, CSVE is the only one that succeeds in learning a good policy on the human dataset, while it learns a medium-quality policy on the cloned dataset, as do BC and PBRL.
5.2 SENSITIVITY TO HYPER-PARAMETERS

We analyze the hyper-parameter β, which trades off between behaviour cloning and policy optimization. For smaller values, the objective behaves similarly to behaviour cloning (the weights are close for all actions), while for larger values, it attempts to recover the maximum of the Q-function. To quantitatively analyze its effect, we test β ∈ {0.1, 3, 10} on the MuJoCo tasks with the medium-type datasets; the results are shown in Fig. 1. We can see that β affects policy performance during training. Empirically, we found that in general β = 3.0 is suitable for such medium-type datasets. In practice, by default we use β = 3.0 for the random and medium tasks and 0.1 for the medium-replay, medium-expert and expert datasets.

6 RELATED WORK

Offline RL (Fujimoto et al., 2019; Levine et al., 2020) aims to learn a reasonable policy from a static dataset collected by arbitrary policies, without further interactions with the environment. Compared to interactive RL, offline RL suffers from two critical inherent issues, i.e., the distribution drift introduced by off-policy learning and the out-of-distribution extrapolation in value estimation (Ostrovski et al., 2021; Levine et al., 2020). The common idea of offline RL algorithms is to incorporate conservatism or regularization into online RL algorithms. Here we briefly review the prior work with a comparison to ours.

Conservative value estimation: Prior offline RL algorithms regularize the learning policy to stay close to the data (or to an explicitly estimated behaviour policy) and penalize exploration of the out-of-distribution region, via distribution correction estimation (Dai et al., 2020; Yang et al., 2020), policy constraints with support matching (Wu et al., 2019) and distributional matching (Fujimoto et al., 2019; Kumar et al., 2019), policy-divergence-based penalties on Q-functions (Kostrikov et al., 2021a; Wang et al., 2020), uncertainty-based penalties on Q-functions (Agarwal et al., 2020), and conservative Q-function estimation (Kumar et al., 2020). Besides, model-based algorithms (Yu et al., 2020) directly estimate dynamics uncertainty and translate it into a reward penalty. Different from these prior works, which impose conservatism on state-action pairs or actions, ours performs the conservative estimation directly on states and requires no explicit uncertainty quantification. With a learned conservative value estimate, an offline policy can be learned via implicit derivation from a state-action joint distribution, or within a Q-learning or actor-critic framework. In this paper, our implementation adopts the method proposed in AWAC (Nair et al., 2020; Peng et al., 2019a).

Model-based algorithms: Model-based offline RL learns the dynamics model from the static dataset and uses it to quantify uncertainty (Yu et al., 2020), to augment data with roll-outs (Yu et al., 2021), or to plan (Kidambi et al., 2020; Chen et al., 2021). Such methods typically rely on wide data coverage for planning and roll-out-based data augmentation, and on low model estimation error for uncertainty estimation, which is often difficult to satisfy in reality and leads to policy instability. Instead, we use the model only to sample next-step states reachable from the data, which imposes no such strict requirements on data coverage or model bias.

Theoretical results: Our theoretical results are derived from conservative Q-value estimation (CQL) and safe policy improvement (Laroche et al., 2019).
Besides, COMBO (Yu et al., 2021) gives a result of conservative but tighter value estimation than CQL, when dataset is augmented with model-based roll-outs. Compared to our result, COMBO’s lower bounds additionally assume same initial state distribution which may not always satisfy in continuous control. 7 DISCUSSION In this paper, we propose a new approach for offline RL based on conservative value estimation on states and discussed how the theoretical results could lead to the new RL algorithms. In particular, we developed a practical actor-critic algorithm, in which the critic does conservative state value estimation by incorporating the penalty of the model predictive next-states into Bellman iterations, and the actor does the advantage weighted policy updates with a bonus of exploring states with conservative values. Experimental evaluation shows that our method performs better than alternative methods based on conservative Q-function estimation and is competitive among the SOTA methods, confirming our theoretical analysis well. Moving forward, we hope to explore the design of more powerful algorithms based on conservative state value estimation. A PROOFS We first redefine notation for clarity and then provide the proofs of the results in the main paper. Notation. Let k ∈ N denote an iteration of policy evaluation(in Section 3.2). V k denotes the true, tabular (or functional) V-function iterate in the MDP, without any correction. V̂ k denotes the approximate tabular (or functional) V-function iterate. The empirical Bellman operator can be expressed as follows: (B̂πV̂ k)(s) = Ea∼π(a|s)r̂(s, a) + γ ∑ s′ Ea∼π(a|s)P̂ (s ′|s, a)[V̂ k(s′)] (13) where r̂(s, a) is the empirical average reward obtained in the dataset when performing action a at state s . The true Bellman operator can be expressed as follows: (BπV k)(s) = Ea∼π(a|s)r(s, a) + γ ∑ s′ Ea∼π(a|s)P (s ′|s, a)[V k(s′)] (14) Now we first prove that the iteration in Eq.3 has a fixed point. Assume state value function is lower bounded, i.e., V (s) ≥ C ∀s ∈ S, then Eq.3 can always be solved with Eq.4. Thus, we only need to investigate the iteration in Eq.4. Denote the iteration as a function operator T π on V . Suppose supp d ⊆ supp du, then the operator T π is a γ-contraction in L∞ norm where γ is the discounting factor. Proof of Lemma 1: Let V and V ′ are any two state value functions with the same support, i.e., suppV = suppV ′. |(T πV − T πV ′)(s)| = ∣∣∣∣(B̂πV (s)− α[ d(s)du(s) − 1])− (B̂πV ′(s)− α[ d(s)du(s) − 1]) ∣∣∣∣ = ∣∣∣B̂πV (s)− B̂πV ′(s)∣∣∣ =|(Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V (s′)) − (Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V ′(s′))| =γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ||T πV − T πV ′||∞ =max s |(T πV − T πV ′)(s)| =max s γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ≤γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)max s′′ |V (s′′)− V ′(s′′)| =γmax s′′ |V (s′′)− V ′(s′′)| =γ||(V − V ′)||∞ We present the bound on using empirical Bellman operator compared to the true Bellman operator. Following previous work Kumar et al. (2020), we make the following assumptions that: Pπ is the transition matrix coupled with policy, specifically, PπV (s) = Ea′∼π(a′|s′),s′∼P (s′|s,a′)[V (s′)] Assumption 1. 
∀s, a ∈ M, the following relationships hold with at least (1 − δ) (δ ∈ (0, 1)) probability, |r − r(s, a)| ≤ Cr,δ√ |D(s, a)| , ||P̂ (s′|s, a)− P (s′|s, a)||1 ≤ CP,δ√ |D(s, a)| (15) Under this assumption, the absolute difference between the empirical Bellman operator and the actual one can be calculated as follows: |(B̂π)V̂ k − (Bπ)V̂ k)| = Ea∼π(a|s)|r − r(s, a) + γ ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a)− P (s′|s, a))[V̂ k(s′)]| (16) ≤ Ea∼π(a|s)|r − r(s, a)|+ γ| ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a′)− P (s′|s, a′))[V̂ k(s′)]| (17) ≤ Ea∼π(a|s) Cr,δ + γCP,δ2Rmax/(1− γ)√ |D(s, a)| (18) Thus, the estimation error due to sampling error can be bounded by a constant as a function of Cr,δ and Ct,δ . We define this constant as Cr,T,δ . Thus we obtain: ∀V, s ∈ D, |B̂πV (s)− BπV (s)| ≤ Ea∼π(a|s) Cr,t,δ (1− γ) √ |D(s, a)| (19) Next we provide an important lemma. Lemma 2. (Interpolation Lemma) For any f ∈ [0, 1], and any given distribution ρ(s), let df be an f-interpolation of ρ and D, i.e.,df (s) := fd(s) + (1 − f)ρ(s), let v(ρ, f) := Es∼ρ(s)[ρ(s)−d(s)df (s) ], then v(ρ, f) satisfies v(ρ, f) ≥ 0. The proof can be found in Yu et al. (2021). By setting f as 1, we have Es∼ρ(s)[ ρ(s)−d(s) d(s) ] > 0. Proof of Theorem 1: The V function of approximate dynamic programming in iteration k can be obtained as: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1] ∀s, k (20) The fixed point: V̂ π(s) = B̂πV̂ π(s)− α[ d(s) du(s) − 1] ≤ BπV̂ π(s) + Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α[ d(s) du(s) − 1] (21) Thus we obtain: V̂ π(s) ≤ V π(s) + (I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α(I − γPπ)−1[ d(s) du(s) − 1] (22) , where Pπ is the transition matrix coupled with the policy π and PπV (s) = Ea′∼π(a′|s′)s′∼P (s′|s,a′)[V (s ′)]. Then the expectation of V π(s) under distribution d(s) satisfies: Es∼d(s)V̂ π(s) ≤Es∼d(s)(V π(s)) + Es∼d(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| −αEs∼d(s)(I − γPπ)−1[ d(s) du(s) − 1])︸ ︷︷ ︸ >0 (23) When α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| Es∼d(s)[ d(s) du(s) −1]) , Es∼d(s)V̂ π(s) ≤ Es∼d(s)(V π(s)). Proof of Theorem 2: The expectation of V π(s) under distribution d(s) satisfies: Es∼du(s)V̂ π(s) ≤Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − αEs∼du(s)(I − γP π)−1[ d(s) du(s) − 1]) (24) Noticed that the last term:∑ s∼du(s) ( df (s) du(s) − 1) = ∑ s du(s)( df (s) du(s) − 1) = ∑ s df (s)− ∑ s du(s) = 0 (25) We obtain that: Es∼du(s)V̂ π(s) ≤ Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| (26) Proof of Theorem 3: Recall that the expression of the V-function iterate is given by: V̂ k+1(s) = Bπ k V̂ k(s)− α[ d(s) du(s) − 1]∀s, k (27) Now the expectation of V π(s) under distribution du(s) is given by: Es∼du(s)V̂ k+1(s) = Es∼du(s) [ Bπ k V̂ k(s)− α[ d(s) du(s) − 1] ] = Es∼du(s)B πk V̂ k(s) (28) The expectation of V π(s) under distribution d(s) is given by: Es∼d(s)V̂ k+1(s) = Es∼d(s)Bπ k V̂ k(s)−α[ d(s) du(s) −1] = Es∼d(s)Bπ k V̂ k(s)−αEs∼d(s)[ d(s) du(s) −1] (29) Thus we can show that: Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) = Es∼du(s)B πk V̂ k(s)− Es∼d(s)Bπ k V̂ k(s) + αEs∼d(s)[ d(s) du(s) − 1] = Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s)− Es∼d(s)[Bπ k (V̂ k − V k)] + Es∼du(s)[B πk(V̂ k − V k)] + αEs∼d(s)[ d(s) du(s) − 1] (30) By choosing α: α > Es∼d(s)[Bπ k (V̂ k − V k)]− Es∼du(s)[Bπ k (V̂ k − V k)] Es∼d(s)[ d(s) du(s) − 1] (31) We have Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) > Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s) hold. 
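Before the remaining proofs, a toy instantiation of the penalized backup in Eq. 20 may help intuition. The numbers below are illustrative only (take α = 0.5) and are not taken from the paper or its experiments:

```latex
\begin{align*}
\tfrac{d(s)}{d_u(s)} = 2 \;(\text{state over-weighted by } d):\quad
\hat{V}(s) &= \hat{\mathcal{B}}^{\pi}\hat{V}(s) - 0.5\,(2-1) = \hat{\mathcal{B}}^{\pi}\hat{V}(s) - 0.5,\\
\tfrac{d(s)}{d_u(s)} = 0.5 \;(\text{state well covered by } D):\quad
\hat{V}(s) &= \hat{\mathcal{B}}^{\pi}\hat{V}(s) - 0.5\,(0.5-1) = \hat{\mathcal{B}}^{\pi}\hat{V}(s) + 0.25.
\end{align*}
```

States on which the sampling distribution d places more mass than the data distribution du are pushed below their Bellman backup, while states well covered by the data are pushed above it; this is the gap that Theorems 1 and 3 quantify.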
Proof of Theorem 4: V̂ is obtained by solving the recursive Bellman fixed point equation in the empirical MDP, with an altered reward, r(s, a) − α[ d(s)du(s) − 1], hence the optimal policy π ∗(a|s) obtained by optimizing the value under Eq. 4. Proof of Theorem 5: The proof of this statement is divided into two parts. We first relates the return of π∗ in the empirical MDP M̂ with the return of πβ , we can get: J(π∗, M̂)− α 1 1− γ Es∼dπ∗ M̂ (s)[ d(s) du(s) − 1] ≥ J(πβ , M̂)− 0 = J(πβ , M̂) (32) The next step is to bound the difference between J(πβ , M̂) and J(πβ ,M) and the difference between J(π∗, M̂) and J(π∗,M). We quote a useful lemma from Kumar et al. (2020) (Lemma D.4.1): Lemma 3. For any MDPM , an empirical MDP M̂ generated by sampling actions according to the behavior policy πβ and a given policy π, |J(π, M̂)−J(π,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s) πβ(a|s) )] (33) Setting π in the above lemma as πβ , we get: |J(πβ , M̂)− J(πβ ,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] (34) , given that √ Ea∼π∗(a|s)[ π∗(a|s) πβ(a|s) ] is a pointwise upper bound of √ Ea∼πβ(a|s)[ πβ(a|s) πβ(a|s) ](Kumar et al. (2020)). Thus we get, J(π∗, M̂) ≥ J(πβ , M̂)− 2( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] + α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (35) , which completes the proof. Here, the second term is sampling error which occurs due to mismatch of M̂ and M ; the third term denotes the increase in policy performance due to CSVE in M̂ . Note that when the first term is small, the smaller value of α are able to provide an improvement compared to the behavior policy. B CSVE ALGORITHM Now we put all in section 4 together and describe the practical deep offline reinforcement learning algorithm. In particular, the dynamic model model, value functions and policy are all parameterized with deep neural networks and trained via stochastic gradient decent methods. The pseudo code is given in Alg. 1. Algorithm 1: CSVE Algorithm Input : Data D = {(s, a, r, s′)} Parameters: Qθ, Vψ , πϕ, Qθ, Mν Hyperparameters: α, λ, learning rates ηθ, ηψ, ηϕ, ω begin // Train transition model with the static dataset D 1 Mν ← train(D); // Train the conservative value and policy functions 2 Initialize function parameters θ0, ψ0, ϕ0, θ0 = θ0; 3 foreach step k = 1→ N do 4 ψk ← ψk−1 − ηψ∇ψLπV (Vψ; Q̂θk); 5 θk ← θk−1 − ηθ∇θLπQ(Qθ; V̂ψk); 6 ϕk ← ϕk−1 − ηϕ∇ϕL+π (πϕ); 7 θk ← ωθk−1 + (1− ω)θk; C IMPLEMENTATION DETAIL We implement our method based on an offline deep reinforcement learning library d3rlpy (Seno & Imai, 2021). The code is available at https://github.com/iclr20234089/code4098. The detailed hyper-parameters are provided in Table 3 D EXTENDED EXPERIMENTAL RESULTS D.1 MORE EXPERIMENTS ON HYPER-PARAMETERS EFFECT We also investigated λ values of {0.0, 0.1, 0.5, 1.0} in the medium tasks. The results are shown in Fig. 4. D.2 COMPARISON WITH PESSIMISM ON Q We implement an ablation version of our method–penalty-Q, which directlly penalize the value of state action pairs. Specifically, we change the critic loss function into : Q̂k+1 ← argmin Q LπQ(Q; Q̂ k) = α ( Es∼D,a′∼π(·|s)[Q(s, a′)]− Es,a∼D[Q(s, a)] ) + Es,a,s′∼D [( r(s, a) + γQ̂k+1(s′, a′)−Q(s, a) )2] (36) We use the same policy extraction method and test this method on the medium-task, in which the data is collected using a medium-performed policy. 
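For concreteness, the two critic objectives compared in this ablation can be sketched as follows. This is an illustrative PyTorch reconstruction of Eq. 7 and Eq. 36, not the released implementation; the helpers (v_net, q_net, target_q, policy, dynamics) and the single-sample estimate of the inner expectation are our own assumptions.

```python
# Illustrative sketch of the critic losses compared in D.2 (Eq. 7 vs. Eq. 36).
# v_net, q_net, target_q, policy and dynamics are assumed helpers; alpha is the
# penalty factor of Section 4.1.
import torch

def penalty_v_critic_loss(batch, v_net, target_q, policy, dynamics, alpha):
    """Eq. 7: penalize V at model-predicted next states, push up V on dataset states."""
    s = batch["state"]
    a_pi = policy.sample(s)                            # a ~ pi(.|s)
    s_next, _ = dynamics.sample_next_state(s, a_pi)    # s' ~ P_hat(.|s, a)
    penalty = v_net(s_next).mean() - v_net(s).mean()
    target = target_q(s, policy.sample(s)).detach()    # single-sample estimate of E_a[Q_bar(s, a)]
    bellman = ((target - v_net(s)) ** 2).mean()
    return alpha * penalty + bellman

def penalty_q_critic_loss(batch, q_net, target_q, policy, alpha, gamma=0.99):
    """Eq. 36 (ablation): penalize Q at policy actions, push up Q on dataset actions."""
    s, a = batch["state"], batch["action"]
    r, s_next = batch["reward"], batch["next_state"]
    penalty = q_net(s, policy.sample(s)).mean() - q_net(s, a).mean()
    td_target = (r + gamma * target_q(s_next, policy.sample(s_next))).detach()
    bellman = ((td_target - q_net(s, a)) ** 2).mean()
    return alpha * penalty + bellman
```

The only structural difference between the two is where the conservatism is applied: on V at model-predicted next states versus on Q at policy actions.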
In all three tasks, penalty-Q performs worse than the original penalty-V implementation. When the penalty is applied to state-action pairs, as illustrated by our theoretical discussion, the estimated Q-value tends to pointwise lower-bound the true Q-value, which results in a more conservative and thus worse policy. When we penalize V instead, the estimated value function only bounds the expectation of the true V-function, which results in a more flexible and better-performing policy.

D.3 RELATIONSHIP BETWEEN MODEL BIAS AND FINAL PERFORMANCE

As stated in the main paper, compared to typical model-based offline RL algorithms, CSVE is insensitive to model bias. To understand this quantitatively, we now investigate the effect of model bias on performance, using the dynamics model's average L2 error on transition prediction as a surrogate for model bias. As shown in Fig. 4, the model bias has very little effect on CSVE's RL performance. In particular, for halfcheetah no clear effect of model errors on scores is observed, while in hopper and walker2d the scores show a slight downward trend with increasing errors, where the decrease is relatively small.

D.4 REPRODUCTION OF COMBO

In the main body of this paper, our COMBO results are taken from the literature (Rigter et al., 2022). Our goal here is to look in more detail at COMBO's asymptotic performance evaluated during training. For fairness of comparison, we adopt the official COMBO code provided by the authors and rerun it on the medium datasets of D4RL MuJoCo v2. Fig. 5 shows the asymptotic performance up to 1000 epochs, with scores normalized by the corresponding SAC performance. We found that in both hopper and walker2d, the scores show dramatic fluctuations. The average scores of the last 10 epochs for halfcheetah, hopper and walker2d are 71.7, 65.3 and -0.26, respectively. Besides, we found that COMBO behaves similarly even on the D4RL v0 datasets.

Figure 5: Return of COMBO on D4RL MuJoCo v2 tasks (panels: halfcheetah_v2, hopper_v2 and walker2d_v2, each showing the score of the return average over 1000 training epochs).

D.5 EFFECT OF EXPLORATION NEAR DATASET DISTRIBUTIONS

As discussed in Sections 3.1 and 4.2, a proper choice of exploration on the distribution d beyond the data distribution du should help policy improvement. The factor λ in Eq. 12 controls the trade-off between such 'bonus' exploration and complying with the data-implied behaviour policy. We take the medium-replay datasets to analyze its effect. In the experiments, with fixed β = 0.1, we investigate λ values of {0.0, 0.5, 1.0, 3.0}. As shown in the upper plots of Fig. 6, λ has an obvious effect on policy performance and variance during training. In general, there is a threshold below which increasing λ improves performance, while above it further increasing λ hurts performance. For example, with λ = 3.0 on the hopper-medium-replay and walker2d-medium-replay tasks, the performance gets worse than with smaller λ values. The best value of λ is task-specific, and we find that its effect is highly related to the loss in Eq. 11, as can be observed by comparing the bottom and upper plots of Fig. 6. Thus, in practice, a proper λ can be chosen according to this loss without online interaction.
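Relatedly, since both β and λ enter through the actor objective, a compact sketch of the advantage-weighted loss of Eq. 11 and its exploration-bonus variant of Eq. 12 is given below. As before, this is only an illustrative reconstruction; the helper names and the weight clipping are our assumptions, not the paper's code.

```python
# Illustrative PyTorch sketch of the actor losses in Eqs. 11-12. policy, q_net,
# v_net and dynamics_member (one Gaussian dynamics network) are assumed helpers.
import torch

def awr_actor_loss(batch, policy, q_net, v_net, beta):
    """Eq. 11: advantage-weighted maximum likelihood over dataset actions."""
    s, a = batch["state"], batch["action"]
    advantage = (q_net(s, a) - v_net(s)).detach()
    weights = torch.exp(beta * advantage).clamp(max=100.0)  # clipping is an illustrative stability choice
    return -(weights * policy.log_prob(s, a)).mean()

def csve_actor_loss(batch, policy, q_net, v_net, dynamics_member, beta, lam):
    """Eq. 12: AWR loss minus lam times a bonus on model-predicted next states."""
    s = batch["state"]
    a_pi = policy.rsample(s)                   # reparameterized a ~ pi(.|s), keeps gradients
    mean, _ = dynamics_member(s, a_pi)         # differentiable one-step prediction
    s_next, r_pred = mean[..., :-1], mean[..., -1:]
    bonus = (r_pred + v_net(s_next)).mean()    # approximates E_{s ~ d^pi}[V^pi(s)]
    return awr_actor_loss(batch, policy, q_net, v_net, beta) - lam * bonus
```

Setting lam = 0 recovers the plain advantage-weighted update of Eq. 11.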
1. What is the focus and contribution of the paper on offline reinforcement learning? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical properties and empirical performance? 3. What are the weaknesses of the paper, especially regarding its similarity to other existing algorithms and potential lacking comparisons? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors propose a new algorithm--CSVE--that learns conservative estimates of state value functions by penalizing the values of OOD states. The authors prove that the estimated state value functions lower-bound the true values in expectation over any state distribution. Finally, the authors evaluate their algorithm against state-of-the-art offline RL baselines. Strengths And Weaknesses Strengths: (1) The paper has a clear organization and is well-written. (2) CSVE achieves impressive empirical performance against strong offline RL baselines. Weaknesses: (1) The algorithm itself is very similar to existing conservative offline RL algorithms, particularly CQL and COMBO. To my understanding, the authors demonstrate theoretical and empirical advantages over CQL but not over COMBO. In my opinion, COMBO is the more similar method, as both CSVE and COMBO are model-based in the sense that they require learning a dynamics model. (2) Though the backbone of the CSVE algorithm is clear, the authors add several extensions, such as learning an ensemble of transition models (which COMBO does not do) and using a separate policy extraction step rather than alternating value and policy improvement as CQL and COMBO do. It is not clear from the empirical analysis whether those changes are important to the empirical performance, or whether the penalty being over states rather than state-action pairs (which is the central novelty of the proposed algorithm) is what matters. The authors should consider such ablations, as well as a comparison to COMBO, in the experiments to make their results more compelling. Clarity, Quality, Novelty And Reproducibility The paper is well-written and well organized. I am primarily concerned with the novelty of the contribution, as it is very similar to existing algorithms such as CQL and COMBO.
ICLR
Title Effective Offline Reinforcement Learning via Conservative State Value Estimation Abstract Offline RL seeks to learn, solely from historical data, effective policies that are expected to perform well in the online environment. However, it faces a major challenge of value over-estimation introduced by the distributional drift between the dataset and the currently learned policy, leading to learning failure in practice. The common approach is to add a penalty term to the reward or value estimation in the Bellman iterations, which has given rise to a number of successful algorithms such as CQL. Meanwhile, to avoid extrapolation on unseen states and actions, existing methods focus on conservative Q-function estimation. In this paper, we propose CSVE, a new approach that learns a conservative V-function by directly imposing a penalty on out-of-distribution states. We prove that for the evaluated policy, our conservative state value estimation satisfies: (1) over the state distribution used to sample the penalized states, it lower bounds the true values in expectation, and (2) over the marginal state distribution of the data, it is no more than the true values in expectation plus a constant determined by the sampling error. Further, we develop a practical actor-critic algorithm in which the critic does the conservative value estimation by additionally sampling and penalizing the states around the dataset, and the actor applies advantage-weighted updates to improve the policy. We evaluate on classic continuous control tasks of D4RL, showing that our method performs better than conservative Q-function learning methods (e.g., CQL) and is strongly competitive among recent SOTA methods. 1 INTRODUCTION Reinforcement Learning (RL), which learns to act by interacting with the environment, has achieved remarkable success in various tasks. However, in most real applications, it is impossible to learn online from scratch as exploration is often risky and unsafe.
Instead, offline RL((Fujimoto et al., 2019; Lange et al., 2012)) avoids this problem by learning the policy solely from historical data. However, the naive approach, which directly uses online RL algorithms to learn from a static dataset, suffers from the problems of value over-estimation and policy extrapolation on OOD (out-of-distribution) states or actions. Recently, conservative value estimation, being conservative on states and actions where there are no enough samples, has been put forward as a principle to effectively solve offline RL ((Shi et al., 2022; Kumar et al., 2020; Buckman et al., 2020). Prior methods, e.g., Conservative Q-Learning (CQL Kumar et al. (2020)), avoid the value over-estimation problem by systematically underestimating the Q values of OOD actions on the states in the dataset. In practice, it is often too pessimistic and thus leads to overly conservative algorithms. COMBO (Yu et al., 2021) leverages a learnt dynamic model to augment data in an interpolation way, and then learn a Q function that is less conservative than CQL and derives a better policy in potential. In this paper, we propose CSVE(Conservative State Value Estimation), a new offline RL approach. Unlike the above traditional methods that estimate conservative values by penalizing Q-function on OOD states or actions, CSVE directly penalizing the V-function on OOD states. We prove in theory that CSVE has tighter bounds on true state values than CQL, and same bounds as COMBO but under more general discounted state distributions which leads to more space for algorithm design. Our main contributions are as follows. • The conservative state value estimation with related theoretical analysis. We prove that it lower bounds the real state values in expectation over any state distribution that is used to sample OOD states, and is up-bounded by the real values in expectation over the marginal state distribution of the dataset plus a constant term depending on only sampling errors. Compared to prior work, it has several advantages to derive a better policy in potential. • A practical Actor-Critic implementation. It approximately estimates the conservative state values in the offline context and improves the policy via advantage weighting updates. In particular, we use a dynamics model to generalize over in-distribution space and sample OOD states that are directly reachable from the dataset. • Experimental evaluation on continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in D4RL (Fu et al., 2020) benchmarks, showing that CSVE performs better than prior methods based on conservative Q-value estimation, and is strongly competitive among main SOTA offline RL algorithms. 2 PRELIMINARIES Offline Reinforcement Learning. Consider the Markov Decision Process M := (S,A, P, r, ρ, γ), which consists of the state space S, the action spaceA, the transition model P : S×A → ∆(S), the reward function r : S × A → R, the initial state distribution ρ and the discount factor γ ∈ (0, 1]. A stochastic policy π : S → ∆(A) takes an action in probability given the current state. A transition is the tuple (st, at, rt, st+1) where at ∼ π(·|st), st+1 ∼ P (·|st, at) and rt = r(st, at). We assume that the reward values satisfy |r(s, a)| ≤ Rmax,∀s, a. A trajectory under π is the random sequence τ = (s0, a0, r0, s1, a1, r1, . . . , sT ) which consists of continuous transitions starting from s0 ∼ ρ. 
The standard RL is to learn a policy π ∈ Π that maximize the future cumulative rewards Jπ(M) = EM,π[ ∑∞ t=0 γ trt] via active interaction with the environment M . At any time t, for the policy π, the value function of state is defined as V π(s) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s], and the Q value function is Qπ(s, a) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s, at = a]. The Bellman operator is a function projection: BπQ(s, a) := r(s, a) + γEs′∼P (·|s,a),a′∼π(·|s′)[Q(s′, a′)], or BπV (s) := Ea∼π(·|s)[r(s, a) + γEs′∼P (·|s,a)[V (s′)]], which leads to iterative value updates. Bellman consistency implies that V π(s) = BπV π(s),∀s and Qπ(s) = BπQπ(s, a),∀s, a. In practice with function approximation, we use the empirical Bellman operator B̂π where the former expectations are estimated with data. The offline RL is to learn the policy π from a static dataset D = {(s, a, r, s′)} consisting of transitions collected by any behaviour policy, aiming to behave well in the online environment. Note that, unlike the standard online RL, offline RL cannot interact with the environment during learning. Conservative Value Estimation. One main challenge in offline RL is the over-estimation of values introduced by extrapolation on unseen states and actions, which may make the learned policy collapse. To address this issue, conservatism or pessimism are used in value estimation, e.g. CQL learns a conservative Q-value function by penalizing the value of unseen actions on states: Q̂k+1 ← argmin Q α (Es∼D,a∼µ(a|s)[Q(s, a)]− Es∼D,a∼π̂β(a|s)[Q(s, a)]) + 1 2 Es,a,s′∼D[(Q(s, a)− β̂πQ̂k(s, a))2] (1) where π̂β and π are the behaviour policy and learnt policy separately, µ is any arbitrary policy different from π̂β , and α the factor for trade-off of conservatism. Constrained Policy Optimization. To address the issues of distribution drift between learning policy and behaviour policy, one approach is to constrain the learning policy close to the behaviour policy (Bai et al., 2021; Wu et al., 2019; Nair et al., 2020; Levine et al., 2020; Fujimoto et al., 2019). Here we take Advantage Weighted Regression(Peng et al. (2019b); Nair et al. (2020)) which adopts an implicit KL divergence to constrain the distance of policies as example: πk+1 ← argmax π Es,a∼D [ log π(a|s) 1 Z(s) exp ( 1 λ Aπ k (s, a) )] (2) where Aπ k is the advantage of policy πk, and Z the normalization constant for s. Model-based Offline RL. In RL, the model is an approximation of the MDP M . We denote a model as M̂ := (S,A, P̂ , r̂, ρ, γ), where P̂ and r̂ are approximations of P and r respectively. In the setting of offline RL, the model is used to roll out and augment data (Yu et al., 2020; 2021) or act as a surrogate of real environment to interact with agent (Kidambi et al., 2020). In this paper, we use model to sample the next states that are approximately reachable from the dataset. 3 CONSERVATIVE STATE VALUE ESTIMATION In the offline setting, the value overestimation is a major problem resulting in failure of learning a reasonable policy (Levine et al., 2020; Fujimoto et al., 2019). In contrast to prior works(Kumar et al., 2020; Yu et al., 2021) that get conservative value estimation via penalizing Q function for OOD state-action pairs , we directly penalize V function for OOD states. Our approach provides several novel theoretic results that allow better trade-off of conservative value estimation and policy improvement. All proofs of our theorems can be found in Appendix A. 
3.1 CONSERVATIVE OFF-POLICY EVALUATION Our approach is an alternative approach to CQL(Kumar et al., 2020). Instead of learning a conservative Q function, we aim to conservatively estimate the value V π(s) of a target policy π given a dataset D to avoid overestimation of out-of-distribution states. To achieve this, we penalize the V-values evaluated on states that is more likely to be out-of-distribution and pushing up the V-values on states that is in the distribution of the dataset, which is achieved through the following iteration: V̂ k+1 ← argmin V 1 2 Es∼du(s)[(B̂πV̂ k(s)− V (s))2] + α(Es′∼d(s)V (s′)− Es∼du(s)V (s)) (3) where du(s) is the discounted state distribution of D, d(s) is any state distribution, and B̂π is the empirical Bellman operator (see appendix for more details). Considering the setting without function approximation, by setting the derivative of Eq. 3 as zero, the V function found by approximate dynamic programming in iteration k can be obtained: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1], ∀s, k. (4) Denote the function projection on V̂ k in Eq. 4 as T π . We have Lemma 1, and thus V̂ k converges to a unique fixed point. Lemma 1. For any d with supp d ⊆ supp du, T π is a γ-contraction in L∞ norm. Theorem 1. For any d with supp d ⊆ supp du (d ̸= du), with a sufficiently large α (i.e., α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| /Es∼d(s)[ d(s)du(s) − 1])), the expected value of the estimation V̂ π(s) under d(s) is the lower bound of the true value, that is: Es∼d(s)[V̂ π(s)] ≤ Es∼d(s)[V π(s)]. V̂ π(s) = limk→∞ V̂ k(s) is the converged value estimation with the datasetD, and Cr,t,δRmax (1−γ) √ |D(s,a)| is related to sampling error introduced by the use empirical rather than Bellman operator. If the counts of each state-action pair is greater than zero, |D(s, a)| denotes a vector of size |S||A| containing counts for each state-action pair. If the counts of this state action pair is zero, the corresponding 1√ |D(s,a)| is large but finite value. We assume that with probability ≥ 1 − δ, the sampling error is less than Cr,t,δRmax (1−γ) √ |D(s,a)| , while Cr,t,δ is a constant (See appendix for more details.) Note that if the sampling error is ignorable, α > 0 can guarantee the lower bound results. Theorem 2. The expected value of the estimation V̂ π(s) under the state distribution of the original dataset is the lower bound of the true value plus the term of irreducible sampling error, that is: Es∼du(s)[V̂ π(s)] ≤ Es∼du(s)[V π(s)] + Es∼du(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| . , where Pπ refers to the transition matrix coupled with policy π (see Appendix for details). Now we show that, during iterations, the gap between the value of in-distribution state and out-ofdistribution state in the estimated V-function is higher than in the true V-functions. Theorem 3. At any iteration k, with a large enough α, our method expands the difference in expected V-values under the chosen state distribution and the dataset state distribution, that is: Es∼du(s)[V̂ k(s)]− Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)]− Es∼d(s)[V k(s)]. In the policy extraction part, this property enables our policy to take actions a in state s(s ∼ D) that remains in distribution instead of out of distribution, given that our estimated V-function does not overestimate the erroneous out-of-distribution states compared to the in-distribution states. Now we present four remarks to explain how the above theorems guide applications of Eq. 3 in offline RL algorithms. Remark 1. In Eq. 
3, if d = du, the penalty on out-of-distribution states degenerates, which means that the policy should not reach states with low support in data, and consequently never explore the unseen actions at the state. Indeed, AWAC Nair et al. (2020) adopts this setting. We show that with proper choice of d different from du, our method performs better than AWAC in practice. Remark 2. Theorem 2 implies that under du, the marginal state distribution of data, the expectation estimated value of π should either be lower than the true value, or higher than the true value but within a threshold. This fact motivates our advantage weighted policy update method in Eq. 11. Remark 3. Theorem 1implies that under d, say the discounted state distribution of any policy, the expectation estimated value of π should lower bounds the true value. This fact motivates our policy improvement method of unifying advantage weighted update with a bonus exploration in Eq. 12. Remark 4. Theorem 3 states Es∼d(s)[V k(s)] − Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)] − Es∼du(s)[V̂ k(s)]. That is to say, under the distribution d, the amount of value under-estimation in expectation is larger than that of the behaviour policy du. With proper choice of d, it is safe and effective to derive a new and potentially better policy with V̂ k. Our algorithm choose the distribution of model predictive next-states as d, i.e., s′ ∼ d is implemented by s ∼ D, a ∼ π(·|s), s′ ∼ P̂ (·|s, a), which effectively builds a soft ’river’ with low values around the dataset. Comparison with prior work: CQL (Eq.1), which penalizes Q-function of OOD actions on states in history data, guarantees the lower bounds on state-wise value estimation: V̂ π(s) = Eπ(a|s)(Q̂ π(s, a)) ≤ Eπ(a|s)(Qπ(s, a)) = V π(s) for all s ∈ D. COMBO, which penalizes Qfunction of OOD states and actions of an interpolation of history data and model-based roll-outs, guarantees the lower bound of state value expectation: Es∼µ0 [V̂ π(s)] ≤ Es∼µ0 [V π(s)] where µ0 is the initial state distribution (Remark 1, section A.2 of COMBO Yu et al. (2021)); which is a special case of our result in Theorem 1 when d = µ0. Although both CSVE and COMBO intend to get better performance by releasing conservative estimation guarantee from the state-wise values to expectation of state values, CSVE get the same lower bounds but under more general state distribution. This provide more flexible space for algorithm design, and it is also one main reason of penalizing on V rather than Q. By controlling distance of d to the behaviour policy’s discounted state distribution dβ , CSVE has the potential of more performance improvement. Note that bounding E[V [s]], rather than state-wise V (s), would introduce a more adventurous policy, which would achieves better performance in in-distribution states and have more risk behaviors in OOD states. To deal with that limitation, we introduce a deep ensemble dynamic model to sample the OOD states for better estimation. 3.2 SAFE POLICY IMPROVEMENT GUARANTEES Following prior works (Laroche et al. (2019); Kumar et al. (2020); Yu et al. (2021)), we show that our method has the safe policy improvement guarantees against the data-implied behaviour policy. We first show that our method optimizes a penalized RL empirical objective: Theorem 4. 
Let V̂ π be the fixed point of Equation 3, then π∗(a|s) = argmaxπ V̂ π(s) is equivalently obtained by solving: π∗(a|s)← argmax π J(π, M̂)− α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (5) Building upon Theorem 4, we show that our method provides a ζ-safe policy improvement over πβ Theorem 5. Let π∗(a|s) be the policy obtained in Equation 5. Then, it is a ζ-safe policy improvement over π̂β in the actual MDP M, i.e., J(π∗,M) ≥ J(π̂β ,M) − ζ with high probability 1- δ, where ζ is given by: ζ = 2( Cr,δ 1−γ + γRmaxCT,δ (1−γ)2 )Es∼dπM̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s)πβ(a|s) )]− (J(π ∗, M̂)− J(π̂β , M̂))︸ ︷︷ ︸ ≥α 11−γ Es∼dπ M̂ (s)[ d(s) du(s) −1] . (6) 4 METHODOLOGY In this section, we propose a practical Actor-Critic method for computing conservative value estimation function by approximately solving Equation 3 and taking advantage weighted policy updates. It is mainly motivated by the theoretic results, as explained by the four remarks in section 3.1. Besides, the full algorithm of deep learning implementation is presented in Appendix B. 4.1 CONSERVATIVE VALUE ESTIMATION Given the access to a dataset D collected by some behaviour policy πβ , our aim is to estimate the value function V π for a target policy π. As stated in section 3, to prevent the value overestimation, we instead learn a conservative value function V̂ π that lower bounds the real values of π by adding a penalty on out-of-distribution states into the flow of Bellman projections. Our method consists of the following iterative updates of Equations 7- 9, where Q̂k is the target network of Q̂k. V̂ k+1 ← argmin V LπV (V ; Q̂ k) = α ( Es∼D,a∼π(·|s),s′∼P̂ (s,a)[V (s ′)]− Es∼D[V (s)] ) + Es∼D [ (Ea∼π(·|s)[Q̂k(s, a)]− V (s))2 ] (7) Q̂k+1 ← argmin Q LπQ(Q; V̂ k+1) = Es,a,s′∼D [( r(s, a) + γV̂ k+1(s′)−Q(s, a) )2] (8) Q̂k+1 ← ωQ̂k + (1− ω)Q̂k+1 (9) The RHS of Eq. 7 is an approximation of Eq. 3, where the first term gives out-of-distribution states a penalty, and the second term follows the definition of V values and Q values. In Eq. 8, the RHS is TD errors estimated on transitions in the dataset D. Note that the target term here uses the sum of the immediate reward r(s, a) and the next step state’s value V̂ k+1(s′). In Eq. 9, the target Q values are updated with a soft interpolation factor ω ∈ (0, 1). Q̂k changes slower than Q̂k, which makes the TD error estimation in Eq. 7 more stable. Constrained policy. Note that in RHS of Eq. 7, we use a ∼ π(·|s) in expectation. To safely estimate the target value of V (s) by Ea∼π(·|s)[Q̂(s, a)], almost always requires supp(π(·|s)) ⊂ supp(πβ(·|s)). We achieves this by the advantage weighted policy update, which forces π(·|s) have significant probability mass on actions taken by πβ in data, as detailed in section 3.2. Model-based OOD state sampling. In Eq. 7, we implement the state sampling process s′ ∼ d in Eq. 3 as a flow of {s ∼ D; a ∼ π(a|s); s′ ∼ P̂ (s′|s, a)}, that is the distribution of the predictive next-states from D by following π. It is beneficial in practice. On one hand, this method is efficient to sample only the states that are approximately reachable fromD by one step, rather than to sample the whole state space. On the other hand, we only need the model to do one-step prediction such that no bootstrapped errors due to long horizon are introduced. Following previous work (Janner et al., 2019; Yu et al., 2020; 2021), we implement the probabilistic dynamics model using an ensemble of deep neural networks {pθ1, . . . , pθB}. 
Each neural network produces a Gaussian distribution over the next state and reward: P iθ(st+1, r|st, at) = N (uiθ(st, at), σiθ(st, at)). Adaptive penalty factor α. The pessimism level is controlled by the parameter α ≥ 0. In practice, we set α adaptive during training as follows, which is similar as that in CQL(Kumar et al. (2020)) max α≥0 [α(Es′∼d[Vψ(s′)]− Es∼D[Vψ(s)]− τ)] (10) , where τ is a budget parameter. If the expected difference in V-values is less than τ , α will decrease. Otherwise, α will increase and penalize the out of distribution state values more aggressively. Discussion: As stated in former sections, our method focuses on estimating conservative state value for learning a policy. The effectiveness of adding conservatism on V function are two folds. First, penalizing V values is with a smaller hypothesis space than penalizing Q, which would reduce the computation complexity. Second, penalizing V values can achieve a more relaxed lower bound than penalizing Q with ignoring the explicitly marginalization on Q values. A more relaxed lower bound guarantees more opportunities on achieving better policy. 4.2 ADVANTAGE WEIGHTED POLICY UPDATES After learning the conservative V̂ k+1 and Q̂k+1 (or V̂ π and Q̂π when converged), we improve the policy by the following advantage weighted policy update (Nair et al., 2020). π ← argmin π′ Lπ(π ′) = −Es,a∼D [ log π′(a|s) exp ( βÂk+1(s, a) )] where Âk+1(s, a) = Q̂k+1(s, a)− V̂ k+1(s). (11) Eq. 11 updates the policy π to amounts of weighted maximum likelihood which are computed by re-weighting state-action samples in D with estimated advantage Âk+1. As discussed in the AWAC (Nair et al., 2020), this method avoids explicitly estimating the behaviour policy and its resulted sampling errors which is an import issue in the offline RL setting (Kumar et al., 2020). Implicit policy constraints. We adopt the advantage weighted policy updates which imposes an implicit KL divergence constraints between π and πβ . This policy constraint is necessary to guarantee that the next state s′ in Equation 7 can be safely generated through policy π. As derived in Nair et al. (2020) (Appendix A), the Eq. 11 is an parametric solution of the following problem: max π′ Ea∼π′(·|s)[Âk+1(s, a)] s.t. DKL(π′(·|s) ∥ πβ(·|s)) ≤ ϵ, ∫ a π′(a|s)da = 1. Note that DKL(π′ ∥ πβ) is an reserve KL divergence with respect to π′, which is mode-seeking ((Shlens, 2014)). When treated as Lagrangian it forces π′ allocate its probability mass to the maximum likelihood supports of πβ , re-weighted by the estimated advantage. In other words, for the space of A where πβ(·|s) has no samples, π′(·|s) has almost zero probability mass too. Bonus of Exploration on Near States. As suggested by remarks in Section 3.1, in practice allowing the policy explore the predicated next states transition (s ∼ D) following a ∼ π′(·|s)) leads to better test performance. With this kind of exploration, the policy is updated as follows. π ← argmin π′ L+π (π ′) = Lπ(π ′)− λEs∼D,a∼π′(s),s′∼P̂ (s,a) [ r(s, a) + V̂ k+1(s′) ] (12) The second term is an approximation to Es∼dπ(s)[V π(s)], while the first term is the approximation ofEs∼du(s)[V π(s)]. While the choice of λ is ultimately just a hyper-parameter, we balance between optimistic policy optimization (in maximizing V) and constrained policy update (the first term) by adjusting λ. 5 EXPERIMENTS The primary goal of this section is to investigate whether the proposed tighter conservative value estimation leads to performance improvement. 
Besides, we would like to ascertain when further exploration has benefits and how well CSVE performs compared with SOTA algorithms. We evaluate our method on classical continuous control tasks of Gym(Brockman et al., 2016) and Adroit(Rajeswaran et al., 2017) in the standard D4RL (Fu et al. (2020)) benchmark. The Gym control tasks include HalfCHeetah, Hopper and Walker2D, each with 5 datasets collected by following different types of policies (random, medium, medium-replay, medium-expert, and expert). The Adroid tasks include Pen, Hammer, Door and Relocate, each with 3 dataset collected by different policies (human, cloned, and expert). Our method, namely CSVE, and the compared baselines are CQL(Kumar et al., 2020), COMBO(Yu et al., 2021), AWACNair et al. (2020), PBRL(Bai et al., 2021) and other SOTA algorithms TD3BC(Fujimoto & Gu, 2021), UWAC(Wu et al., 2021), IQL(Kostrikov et al., 2021b), BEAR(Kumar et al., 2019)) whose performance results are public or have high-quality open implementations. CQL which estimates the conservative Q values on state-action pairs rather than states, is the direct comparing method to ours. COMBO also lower bounds the estimated V function. AWAC is one special case of our Eq. 3 when d = du. PBRL is a very strong performant in offline RL, but is quite costly on computation since it uses the ensemble of hundreds of sub-models. 5.1 OVERALL PERFORMANCE We first test on the Gym control tasks. We train our methods for 1 million steps and report the final evaluation performance. The overall results are shown in Table 1. Compared to CQL, our method has better performance on 11 of 15 tasks and similar performance on others. In particular, our method shows consistent advantage on the datasets that generated by following random or suboptimal policies (random and medium). Compared to AWAC, our method has better performance on 9 of 15 tasks and comparable performance on others, which demonstrates the effect of our further exploration beyond cloning the behaviour policy. In particular, our method shows an obvious Table 4 in Bai et al. (2021). advantage in extrating the best policy on data of mixed policy (Medium Expert) while AWAC can not. Compared to COMBO, our method has better performance on 6 out 12 tasks and comparable performance or slightly worse on others, which demonstrates the effect of our better bounds on V. In particular, our method shows an obvious advantage in extrating the best policy on medium and medium-expert tasks. In 9 tasks evaluated, our method gets higher score than IQL in 7 of them, and has similar performance in the other tasks. Finally, our method performs close to PBRL, even PBRL has almost orders of more model capacity and computation cost. We now evaluate our method on the Adroit tasks. For CSVE, we report the final evaluation results after training in 0.1 million steps. The full results are reported in Table2. Copared to IQL, our method performs better in 8 out of 12 tasks, and performs similarly in the other 4 tasks. For the expert datasets, all methods including simple BC (behaviour cloning) can perform well, among which ours is the most competitive on all four tasks. For human and cloned datasets, almost all methods fail to learn effective policies on three tasks except the Pen task. For the Pen task, CSVE is the only one that succeeds to learn a good policy on the human dataset, while it can learn a medium policy on the cloned dataset as BC and PBRL. 
5.2 SENSITIVENESS OF HYPER-PARAMETERS We anaylyze hyper-parameter β, which trades off between behaviour cloning and policy optimization. For smaller values, the objective behaves similarly to behavior cloning (weights are close for all actions), while for larger values, it attempts to recover the maximum of the Q-function. To quantitatively analyze its effect, we test different β from {0.1, 3, 10} in mujoco tasks with the medium-type datasets, whose results are shown in Fig. 1. We can see that λ has effect on the policy performance during training. Empirically, we found out that in general, β = 3.0 is suitable for such medium type datasets. Besides, in practice, by default we use β = 3.0 for random and medium task while 0.1 for medium-replay, medium-expert and expert datasets. 6 RELATED WORK Offline RL (Fujimoto et al., 2019; Levine et al., 2020) aims to learn a reasonable policy from a static dataset collected by arbitrary policies, without further interactions with the environment. Compared to interactive RL, offline RL suffers two critical inherent issues, i.e., the distribution drift introduced by off-policy learning and the out-of-distribution extrapolation in value estimation (Ostrovski et al., 2021; Levine et al., 2020). The common mind of offline RL algorithms is to incorporate conservatism or regularization into the online RL algorithms. Here we briefly review the prior work with a comparison to ours. Conservative value estimation: Prior offline RL algorithms regularize the learning policy close to the data or explicitly estimated behaviour policy) and penalize the exploration to the out-ofdistribution region, via distribution correction estimation (Dai et al., 2020; Yang et al., 2020), policy constraints with support matching (Wu et al., 2019) and distributional matching Fujimoto et al. (2019); Kumar et al. (2019), applying policy divergence based penalty on Q-functions (Kostrikov et al., 2021a; Wang et al., 2020) or uncertainty-based penalty (Agarwal et al., 2020) on Q-functions and conservative Q-function estimation (Kumar et al., 2020). Besides, model-based algorithms (Yu et al., 2020) directly estimate dynamics uncertainty and translated it into reward penalty. Different from these prior work that imposes conservatism on state-action pairs or actions, ours directly does such conservative estimation on states and requires no explicit uncertainty quantification. With learned conservative value estimation, an offline policy can be learned via implicit derivation from a state-action joint distribution or in Q-Learning and actor-critic framework. In this paper, our implementation adopts the method proposed in AWAC (Nair et al., 2020; Peng et al., 2019a). Model-based algorithms: Model-based offline RL learns the dynamics model from the static dataset and uses it to quantify uncertainty (Yu et al., 2020), data augmentention (Yu et al., 2021) with roll-outs, or planning (Kidambi et al., 2020; Chen et al., 2021). Such methods typically rely on wide data coverage when planning and data augmentation with roll-outs, and low model estimation error when estimating uncertainty, which is often difficult to satisfy in reality and leads to policy instability. Instead, we use the model to sample the next-step states only reachable from data, which has no such strict requirements on data coverage or model bias. Theoretical results: Our theoretical results are derived from conservative Q-value estimation (CQL) and safe policy improvement (Laroche et al., 2019). 
Besides, COMBO (Yu et al., 2021) gives a result of conservative but tighter value estimation than CQL, when dataset is augmented with model-based roll-outs. Compared to our result, COMBO’s lower bounds additionally assume same initial state distribution which may not always satisfy in continuous control. 7 DISCUSSION In this paper, we propose a new approach for offline RL based on conservative value estimation on states and discussed how the theoretical results could lead to the new RL algorithms. In particular, we developed a practical actor-critic algorithm, in which the critic does conservative state value estimation by incorporating the penalty of the model predictive next-states into Bellman iterations, and the actor does the advantage weighted policy updates with a bonus of exploring states with conservative values. Experimental evaluation shows that our method performs better than alternative methods based on conservative Q-function estimation and is competitive among the SOTA methods, confirming our theoretical analysis well. Moving forward, we hope to explore the design of more powerful algorithms based on conservative state value estimation. A PROOFS We first redefine notation for clarity and then provide the proofs of the results in the main paper. Notation. Let k ∈ N denote an iteration of policy evaluation(in Section 3.2). V k denotes the true, tabular (or functional) V-function iterate in the MDP, without any correction. V̂ k denotes the approximate tabular (or functional) V-function iterate. The empirical Bellman operator can be expressed as follows: (B̂πV̂ k)(s) = Ea∼π(a|s)r̂(s, a) + γ ∑ s′ Ea∼π(a|s)P̂ (s ′|s, a)[V̂ k(s′)] (13) where r̂(s, a) is the empirical average reward obtained in the dataset when performing action a at state s . The true Bellman operator can be expressed as follows: (BπV k)(s) = Ea∼π(a|s)r(s, a) + γ ∑ s′ Ea∼π(a|s)P (s ′|s, a)[V k(s′)] (14) Now we first prove that the iteration in Eq.3 has a fixed point. Assume state value function is lower bounded, i.e., V (s) ≥ C ∀s ∈ S, then Eq.3 can always be solved with Eq.4. Thus, we only need to investigate the iteration in Eq.4. Denote the iteration as a function operator T π on V . Suppose supp d ⊆ supp du, then the operator T π is a γ-contraction in L∞ norm where γ is the discounting factor. Proof of Lemma 1: Let V and V ′ are any two state value functions with the same support, i.e., suppV = suppV ′. |(T πV − T πV ′)(s)| = ∣∣∣∣(B̂πV (s)− α[ d(s)du(s) − 1])− (B̂πV ′(s)− α[ d(s)du(s) − 1]) ∣∣∣∣ = ∣∣∣B̂πV (s)− B̂πV ′(s)∣∣∣ =|(Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V (s′)) − (Ea∼π(a|s)r̂(s, a) + γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)V ′(s′))| =γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ||T πV − T πV ′||∞ =max s |(T πV − T πV ′)(s)| =max s γ ∣∣∣∣∣Ea∼π(a|s) ∑ s′ P̂ (s′|s, a)[V (s′)− V ′(s′)] ∣∣∣∣∣ ≤γEa∼π(a|s) ∑ s′ P̂ (s′|s, a)max s′′ |V (s′′)− V ′(s′′)| =γmax s′′ |V (s′′)− V ′(s′′)| =γ||(V − V ′)||∞ We present the bound on using empirical Bellman operator compared to the true Bellman operator. Following previous work Kumar et al. (2020), we make the following assumptions that: Pπ is the transition matrix coupled with policy, specifically, PπV (s) = Ea′∼π(a′|s′),s′∼P (s′|s,a′)[V (s′)] Assumption 1. 
∀s, a ∈ M, the following relationships hold with at least (1 − δ) (δ ∈ (0, 1)) probability, |r − r(s, a)| ≤ Cr,δ√ |D(s, a)| , ||P̂ (s′|s, a)− P (s′|s, a)||1 ≤ CP,δ√ |D(s, a)| (15) Under this assumption, the absolute difference between the empirical Bellman operator and the actual one can be calculated as follows: |(B̂π)V̂ k − (Bπ)V̂ k)| = Ea∼π(a|s)|r − r(s, a) + γ ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a)− P (s′|s, a))[V̂ k(s′)]| (16) ≤ Ea∼π(a|s)|r − r(s, a)|+ γ| ∑ s′ Ea′∼π(a′|s′)(P̂ (s ′|s, a′)− P (s′|s, a′))[V̂ k(s′)]| (17) ≤ Ea∼π(a|s) Cr,δ + γCP,δ2Rmax/(1− γ)√ |D(s, a)| (18) Thus, the estimation error due to sampling error can be bounded by a constant as a function of Cr,δ and Ct,δ . We define this constant as Cr,T,δ . Thus we obtain: ∀V, s ∈ D, |B̂πV (s)− BπV (s)| ≤ Ea∼π(a|s) Cr,t,δ (1− γ) √ |D(s, a)| (19) Next we provide an important lemma. Lemma 2. (Interpolation Lemma) For any f ∈ [0, 1], and any given distribution ρ(s), let df be an f-interpolation of ρ and D, i.e.,df (s) := fd(s) + (1 − f)ρ(s), let v(ρ, f) := Es∼ρ(s)[ρ(s)−d(s)df (s) ], then v(ρ, f) satisfies v(ρ, f) ≥ 0. The proof can be found in Yu et al. (2021). By setting f as 1, we have Es∼ρ(s)[ ρ(s)−d(s) d(s) ] > 0. Proof of Theorem 1: The V function of approximate dynamic programming in iteration k can be obtained as: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1] ∀s, k (20) The fixed point: V̂ π(s) = B̂πV̂ π(s)− α[ d(s) du(s) − 1] ≤ BπV̂ π(s) + Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α[ d(s) du(s) − 1] (21) Thus we obtain: V̂ π(s) ≤ V π(s) + (I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − α(I − γPπ)−1[ d(s) du(s) − 1] (22) , where Pπ is the transition matrix coupled with the policy π and PπV (s) = Ea′∼π(a′|s′)s′∼P (s′|s,a′)[V (s ′)]. Then the expectation of V π(s) under distribution d(s) satisfies: Es∼d(s)V̂ π(s) ≤Es∼d(s)(V π(s)) + Es∼d(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| −αEs∼d(s)(I − γPπ)−1[ d(s) du(s) − 1])︸ ︷︷ ︸ >0 (23) When α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| Es∼d(s)[ d(s) du(s) −1]) , Es∼d(s)V̂ π(s) ≤ Es∼d(s)(V π(s)). Proof of Theorem 2: The expectation of V π(s) under distribution d(s) satisfies: Es∼du(s)V̂ π(s) ≤Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| − αEs∼du(s)(I − γP π)−1[ d(s) du(s) − 1]) (24) Noticed that the last term:∑ s∼du(s) ( df (s) du(s) − 1) = ∑ s du(s)( df (s) du(s) − 1) = ∑ s df (s)− ∑ s du(s) = 0 (25) We obtain that: Es∼du(s)V̂ π(s) ≤ Es∼du(s)(V π(s)) + Es∼du(s)(I − γP π)−1Ea∼π(a|s) Cr,t,δRmax (1− γ) √ |D(s, a)| (26) Proof of Theorem 3: Recall that the expression of the V-function iterate is given by: V̂ k+1(s) = Bπ k V̂ k(s)− α[ d(s) du(s) − 1]∀s, k (27) Now the expectation of V π(s) under distribution du(s) is given by: Es∼du(s)V̂ k+1(s) = Es∼du(s) [ Bπ k V̂ k(s)− α[ d(s) du(s) − 1] ] = Es∼du(s)B πk V̂ k(s) (28) The expectation of V π(s) under distribution d(s) is given by: Es∼d(s)V̂ k+1(s) = Es∼d(s)Bπ k V̂ k(s)−α[ d(s) du(s) −1] = Es∼d(s)Bπ k V̂ k(s)−αEs∼d(s)[ d(s) du(s) −1] (29) Thus we can show that: Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) = Es∼du(s)B πk V̂ k(s)− Es∼d(s)Bπ k V̂ k(s) + αEs∼d(s)[ d(s) du(s) − 1] = Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s)− Es∼d(s)[Bπ k (V̂ k − V k)] + Es∼du(s)[B πk(V̂ k − V k)] + αEs∼d(s)[ d(s) du(s) − 1] (30) By choosing α: α > Es∼d(s)[Bπ k (V̂ k − V k)]− Es∼du(s)[Bπ k (V̂ k − V k)] Es∼d(s)[ d(s) du(s) − 1] (31) We have Es∼du(s)V̂ k+1(s)− Es∼d(s)V̂ k+1(s) > Es∼du(s)V k+1(s)− Es∼d(s)V k+1(s) hold. 
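As a side note, the fixed-point iteration analysed above is easy to probe numerically. The snippet below is a small, self-contained sanity check written by us (it is not part of the paper's proofs): it builds a random tabular MDP already coupled with the evaluated policy, runs the penalized iteration of Eq. 4 assuming no sampling error, verifies the γ-contraction of Lemma 1, and prints the expectations appearing in Theorems 1 and 2 so they can be inspected for this toy instance. All names and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma, alpha = 6, 0.9, 5.0

# A random tabular MDP already coupled with the evaluated policy pi:
# r_pi(s) is the expected reward and P_pi[s, s'] the state transition matrix under pi.
P_pi = rng.dirichlet(np.ones(S), size=S)
r_pi = rng.uniform(0.0, 1.0, size=S)
V_true = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)   # true V^pi

du = rng.dirichlet(np.ones(S))          # dataset state distribution
d = rng.dirichlet(np.ones(S))           # penalizing state distribution (supp d within supp du)
penalty = alpha * (d / du - 1.0)

def T(V):
    """One step of the conservative iteration of Eq. 4 (no sampling error assumed)."""
    return r_pi + gamma * P_pi @ V - penalty

# Lemma 1: T is a gamma-contraction in the sup norm (the penalty cancels in the difference).
V1, V2 = rng.normal(size=S), rng.normal(size=S)
assert np.max(np.abs(T(V1) - T(V2))) <= gamma * np.max(np.abs(V1 - V2)) + 1e-9

# Run the iteration to its fixed point and compare expectations under d and du.
V_hat = np.zeros(S)
for _ in range(1000):
    V_hat = T(V_hat)
print("E_d [V_hat], E_d [V_true]:", d @ V_hat, d @ V_true)    # compare with the claim of Theorem 1
print("E_du[V_hat], E_du[V_true]:", du @ V_hat, du @ V_true)  # compare with the claim of Theorem 2
```

The printed expectations are only illustrative for this particular random instance; the theorems state the corresponding inequalities for sufficiently large α under the assumptions above.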
Proof of Theorem 4: V̂ is obtained by solving the recursive Bellman fixed point equation in the empirical MDP, with an altered reward, r(s, a) − α[ d(s)du(s) − 1], hence the optimal policy π ∗(a|s) obtained by optimizing the value under Eq. 4. Proof of Theorem 5: The proof of this statement is divided into two parts. We first relates the return of π∗ in the empirical MDP M̂ with the return of πβ , we can get: J(π∗, M̂)− α 1 1− γ Es∼dπ∗ M̂ (s)[ d(s) du(s) − 1] ≥ J(πβ , M̂)− 0 = J(πβ , M̂) (32) The next step is to bound the difference between J(πβ , M̂) and J(πβ ,M) and the difference between J(π∗, M̂) and J(π∗,M). We quote a useful lemma from Kumar et al. (2020) (Lemma D.4.1): Lemma 3. For any MDPM , an empirical MDP M̂ generated by sampling actions according to the behavior policy πβ and a given policy π, |J(π, M̂)−J(π,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s) πβ(a|s) )] (33) Setting π in the above lemma as πβ , we get: |J(πβ , M̂)− J(πβ ,M)| ≤ ( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] (34) , given that √ Ea∼π∗(a|s)[ π∗(a|s) πβ(a|s) ] is a pointwise upper bound of √ Ea∼πβ(a|s)[ πβ(a|s) πβ(a|s) ](Kumar et al. (2020)). Thus we get, J(π∗, M̂) ≥ J(πβ , M̂)− 2( Cr,δ 1− γ + γRmaxCT,δ (1− γ)2 )Es∼dπ∗ M̂ (s)[ √ |A|√ |D(s)| √ Ea∼π∗(a|s)( π∗(a|s) πβ(a|s) )] + α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (35) , which completes the proof. Here, the second term is sampling error which occurs due to mismatch of M̂ and M ; the third term denotes the increase in policy performance due to CSVE in M̂ . Note that when the first term is small, the smaller value of α are able to provide an improvement compared to the behavior policy. B CSVE ALGORITHM Now we put all in section 4 together and describe the practical deep offline reinforcement learning algorithm. In particular, the dynamic model model, value functions and policy are all parameterized with deep neural networks and trained via stochastic gradient decent methods. The pseudo code is given in Alg. 1. Algorithm 1: CSVE Algorithm Input : Data D = {(s, a, r, s′)} Parameters: Qθ, Vψ , πϕ, Qθ, Mν Hyperparameters: α, λ, learning rates ηθ, ηψ, ηϕ, ω begin // Train transition model with the static dataset D 1 Mν ← train(D); // Train the conservative value and policy functions 2 Initialize function parameters θ0, ψ0, ϕ0, θ0 = θ0; 3 foreach step k = 1→ N do 4 ψk ← ψk−1 − ηψ∇ψLπV (Vψ; Q̂θk); 5 θk ← θk−1 − ηθ∇θLπQ(Qθ; V̂ψk); 6 ϕk ← ϕk−1 − ηϕ∇ϕL+π (πϕ); 7 θk ← ωθk−1 + (1− ω)θk; C IMPLEMENTATION DETAIL We implement our method based on an offline deep reinforcement learning library d3rlpy (Seno & Imai, 2021). The code is available at https://github.com/iclr20234089/code4098. The detailed hyper-parameters are provided in Table 3 D EXTENDED EXPERIMENTAL RESULTS D.1 MORE EXPERIMENTS ON HYPER-PARAMETERS EFFECT We also investigated λ values of {0.0, 0.1, 0.5, 1.0} in the medium tasks. The results are shown in Fig. 4. D.2 COMPARISON WITH PESSIMISM ON Q We implement an ablation version of our method–penalty-Q, which directlly penalize the value of state action pairs. Specifically, we change the critic loss function into : Q̂k+1 ← argmin Q LπQ(Q; Q̂ k) = α ( Es∼D,a′∼π(·|s)[Q(s, a′)]− Es,a∼D[Q(s, a)] ) + Es,a,s′∼D [( r(s, a) + γQ̂k+1(s′, a′)−Q(s, a) )2] (36) We use the same policy extraction method and test this method on the medium-task, in which the data is collected using a medium-performed policy. 
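To make the ablation concrete before turning to its results, here is a rough, self-contained sketch (ours, not the authors' released code) of the two critic losses being compared: Eq. 7's penalty on states versus Eq. 36's penalty on state-action pairs. The networks, the fake minibatch, and the single-sample estimates of the expectations over π and the dynamics model are placeholders; only the structure of the two losses follows the equations.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, batch = 11, 3, 256
alpha, gamma = 0.5, 0.99

V = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))
Q = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q = lambda s_, a_: Q(torch.cat([s_, a_], dim=-1))

# A fake minibatch standing in for (s, a, r, s') ~ D, plus placeholder policy / model samples.
s, a = torch.randn(batch, obs_dim), torch.randn(batch, act_dim)
r, s_next = torch.randn(batch, 1), torch.randn(batch, obs_dim)
a_pi = torch.randn(batch, act_dim)        # a ~ pi(.|s)
a_pi_next = torch.randn(batch, act_dim)   # a' ~ pi(.|s')
s_model = torch.randn(batch, obs_dim)     # s' ~ P_hat(.|s, a_pi): model-predicted (possibly OOD) states

# Penalty-V critic loss (Eq. 7): push V down on model-predicted states, up on dataset states,
# and regress V(s) towards a (target-network) estimate of E_{a~pi} Q(s, a).
loss_penalty_v = alpha * (V(s_model).mean() - V(s).mean()) \
                 + ((q(s, a_pi).detach() - V(s)) ** 2).mean()

# Penalty-Q ablation loss (Eq. 36): the analogous penalty applied to state-action pairs.
with torch.no_grad():
    target_q = r + gamma * q(s_next, a_pi_next)   # stand-in for the target-network term
loss_penalty_q = alpha * (q(s, a_pi).mean() - q(s, a).mean()) \
                 + ((target_q - q(s, a)) ** 2).mean()
print(loss_penalty_v.item(), loss_penalty_q.item())
```

In the actual implementation the expectations over π and P̂ are estimated with samples from the learned policy and the ensemble dynamics model, and target networks supply the bootstrapped terms.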
In all three tasks, the performance of penalty-Q is worse than that of the original implementation, its penalty-V counterpart. When the penalty is imposed on state-action pairs, as illustrated by our theoretical discussion, the estimated Q value tends to pointwise lower-bound the true Q value, which results in a more conservative and thus worse policy. In contrast, when we penalize V, the estimated value function only bounds the expectation of the true V function, which results in a more flexible and better-performing policy. D.3 RELATIONSHIP BETWEEN MODEL BIAS AND FINAL PERFORMANCE As stated in the main paper, compared to typical model-based offline RL algorithms, CSVE is insensitive to model bias. To understand this quantitatively, we now investigate the effect of model bias on performance. We use the dynamics model’s average L2 error on transition prediction as a surrogate for model bias. As shown in Fig. 4, in CSVE the model bias has very little effect on RL performance. In particular, for halfcheetah there is no clear effect of model errors on scores, while in hopper and walker2d the scores show only a slight downward trend as errors increase, and the decrease is relatively small. D.4 REPRODUCTION OF COMBO In the main body of this paper, our results for COMBO adopt the results reported in the literature (Rigter et al., 2022). Our goal here is to look in more detail at COMBO’s asymptotic performance evaluated during training. For fairness of comparison, we adopt the official COMBO code provided by the authors and rerun it on the medium datasets of D4RL mujoco v2. Fig. 5 shows the asymptotic performance over 1000 epochs, in which the scores have been normalized by the corresponding SAC performance. We found that in both hopper and walker2d the scores show dramatic fluctuations. The average scores of the last 10 epochs for halfcheetah, hopper and walker2d are 71.7, 65.3 and -0.26 respectively. Besides, we found that even on the D4RL v0 datasets, COMBO’s behaviour is similar. Figure 5: Return of COMBO on D4RL mujoco v2 tasks (average return scores over training epochs for halfcheetah_v2, hopper_v2 and walker2d_v2). D.5 EFFECT OF EXPLORATION NEAR DATASET DISTRIBUTIONS As discussed in Sections 3.1 and 4.2, a proper choice of exploration on the distribution (d) beyond the data (du) should help policy improvement. The factor λ in Eq. 12 controls the trade-off between such ’bonus’ exploration and complying with the data-implied behaviour policy. Let us take the medium-replay datasets to analyze its effect. In the experiments, with fixed β = 0.1, we investigate λ values of {0.0, 0.5, 1.0, 3.0}. As shown in the upper plots of Fig. 6, λ has an obvious effect on policy performance and its variance during training. In general, there is a threshold below which increasing λ leads to performance improvement, while above it further increasing λ hurts performance. For example, with λ = 3.0 on the hopper-medium-replay and walker2d-medium-replay tasks, the performance gets worse than with smaller λ values. The value of λ is task-specific, and we find that its effect is highly related to the loss in Eq. 11, which can be observed by comparing the bottom and upper plots in Fig. 6. Thus, in practice, we can choose a proper λ according to the above loss without online interaction.
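To connect the two hyper-parameters discussed above, the following toy sketch (ours, with placeholder tensors rather than real critic and model outputs) shows where β and λ enter the actor objective: β is the temperature of the advantage-weighted regression term of Eq. 11, and λ scales the exploration bonus of Eq. 12 computed on model-predicted next states.

```python
import torch

beta, lam = 3.0, 0.5   # beta from Eq. 11 (Section 5.2), lambda from Eq. 12 (this section)

# Placeholder per-sample quantities for one minibatch drawn from D.
logp_data_actions = torch.randn(256, requires_grad=True)   # log pi(a|s) for dataset actions
advantage = torch.randn(256)       # A(s, a) = Q(s, a) - V(s), from the critics
reward_pi = torch.randn(256)       # r(s, a) for policy actions
v_next_model = torch.randn(256)    # V(s') on model-predicted next states

# Eq. 11: advantage-weighted regression. Larger beta moves the update away from
# behaviour cloning and towards recovering the maximum of the Q-function.
weights = torch.exp(beta * advantage).clamp(max=100.0)     # clipping the weights is our own choice
loss_awr = -(weights * logp_data_actions).mean()

# Eq. 12: subtract a lambda-scaled bonus for exploring states near the data.
loss_actor = loss_awr - lam * (reward_pi + v_next_model).mean()
loss_actor.backward()
```

In the actual algorithm all of these quantities are produced by the critics, the policy and the dynamics model; the sketch only shows how β and λ trade off cloning the data against optimizing the (conservative) value.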
1. What is the focus of the paper in terms of offline reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach compared to prior works like CQL? 3. Do you have concerns regarding the theoretical analysis and algorithmic design? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions about the experiments that the reviewer would like answered?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper A major challenge in offline RL is the distribution mismatch between the behavior policy generating the offline data and the target policy we want to learn. Such a mismatch can result in an overestimation error in the Bellman update, which can further result in divergence. The paper proposes to learn a conservative state value estimate (in contrast to the existing CQL, which learns conservative action-value estimates). The conservative state-value estimate is learned by subtracting a term from the regular Bellman backup. The term is proportional to the density ratio of the target and behavior distribution. The main theory shows the bound of the expected state value (i.e., expectation over states) under both the offline data distribution and the target policy distribution. In algorithmic design, the conservative state value estimate is used to learn a critic in an actor-critic algorithm. Experiments on Mujoco domains are conducted to show its effectiveness. Strengths And Weaknesses Strength: The paper studies an interesting topic which could be practically useful to a broad range of RL applications. The paper has a reasonably clear presentation. Weaknesses: The basic idea is incremental to CQL. The actions can be marginalized from the CQL’s bound to get the state value bound. But the paper does not discuss any connections. On close examination, both the theoretical result and algorithmic design are highly similar to CQL. The theorems are not informative. First, the theoretical results are bounding E[V(s)], but CQL provides a bound for any state-action pair. Bounding E[V(s)] does not give a guarantee for any state value estimate’s conservativeness. Second, before showing the bound, shouldn’t the convergence be shown first? To me, it is even unclear if the proposed conservative state value Bellman update would converge or not. Experiments. I expect to see clear evidence to show at least three things: 1). The necessity of learning a conservative state value, rather than action-value; 2). The the effectiveness of using the proposed method to learn conservative state value, rather than other methods (for example, using CQL and use the learned policy to get a state value estimate); 3). The comparison to IQL (Offline RL with implicit q learning by Ilya et al.), which has a similar high-level idea. IQL attempts to avoid overestimation by avoiding learning action values. Clarity, Quality, Novelty And Reproducibility Please see above section.
ICLR
Title Effective Offline Reinforcement Learning via Conservative State Value Estimation Abstract Offline RL seeks to learn effective policies solely from historical data, which are expected to perform well in the online environment. However, it faces a major challenge of value over-estimation introduced by the distributional drift between the dataset and the current learned policy, leading to learning failure in practice. The common approach is adding a penalty term to the reward or value estimation in the Bellman iterations, which has given rise to a number of successful algorithms such as CQL. Meanwhile, to avoid extrapolation on unseen states and actions, existing methods focus on conservative Q-function estimation. In this paper, we propose CSVE, a new approach that learns a conservative V-function by directly imposing a penalty on out-of-distribution states. We prove that for the evaluated policy, our conservative state value estimation satisfies: (1) over the state distribution used to sample the penalized states, it lower bounds the true values in expectation, and (2) over the marginal state distribution of the data, it is no more than the true values in expectation plus a constant determined by the sampling error. Further, we develop a practical actor-critic algorithm in which the critic does the conservative value estimation by additionally sampling and penalizing the states around the dataset, and the actor applies advantage-weighted updates to improve the policy. We evaluate on classic continuous control tasks of D4RL, showing that our method performs better than the conservative Q-function learning methods (e.g., CQL) and is strongly competitive among recent SOTA methods. 1 INTRODUCTION Reinforcement Learning (RL), which learns to act by interacting with the environment, has achieved remarkable success in various tasks. However, in most real applications, it is impossible to learn online from scratch as exploration is often risky and unsafe.
Instead, offline RL((Fujimoto et al., 2019; Lange et al., 2012)) avoids this problem by learning the policy solely from historical data. However, the naive approach, which directly uses online RL algorithms to learn from a static dataset, suffers from the problems of value over-estimation and policy extrapolation on OOD (out-of-distribution) states or actions. Recently, conservative value estimation, being conservative on states and actions where there are no enough samples, has been put forward as a principle to effectively solve offline RL ((Shi et al., 2022; Kumar et al., 2020; Buckman et al., 2020). Prior methods, e.g., Conservative Q-Learning (CQL Kumar et al. (2020)), avoid the value over-estimation problem by systematically underestimating the Q values of OOD actions on the states in the dataset. In practice, it is often too pessimistic and thus leads to overly conservative algorithms. COMBO (Yu et al., 2021) leverages a learnt dynamic model to augment data in an interpolation way, and then learn a Q function that is less conservative than CQL and derives a better policy in potential. In this paper, we propose CSVE(Conservative State Value Estimation), a new offline RL approach. Unlike the above traditional methods that estimate conservative values by penalizing Q-function on OOD states or actions, CSVE directly penalizing the V-function on OOD states. We prove in theory that CSVE has tighter bounds on true state values than CQL, and same bounds as COMBO but under more general discounted state distributions which leads to more space for algorithm design. Our main contributions are as follows. • The conservative state value estimation with related theoretical analysis. We prove that it lower bounds the real state values in expectation over any state distribution that is used to sample OOD states, and is up-bounded by the real values in expectation over the marginal state distribution of the dataset plus a constant term depending on only sampling errors. Compared to prior work, it has several advantages to derive a better policy in potential. • A practical Actor-Critic implementation. It approximately estimates the conservative state values in the offline context and improves the policy via advantage weighting updates. In particular, we use a dynamics model to generalize over in-distribution space and sample OOD states that are directly reachable from the dataset. • Experimental evaluation on continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in D4RL (Fu et al., 2020) benchmarks, showing that CSVE performs better than prior methods based on conservative Q-value estimation, and is strongly competitive among main SOTA offline RL algorithms. 2 PRELIMINARIES Offline Reinforcement Learning. Consider the Markov Decision Process M := (S,A, P, r, ρ, γ), which consists of the state space S, the action spaceA, the transition model P : S×A → ∆(S), the reward function r : S × A → R, the initial state distribution ρ and the discount factor γ ∈ (0, 1]. A stochastic policy π : S → ∆(A) takes an action in probability given the current state. A transition is the tuple (st, at, rt, st+1) where at ∼ π(·|st), st+1 ∼ P (·|st, at) and rt = r(st, at). We assume that the reward values satisfy |r(s, a)| ≤ Rmax,∀s, a. A trajectory under π is the random sequence τ = (s0, a0, r0, s1, a1, r1, . . . , sT ) which consists of continuous transitions starting from s0 ∼ ρ. 
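For readers less familiar with the notation, the following toy snippet (ours; all numbers are synthetic and the variable names are our own) instantiates a small tabular MDP (S, A, P, r, ρ, γ) with a stochastic policy and samples one trajectory τ of transitions (s_t, a_t, r_t, s_{t+1}) of the kind that make up the static dataset discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 5, 2, 10
gamma = 0.99                                                        # discount factor of the MDP

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P(.|s, a)
r = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))             # bounded rewards, |r| <= Rmax = 1
rho = np.full(n_states, 1.0 / n_states)                            # initial state distribution
pi = rng.dirichlet(np.ones(n_actions), size=n_states)              # stochastic policy pi(a|s)

s = rng.choice(n_states, p=rho)
trajectory = []
for _ in range(horizon):
    a = rng.choice(n_actions, p=pi[s])
    s_next = rng.choice(n_states, p=P[s, a])
    trajectory.append((s, a, r[s, a], s_next))   # one transition (s_t, a_t, r_t, s_{t+1})
    s = s_next

print(trajectory[:3])
```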
The standard RL is to learn a policy π ∈ Π that maximize the future cumulative rewards Jπ(M) = EM,π[ ∑∞ t=0 γ trt] via active interaction with the environment M . At any time t, for the policy π, the value function of state is defined as V π(s) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s], and the Q value function is Qπ(s, a) := EM,π[ ∑∞ k=0 γ t+krt+k|st = s, at = a]. The Bellman operator is a function projection: BπQ(s, a) := r(s, a) + γEs′∼P (·|s,a),a′∼π(·|s′)[Q(s′, a′)], or BπV (s) := Ea∼π(·|s)[r(s, a) + γEs′∼P (·|s,a)[V (s′)]], which leads to iterative value updates. Bellman consistency implies that V π(s) = BπV π(s),∀s and Qπ(s) = BπQπ(s, a),∀s, a. In practice with function approximation, we use the empirical Bellman operator B̂π where the former expectations are estimated with data. The offline RL is to learn the policy π from a static dataset D = {(s, a, r, s′)} consisting of transitions collected by any behaviour policy, aiming to behave well in the online environment. Note that, unlike the standard online RL, offline RL cannot interact with the environment during learning. Conservative Value Estimation. One main challenge in offline RL is the over-estimation of values introduced by extrapolation on unseen states and actions, which may make the learned policy collapse. To address this issue, conservatism or pessimism are used in value estimation, e.g. CQL learns a conservative Q-value function by penalizing the value of unseen actions on states: Q̂k+1 ← argmin Q α (Es∼D,a∼µ(a|s)[Q(s, a)]− Es∼D,a∼π̂β(a|s)[Q(s, a)]) + 1 2 Es,a,s′∼D[(Q(s, a)− β̂πQ̂k(s, a))2] (1) where π̂β and π are the behaviour policy and learnt policy separately, µ is any arbitrary policy different from π̂β , and α the factor for trade-off of conservatism. Constrained Policy Optimization. To address the issues of distribution drift between learning policy and behaviour policy, one approach is to constrain the learning policy close to the behaviour policy (Bai et al., 2021; Wu et al., 2019; Nair et al., 2020; Levine et al., 2020; Fujimoto et al., 2019). Here we take Advantage Weighted Regression(Peng et al. (2019b); Nair et al. (2020)) which adopts an implicit KL divergence to constrain the distance of policies as example: πk+1 ← argmax π Es,a∼D [ log π(a|s) 1 Z(s) exp ( 1 λ Aπ k (s, a) )] (2) where Aπ k is the advantage of policy πk, and Z the normalization constant for s. Model-based Offline RL. In RL, the model is an approximation of the MDP M . We denote a model as M̂ := (S,A, P̂ , r̂, ρ, γ), where P̂ and r̂ are approximations of P and r respectively. In the setting of offline RL, the model is used to roll out and augment data (Yu et al., 2020; 2021) or act as a surrogate of real environment to interact with agent (Kidambi et al., 2020). In this paper, we use model to sample the next states that are approximately reachable from the dataset. 3 CONSERVATIVE STATE VALUE ESTIMATION In the offline setting, the value overestimation is a major problem resulting in failure of learning a reasonable policy (Levine et al., 2020; Fujimoto et al., 2019). In contrast to prior works(Kumar et al., 2020; Yu et al., 2021) that get conservative value estimation via penalizing Q function for OOD state-action pairs , we directly penalize V function for OOD states. Our approach provides several novel theoretic results that allow better trade-off of conservative value estimation and policy improvement. All proofs of our theorems can be found in Appendix A. 
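The model mentioned in the preliminaries, which is used only to sample next states that are approximately reachable from the dataset, can be realized as an ensemble of probabilistic networks. The sketch below is our own illustration (hidden sizes, clamping bounds and the ensemble size are arbitrary choices, not the paper's): each member outputs a diagonal Gaussian over the next state and reward, matching the form described later in Section 4.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Predicts a diagonal Gaussian over (next_state, reward) given (state, action)."""
    def __init__(self, obs_dim, act_dim, hidden=200):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, obs_dim + 1)       # next state and reward
        self.log_std = nn.Linear(hidden, obs_dim + 1)

    def forward(self, s, a):
        h = self.body(torch.cat([s, a], dim=-1))
        return self.mean(h), self.log_std(h).clamp(-10.0, 2.0)

    def sample(self, s, a):
        mean, log_std = self(s, a)
        out = mean + torch.randn_like(mean) * log_std.exp()
        return out[..., :-1], out[..., -1]               # (next_state, reward)

# An ensemble of B such networks; one member can be picked at random for each sampled transition.
ensemble = [GaussianDynamics(obs_dim=11, act_dim=3) for _ in range(5)]
s, a = torch.randn(4, 11), torch.randn(4, 3)
s_next, r = ensemble[0].sample(s, a)
```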
3.1 CONSERVATIVE OFF-POLICY EVALUATION Our approach is an alternative approach to CQL(Kumar et al., 2020). Instead of learning a conservative Q function, we aim to conservatively estimate the value V π(s) of a target policy π given a dataset D to avoid overestimation of out-of-distribution states. To achieve this, we penalize the V-values evaluated on states that is more likely to be out-of-distribution and pushing up the V-values on states that is in the distribution of the dataset, which is achieved through the following iteration: V̂ k+1 ← argmin V 1 2 Es∼du(s)[(B̂πV̂ k(s)− V (s))2] + α(Es′∼d(s)V (s′)− Es∼du(s)V (s)) (3) where du(s) is the discounted state distribution of D, d(s) is any state distribution, and B̂π is the empirical Bellman operator (see appendix for more details). Considering the setting without function approximation, by setting the derivative of Eq. 3 as zero, the V function found by approximate dynamic programming in iteration k can be obtained: V̂ k+1(s) = B̂πV̂ k(s)− α[ d(s) du(s) − 1], ∀s, k. (4) Denote the function projection on V̂ k in Eq. 4 as T π . We have Lemma 1, and thus V̂ k converges to a unique fixed point. Lemma 1. For any d with supp d ⊆ supp du, T π is a γ-contraction in L∞ norm. Theorem 1. For any d with supp d ⊆ supp du (d ̸= du), with a sufficiently large α (i.e., α ≥ Es∼d(s)Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| /Es∼d(s)[ d(s)du(s) − 1])), the expected value of the estimation V̂ π(s) under d(s) is the lower bound of the true value, that is: Es∼d(s)[V̂ π(s)] ≤ Es∼d(s)[V π(s)]. V̂ π(s) = limk→∞ V̂ k(s) is the converged value estimation with the datasetD, and Cr,t,δRmax (1−γ) √ |D(s,a)| is related to sampling error introduced by the use empirical rather than Bellman operator. If the counts of each state-action pair is greater than zero, |D(s, a)| denotes a vector of size |S||A| containing counts for each state-action pair. If the counts of this state action pair is zero, the corresponding 1√ |D(s,a)| is large but finite value. We assume that with probability ≥ 1 − δ, the sampling error is less than Cr,t,δRmax (1−γ) √ |D(s,a)| , while Cr,t,δ is a constant (See appendix for more details.) Note that if the sampling error is ignorable, α > 0 can guarantee the lower bound results. Theorem 2. The expected value of the estimation V̂ π(s) under the state distribution of the original dataset is the lower bound of the true value plus the term of irreducible sampling error, that is: Es∼du(s)[V̂ π(s)] ≤ Es∼du(s)[V π(s)] + Es∼du(s)(I − γPπ)−1Ea∼π(a|s) Cr,t,δRmax (1−γ) √ |D(s,a)| . , where Pπ refers to the transition matrix coupled with policy π (see Appendix for details). Now we show that, during iterations, the gap between the value of in-distribution state and out-ofdistribution state in the estimated V-function is higher than in the true V-functions. Theorem 3. At any iteration k, with a large enough α, our method expands the difference in expected V-values under the chosen state distribution and the dataset state distribution, that is: Es∼du(s)[V̂ k(s)]− Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)]− Es∼d(s)[V k(s)]. In the policy extraction part, this property enables our policy to take actions a in state s(s ∼ D) that remains in distribution instead of out of distribution, given that our estimated V-function does not overestimate the erroneous out-of-distribution states compared to the in-distribution states. Now we present four remarks to explain how the above theorems guide applications of Eq. 3 in offline RL algorithms. Remark 1. In Eq. 
3, if d = du, the penalty on out-of-distribution states degenerates, which means that the policy should not reach states with low support in data, and consequently never explore the unseen actions at the state. Indeed, AWAC Nair et al. (2020) adopts this setting. We show that with proper choice of d different from du, our method performs better than AWAC in practice. Remark 2. Theorem 2 implies that under du, the marginal state distribution of data, the expectation estimated value of π should either be lower than the true value, or higher than the true value but within a threshold. This fact motivates our advantage weighted policy update method in Eq. 11. Remark 3. Theorem 1implies that under d, say the discounted state distribution of any policy, the expectation estimated value of π should lower bounds the true value. This fact motivates our policy improvement method of unifying advantage weighted update with a bonus exploration in Eq. 12. Remark 4. Theorem 3 states Es∼d(s)[V k(s)] − Es∼d(s)[V̂ k(s)] > Es∼du(s)[V k(s)] − Es∼du(s)[V̂ k(s)]. That is to say, under the distribution d, the amount of value under-estimation in expectation is larger than that of the behaviour policy du. With proper choice of d, it is safe and effective to derive a new and potentially better policy with V̂ k. Our algorithm choose the distribution of model predictive next-states as d, i.e., s′ ∼ d is implemented by s ∼ D, a ∼ π(·|s), s′ ∼ P̂ (·|s, a), which effectively builds a soft ’river’ with low values around the dataset. Comparison with prior work: CQL (Eq.1), which penalizes Q-function of OOD actions on states in history data, guarantees the lower bounds on state-wise value estimation: V̂ π(s) = Eπ(a|s)(Q̂ π(s, a)) ≤ Eπ(a|s)(Qπ(s, a)) = V π(s) for all s ∈ D. COMBO, which penalizes Qfunction of OOD states and actions of an interpolation of history data and model-based roll-outs, guarantees the lower bound of state value expectation: Es∼µ0 [V̂ π(s)] ≤ Es∼µ0 [V π(s)] where µ0 is the initial state distribution (Remark 1, section A.2 of COMBO Yu et al. (2021)); which is a special case of our result in Theorem 1 when d = µ0. Although both CSVE and COMBO intend to get better performance by releasing conservative estimation guarantee from the state-wise values to expectation of state values, CSVE get the same lower bounds but under more general state distribution. This provide more flexible space for algorithm design, and it is also one main reason of penalizing on V rather than Q. By controlling distance of d to the behaviour policy’s discounted state distribution dβ , CSVE has the potential of more performance improvement. Note that bounding E[V [s]], rather than state-wise V (s), would introduce a more adventurous policy, which would achieves better performance in in-distribution states and have more risk behaviors in OOD states. To deal with that limitation, we introduce a deep ensemble dynamic model to sample the OOD states for better estimation. 3.2 SAFE POLICY IMPROVEMENT GUARANTEES Following prior works (Laroche et al. (2019); Kumar et al. (2020); Yu et al. (2021)), we show that our method has the safe policy improvement guarantees against the data-implied behaviour policy. We first show that our method optimizes a penalized RL empirical objective: Theorem 4. 
Let V̂ π be the fixed point of Equation 3, then π∗(a|s) = argmaxπ V̂ π(s) is equivalently obtained by solving: π∗(a|s)← argmax π J(π, M̂)− α 1 1− γ Es∼dπ M̂ (s)[ d(s) du(s) − 1] (5) Building upon Theorem 4, we show that our method provides a ζ-safe policy improvement over πβ Theorem 5. Let π∗(a|s) be the policy obtained in Equation 5. Then, it is a ζ-safe policy improvement over π̂β in the actual MDP M, i.e., J(π∗,M) ≥ J(π̂β ,M) − ζ with high probability 1- δ, where ζ is given by: ζ = 2( Cr,δ 1−γ + γRmaxCT,δ (1−γ)2 )Es∼dπM̂ (s)[ √ |A|√ |D(s)| √ Ea∼π(a|s)( π(a|s)πβ(a|s) )]− (J(π ∗, M̂)− J(π̂β , M̂))︸ ︷︷ ︸ ≥α 11−γ Es∼dπ M̂ (s)[ d(s) du(s) −1] . (6) 4 METHODOLOGY In this section, we propose a practical Actor-Critic method for computing conservative value estimation function by approximately solving Equation 3 and taking advantage weighted policy updates. It is mainly motivated by the theoretic results, as explained by the four remarks in section 3.1. Besides, the full algorithm of deep learning implementation is presented in Appendix B. 4.1 CONSERVATIVE VALUE ESTIMATION Given the access to a dataset D collected by some behaviour policy πβ , our aim is to estimate the value function V π for a target policy π. As stated in section 3, to prevent the value overestimation, we instead learn a conservative value function V̂ π that lower bounds the real values of π by adding a penalty on out-of-distribution states into the flow of Bellman projections. Our method consists of the following iterative updates of Equations 7- 9, where Q̂k is the target network of Q̂k. V̂ k+1 ← argmin V LπV (V ; Q̂ k) = α ( Es∼D,a∼π(·|s),s′∼P̂ (s,a)[V (s ′)]− Es∼D[V (s)] ) + Es∼D [ (Ea∼π(·|s)[Q̂k(s, a)]− V (s))2 ] (7) Q̂k+1 ← argmin Q LπQ(Q; V̂ k+1) = Es,a,s′∼D [( r(s, a) + γV̂ k+1(s′)−Q(s, a) )2] (8) Q̂k+1 ← ωQ̂k + (1− ω)Q̂k+1 (9) The RHS of Eq. 7 is an approximation of Eq. 3, where the first term gives out-of-distribution states a penalty, and the second term follows the definition of V values and Q values. In Eq. 8, the RHS is TD errors estimated on transitions in the dataset D. Note that the target term here uses the sum of the immediate reward r(s, a) and the next step state’s value V̂ k+1(s′). In Eq. 9, the target Q values are updated with a soft interpolation factor ω ∈ (0, 1). Q̂k changes slower than Q̂k, which makes the TD error estimation in Eq. 7 more stable. Constrained policy. Note that in RHS of Eq. 7, we use a ∼ π(·|s) in expectation. To safely estimate the target value of V (s) by Ea∼π(·|s)[Q̂(s, a)], almost always requires supp(π(·|s)) ⊂ supp(πβ(·|s)). We achieves this by the advantage weighted policy update, which forces π(·|s) have significant probability mass on actions taken by πβ in data, as detailed in section 3.2. Model-based OOD state sampling. In Eq. 7, we implement the state sampling process s′ ∼ d in Eq. 3 as a flow of {s ∼ D; a ∼ π(a|s); s′ ∼ P̂ (s′|s, a)}, that is the distribution of the predictive next-states from D by following π. It is beneficial in practice. On one hand, this method is efficient to sample only the states that are approximately reachable fromD by one step, rather than to sample the whole state space. On the other hand, we only need the model to do one-step prediction such that no bootstrapped errors due to long horizon are introduced. Following previous work (Janner et al., 2019; Yu et al., 2020; 2021), we implement the probabilistic dynamics model using an ensemble of deep neural networks {pθ1, . . . , pθB}. 
Each neural network produces a Gaussian distribution over the next state and reward: P iθ(st+1, r|st, at) = N (uiθ(st, at), σiθ(st, at)). Adaptive penalty factor α. The pessimism level is controlled by the parameter α ≥ 0. In practice, we set α adaptive during training as follows, which is similar as that in CQL(Kumar et al. (2020)) max α≥0 [α(Es′∼d[Vψ(s′)]− Es∼D[Vψ(s)]− τ)] (10) , where τ is a budget parameter. If the expected difference in V-values is less than τ , α will decrease. Otherwise, α will increase and penalize the out of distribution state values more aggressively. Discussion: As stated in former sections, our method focuses on estimating conservative state value for learning a policy. The effectiveness of adding conservatism on V function are two folds. First, penalizing V values is with a smaller hypothesis space than penalizing Q, which would reduce the computation complexity. Second, penalizing V values can achieve a more relaxed lower bound than penalizing Q with ignoring the explicitly marginalization on Q values. A more relaxed lower bound guarantees more opportunities on achieving better policy. 4.2 ADVANTAGE WEIGHTED POLICY UPDATES After learning the conservative V̂ k+1 and Q̂k+1 (or V̂ π and Q̂π when converged), we improve the policy by the following advantage weighted policy update (Nair et al., 2020). π ← argmin π′ Lπ(π ′) = −Es,a∼D [ log π′(a|s) exp ( βÂk+1(s, a) )] where Âk+1(s, a) = Q̂k+1(s, a)− V̂ k+1(s). (11) Eq. 11 updates the policy π to amounts of weighted maximum likelihood which are computed by re-weighting state-action samples in D with estimated advantage Âk+1. As discussed in the AWAC (Nair et al., 2020), this method avoids explicitly estimating the behaviour policy and its resulted sampling errors which is an import issue in the offline RL setting (Kumar et al., 2020). Implicit policy constraints. We adopt the advantage weighted policy updates which imposes an implicit KL divergence constraints between π and πβ . This policy constraint is necessary to guarantee that the next state s′ in Equation 7 can be safely generated through policy π. As derived in Nair et al. (2020) (Appendix A), the Eq. 11 is an parametric solution of the following problem: max π′ Ea∼π′(·|s)[Âk+1(s, a)] s.t. DKL(π′(·|s) ∥ πβ(·|s)) ≤ ϵ, ∫ a π′(a|s)da = 1. Note that DKL(π′ ∥ πβ) is an reserve KL divergence with respect to π′, which is mode-seeking ((Shlens, 2014)). When treated as Lagrangian it forces π′ allocate its probability mass to the maximum likelihood supports of πβ , re-weighted by the estimated advantage. In other words, for the space of A where πβ(·|s) has no samples, π′(·|s) has almost zero probability mass too. Bonus of Exploration on Near States. As suggested by remarks in Section 3.1, in practice allowing the policy explore the predicated next states transition (s ∼ D) following a ∼ π′(·|s)) leads to better test performance. With this kind of exploration, the policy is updated as follows. π ← argmin π′ L+π (π ′) = Lπ(π ′)− λEs∼D,a∼π′(s),s′∼P̂ (s,a) [ r(s, a) + V̂ k+1(s′) ] (12) The second term is an approximation to Es∼dπ(s)[V π(s)], while the first term is the approximation ofEs∼du(s)[V π(s)]. While the choice of λ is ultimately just a hyper-parameter, we balance between optimistic policy optimization (in maximizing V) and constrained policy update (the first term) by adjusting λ. 5 EXPERIMENTS The primary goal of this section is to investigate whether the proposed tighter conservative value estimation leads to performance improvement. 
Besides, we would like to ascertain when further exploration has benefits and how well CSVE performs compared with SOTA algorithms. We evaluate our method on classical continuous control tasks of Gym (Brockman et al., 2016) and Adroit (Rajeswaran et al., 2017) in the standard D4RL (Fu et al., 2020) benchmark. The Gym control tasks include HalfCheetah, Hopper and Walker2D, each with 5 datasets collected by following different types of policies (random, medium, medium-replay, medium-expert, and expert). The Adroit tasks include Pen, Hammer, Door and Relocate, each with 3 datasets collected by different policies (human, cloned, and expert). We compare our method, CSVE, against the baselines CQL (Kumar et al., 2020), COMBO (Yu et al., 2021), AWAC (Nair et al., 2020), PBRL (Bai et al., 2021) and other SOTA algorithms TD3BC (Fujimoto & Gu, 2021), UWAC (Wu et al., 2021), IQL (Kostrikov et al., 2021b) and BEAR (Kumar et al., 2019), whose performance results are public or which have high-quality open implementations; the results of PBRL are taken from Table 4 in Bai et al. (2021). CQL, which estimates conservative Q values on state-action pairs rather than states, is the most direct point of comparison with ours. COMBO also lower bounds the estimated V function. AWAC is a special case of our Eq. 3 when d = du. PBRL is a very strong performer in offline RL, but is computationally costly since it uses an ensemble of hundreds of sub-models. 5.1 OVERALL PERFORMANCE We first test on the Gym control tasks. We train our method for 1 million steps and report the final evaluation performance. The overall results are shown in Table 1. Compared to CQL, our method has better performance on 11 of 15 tasks and similar performance on the others. In particular, our method shows a consistent advantage on the datasets generated by following random or suboptimal policies (random and medium). Compared to AWAC, our method has better performance on 9 of 15 tasks and comparable performance on the others, which demonstrates the effect of our further exploration beyond cloning the behaviour policy. In particular, our method shows an obvious advantage in extracting the best policy from data of mixed policies (medium-expert), while AWAC cannot. Compared to COMBO, our method has better performance on 6 out of 12 tasks and comparable or slightly worse performance on the others, which demonstrates the effect of our better bounds on V. In particular, our method shows an obvious advantage in extracting the best policy on the medium and medium-expert tasks. Of the 9 tasks evaluated, our method gets a higher score than IQL on 7 of them, and has similar performance on the other tasks. Finally, our method performs close to PBRL, even though PBRL has orders of magnitude more model capacity and computation cost. We now evaluate our method on the Adroit tasks. For CSVE, we report the final evaluation results after training for 0.1 million steps. The full results are reported in Table 2. Compared to IQL, our method performs better on 8 out of 12 tasks, and performs similarly on the other 4 tasks. For the expert datasets, all methods including simple BC (behaviour cloning) can perform well, among which ours is the most competitive on all four tasks. For the human and cloned datasets, almost all methods fail to learn effective policies on three of the tasks, with the exception of the Pen task. For the Pen task, CSVE is the only one that succeeds in learning a good policy on the human dataset, while it can learn a medium-quality policy on the cloned dataset, as do BC and PBRL.
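For reference, the data side of this evaluation protocol can be set up in a few lines. The sketch below is our own and assumes the standard d4rl Python package with its v2 Gym environment names; exact APIs may differ across versions, and none of this is taken from the paper's released code.

```python
import gym
import d4rl  # registers the offline environments on import

env = gym.make("halfcheetah-medium-v2")
dataset = d4rl.qlearning_dataset(env)   # dict of observations, actions, rewards, next_observations, terminals

print({k: v.shape for k, v in dataset.items()})

# D4RL reports normalized scores: a raw return J is mapped to 100 * (J - J_random) / (J_expert - J_random).
score = env.get_normalized_score(3000.0) * 100.0
print(score)
```

The numbers reported in Tables 1 and 2 are normalized scores of this kind, where 0 corresponds to a random policy and 100 to an expert policy.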
1. What is the focus and contribution of the paper on reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis and originality? 3. What are the weaknesses of the paper, especially regarding its comparison with other methods and lack of interpretability? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes an algorithm called CSVE, which imposes penalization on states rather than on the state-action pairs used in CQL. By doing so, we may benefit from learning a better conservative value, since learning a conservative Q-value over the joint state-action space is more challenging. The paper provides theoretical results that improve on those of CQL. Although it requires learning a model, the experiment results suggest that CSVE overall performs better than CQL. Strengths And Weaknesses Strength Theoretically well supported Quite an original idea and algorithm design Decent performance Weakness Requires model: When compared to other model-free methods, CSVE requires estimating dynamics as well, so it may not be very fair to compare it to other model-free algorithms. Since the authors argue that the proposed method is not very sensitive to model bias, it would also be interesting to see how CSVE is affected by varying model bias. Not enough interpretation of why the suggested algorithm is good: It is kind of vague in the paper why the suggested algorithm works well when compared to CQL. Is it really easier to learn a pessimistic V value instead of Q? Can it be shown with an example? Or is the V-learning algorithm just better than Q-learning in these domains, even in the online setting? It is hard to see the clear reason from the paper. Clarity, Quality, Novelty And Reproducibility Both the technical and presentation quality of the paper is good, and the clarity is also OK. However, there are some typos in the paper. The algorithm suggested in the paper is also quite original.
ICLR
Title DiscoBAX - Discovery of optimal intervention sets in genomic experiment design Abstract The discovery of therapeutics to treat genetically-driven pathologies relies on identifying genes involved in the underlying disease mechanism. With billions of potential hypotheses to test, an exhaustive exploration of the entire space of potential interventions is impossible in practice. Sample-efficient methods based on active learning or Bayesian optimization bear the promise of identifying targets of interest using as few experiments as possible. However, genomic perturbation experiments typically rely on proxy outcomes measured in biological model systems that may not completely correlate with the results of interventions in humans. In practical experiment design, one aims to find a set of interventions that maximally move a target phenotype via a diverse mechanism set to reduce the risk of failure in future stages of trials. To that end, we introduce DiscoBAX — a sample-efficient algorithm for genomic intervention discovery that maximizes the desired movement of a phenotype while covering a diverse set of underlying mechanisms. We provide theoretical guarantees on the optimality of the approach under standard assumptions, conduct extensive experiments in synthetic and realworld settings relevant to genomic discovery, and demonstrate that DiscoBax outperforms state-of-the-art active learning and Bayesian optimization methods in this task. Better methods for selecting effective and diverse perturbations in biological systems could enable researchers to discover novel therapeutics for many genetically-driven diseases. 1 INTRODUCTION Genomic experiments probing the function of genes under realistic cellular conditions are the cornerstone of modern early-stage drug target discovery and validation; moreover, they are used to identify effective modulators of one or more disease-relevant cellular processes. These experiments, for example using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) (Jehuda et al., 2018) perturbations, are both time and resource-intensive (Dickson & Gagnon, 2004; 2009; DiMasi et al., 2016; Berdigaliyev & Aljofan, 2020). Therefore, an exhaustive search of the billions of potential experimental protocols covering all possible experimental conditions, cell states, cell types, and perturbations (Trapnell, 2015; Hasin et al., 2017; Worzfeld et al., 2017; Chappell et al., 2018; MacLean et al., 2018; Chappell et al., 2018) is infeasible even for the world’s largest biomedical research institutes. Furthermore, to mitigate the chances of failure in subsequent stages of the drug design pipeline, it is desirable for the subset of precursors selected in the target identification stage to operate on diverse underlying biological mechanisms (Nica et al., 2022). That way, if a promising candidate based on in-vitro experiments triggers unexpected issues when tested in-vivo (e.g., undesirable side effects), other lead precursors relying on different pathways might be suitable replacements that are not subject to the same issues. Mathematically, finding a diverse set of precursors corresponds to identifying and sampling from the different modes of the black-box objective function mapping intervention representations to the corresponding effects on the disease phenotype (§ 2). 
Existing machine learning methods for iterative experimental design (e.g., active learning, Bayesian optimization) have the potential to aid in efficiently exploring this vast biological intervention space. However, to our knowledge, there is no method geared toward identifying the modes of the underlying black-box objective function to identify candidate interventions that are both effective and diverse (§ 6). To this end, we introduce DiscoBAX - a sample-efficient Bayesian Algorithm eXecution (BAX) method for discovering genomic intervention sets with both high expected change in the target phenotype and high diversity to maximize chances of success in the following stages of drug development (Figure 1), which we formalize as a set-valued maximization problem (Equation 4). After providing theoretical guarantees on the optimality of the presented approach under standard conditions, we perform a comprehensive experimental evaluation on both synthetic and real-world datasets. The experiments show that DiscoBAX outperforms existing state-of-the-art active learning and Bayesian optimization methods in designing genomic experiments that maximize the yield of findings that could lead to the discovery of new potentially treatable disease mechanisms. Our contributions are as follows:
• We formalize the gene target identification problem (§ 3) and discuss limitations of existing methods in addressing this problem (§ 6).
• We develop DiscoBAX - a sample-efficient BAX method for maximizing the rate of significant discoveries per experiment while simultaneously probing for a wide range of diverse mechanisms during a genomic experiment campaign (§ 4).
• We provide theoretical guarantees that substantiate the optimality of DiscoBAX under standard assumptions (§ 4 and Appendix A).
• We conduct a comprehensive experimental evaluation covering both synthetic as well as real-world experimental design tasks that demonstrates that DiscoBAX outperforms existing state-of-the-art methods for experimental design in this setting (§ 5).
2 BACKGROUND AND NOTATION
Genomic experimentation is an early stage in drug discovery where geneticists assess the effect of genomic interventions on moving a set of disease-relevant phenotypes to determine suitable drug targets. In abstract language, we assume a black-box function, f : G → R, that maps each gene, g ∈ G, to the value, f(g), corresponding to the magnitude of phenotypic change under gene knockout. The set, G, is finite, |G| = m < ∞, because there are a limited number of protein-encoding genes in the human genome (≈ 20,000) (Pertea et al., 2018), and is formalizable by either the set of integers or one-hot vectors with dimension m. However, biologically informed embeddings, X : G → X, are often preferred to represent genes for their potential to capture genetic and functional relationships. We assume that gene embeddings, X(g) = x ∈ X ⊆ R^d, are d-dimensional variables, with m distinct members, |X| = m; thus, we use f(g) and f(x) interchangeably. In drug development, a candidate target must meet several criteria to proceed to subsequent stages in the development pipeline. For example, engaging the target – down- or up-regulating the gene – must move the phenotype significantly in the desired direction. Such genes are called “top-movers” of the phenotype. We can define the K top-movers for a given phenotype as the members of the set X = {x1, x2, . . . , xm} corresponding to the K largest values of {f(x1), f(x2), . . . , f(xm)}.
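As a trivial illustration of the top-movers definition (the values below are made up for a toy example with m = 5 genes), if all m phenotype changes were known, the K top-movers could simply be read off:

import numpy as np

def top_movers(f_values, k):
    # f_values: array of phenotype changes f(x_1), ..., f(x_m) for all m candidate genes
    # returns the indices of the K largest movers
    return np.argsort(f_values)[::-1][:k]

f_values = np.array([1.7, 0.2, -0.5, 2.4, 0.9])   # hypothetical measurements
print(top_movers(f_values, k=2))                   # -> [3 0]

In practice this is exactly what cannot be done, since each entry of f_values requires a separate knockout experiment, which motivates the budgeted setting discussed next.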
However, each evaluation of the phenotype change, f, requires a CRISPR-Cas9 knockout experiment in the lab, which makes exhaustive experimentation infeasible even for the most resourceful institutions. Hence, in practice, the experimentation budget is limited to T ≪ m experiments. Instead of choosing the K top-movers (requiring knowledge of the phenotype change, f(x), for all inputs x ∈ X), a more practical approach is to form the subset, Xc ⊆ X, of genes that when knocked out lead to a change in the phenotype, f(x), larger than a selected threshold value, c, i.e. Xc := {x ∈ X : f(x) ≥ c}.
Bayesian Algorithm Execution (BAX), proposed by Neiswanger et al. (2021), is a method to estimate the output, OA := OA(f), of an algorithm, A, run on a function, f, by evaluating the function on a budgeted set of inputs, {x_i}_{i=1}^T ⊆ X. A computable property is estimated by positing a probabilistic model for f and using it to estimate OA. Data is acquired by searching for the value x ∈ X that maximizes the mutual information, I(Y_x; OA | D_t), between the function output, Y_x, and the algorithm output, OA. BAX assumes that functional output instances, y_x, of the function, f, can be observed for each acquired x. The acquisition of data is sequential, where the information gain maximization procedure leads to a dataset of observations, D_t := {(x_i, y_{x_i})}_{i=1}^{t−1}, at step t ∈ [T]. BAX can be used in conjunction with a number of algorithms, such as determining the superlevel set (i.e. Xc), computing integrals, or finding local optima of f. Given that genomic experimentation seeks to find a diverse set of genes corresponding to the modes of f, the BAX framework is well suited to our task. Concretely, BAX acquisition functions select points by maximizing the expected information gain (EIG) obtained from each point about the output of the algorithm. Crucial to the applicability of BAX to our problem setting is the tractability of accurate approximators of the EIG for algorithms which, like the one we will propose, return a subset of their inputs. The exact computation of the EIG for arbitrary algorithms is not generally tractable; however, Neiswanger et al. (2021) present an approximation that only requires the computation of the entropy of the distribution over function values conditioned on algorithm outputs:
\mathrm{EIG}^v_t(x, D_t) = H(f_{\mathrm{ip}}(x) \mid D_t) - \mathbb{E}_{p(S \mid D_t)}\left[H(f_{\mathrm{ip}}(x) \mid S, D_t)\right].    (1)
When the model P is a Gaussian Process, both of these quantities are straightforward to compute: the first is the entropy of the GP’s predictive distribution at x, and we can estimate the second by conditioning a posterior on the values of elements in the set S. Monte Carlo approximation of this quantity is possible when the model P does not permit a closed form.
3 PROBLEM SETTING
A primary challenge in the drug discovery pipeline is the discrepancy in outcomes between in vitro experimental data and in vivo diseases. While in vitro experimental data can quantify the effect of a gene knockout on a specific aspect of a cellular phenotype in a Petri dish, in vivo interactions between the drug and the organism may lead to weaker effect sizes or toxicity. The drug discovery pipeline consists of stages that start by testing a set of candidate interventions and then proceed by selecting a subset of promising candidates to pass on for further development. For example, one might test a broad range of gene knockouts on cell cultures and then select a subset to evaluate in animal models.
These trials can be expensive, so it is desirable to weed out potentially ineffective or toxic candidates before this phase. To do so, researchers can leverage heuristic score functions that predict the ”drug-like-ness” or likelihood of toxicity of a compound (Jiménez-Luna et al., 2020). Considering a diverse set of candidate interventions, where each intervention applies to a different mechanism in the disease phenotype, is also of use because it increases the likelihood of at least one candidate succeeding in the subsequent phase. We formalize this problem as an optimization problem where the optimizer has access to a measurement correlated with the quantity of interest; however, it is noise augmented to emulate the primary objective function. We formalize our search space (i.e., the set of available genes, though in principle this could be any set) G = {g1, . . . , gm}, for which we have some phenotype measurement fip. We will primarily refer to fip as a function from features to phenotype changes, but it is equivalent to expressing fip as a function on genes G. The subscript ‘ip’ stands for intermediate phenotype as it is not the actual clinical measurement caused by the gene knockout. Instead, it is a measurement known to correlate with a disease pathology and is tractable in the lab setting (see Appendix B for detailed formalization). In this paper, we will assume the phenotype change is a real number fip(x) ∈ R; however, given suitable modeling assumptions, it is possible to extend our approach to vector-valued phenotype readouts. We also define a function called disease outcome, fout, which is composed of fip and factors outside the biological pathway, such as toxicity of a molecule that engages with a target gene. The noise component, η, encapsulates all these extra factors. We consider two tractable formulations of the relationship between the disease outcome, fout, and the in vitro phenotype, fip. 1. Multiplicative Bernoulli noise: fout(x; η) = fip(x)η(x) (2) where η(x) ∈ {0, 1},∀x ∈ G, and η is sampled from a Gaussian process classification model. This setting presents a simplified model of drug toxicity: η corresponds to a binary indicator of whether or not the drug is revealed to exhibit unwanted side effects in future trials. The multiplicative noise model assumes that the downstream performance of an intervention is monotone with respect to its effect on the phenotype, conditional on the compound not exhibiting toxicity in future trials. In our experiments, we assume η exhibits correlation structure over inputs corresponding to a GP classification model, and construct the kernel KX of this GP to depend on some notion of distance in the embedding space X . 2. Additive Gaussian noise: fout(x; η) = fip(x) + η(x) η ∼ GP(0,KX ) (3) where η : G → R is drawn from a Gaussian process model with kernel KX . In this case, we assume that the unforeseen effects of the input x are sufficiently numerous to resemble a Gaussian perturbation of the measured in vitro phenotype fip(x). Notice that in the above models, noise is an umbrella term for everything that affects the fitness of a target but is not part of the biological pathway from the gene to the phenotype change. Therefore, the choice of noise distribution and how it affects the outcome is a modelling assumption that is intended to capture coarse inductive biases known to the researcher. 
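To make the two noise models concrete, the sketch below draws η and forms f_out under each assumption; the RBF kernel, its length scale, and the logistic link used for the Bernoulli case are illustrative modelling choices assumed here for simplicity rather than prescriptions from the paper.

import numpy as np

def rbf_kernel(X, length_scale=1.0):
    # X: (m, d) gene embeddings; returns the (m, m) covariance K_X used for eta
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / length_scale ** 2) + 1e-6 * np.eye(len(X))

def sample_additive(f_ip, X, rng):
    # Additive Gaussian model (Eq. 3): eta ~ GP(0, K_X), f_out = f_ip + eta
    eta = rng.multivariate_normal(np.zeros(len(X)), rbf_kernel(X))
    return f_ip + eta

def sample_multiplicative(f_ip, X, rng):
    # Multiplicative Bernoulli model (Eq. 2): eta(x) in {0, 1} with GP-correlated logits;
    # eta = 0 plays the role of a candidate failing later due to unforeseen toxicity
    logits = rng.multivariate_normal(np.zeros(len(X)), rbf_kernel(X))
    eta = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    return f_ip * eta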
We additionally seek out a set of interventions S ⊂ G of some fixed size |S| = k whose elements cause the maximum expected change (for some noise distribution) in the disease outcome. In other words, we seek interventions that best move the disease phenotype, which will be the best candidate drugs. This goal is distinct from either sampling the super-level-sets of fip or finding the set S with the best average performance. Instead, we explicitly seek to identify a set of points whose toxicity or unintended side effects will be minimally correlated, maximizing the odds that at least one will succeed in the subsequent trials. We thus obtain a set-valued maximization problem:
\max_{S \subseteq \mathcal{X}} \; \mathbb{E}_{\eta}\Big[\max_{x \in S} f_{\mathrm{out}}(x; \eta)\Big].    (4)
This compact formula is critical to attain our overarching objective: identifying interventions with both a large impact on the phenotype of interest and high diversity, to increase the chance of success of some of them in the subsequent steps of the drug discovery pipeline. An illustrative example is provided in Figure 6 in the Appendix to give further intuition into this formula. The general formulation of this problem is NP-hard (Goel et al., 2010); therefore, we propose a tractable algorithm that provides a constant-factor approximation of the optimal solution by leveraging the submodular structure of the objective under suitable modeling assumptions. Given such an algorithm, our task is the active learning problem of optimally querying the function, fip, given a limited number of trials, T, to accurately estimate the algorithm’s output on the ground-truth dataset. Importantly, this formulation allows us to decouple modeling the measured phenotype, fip, from modeling the noise η. For example, we might make the modeling assumption that we sample fip from a GP with some kernel k1 and that η is a Bernoulli random variable indicating the safety of the compound.
4 METHOD
Various methods exist for efficiently optimizing black-box functions; however, our problem setting violates several assumptions underlying these approaches. In particular, while we assume access to intermediate readouts fip, the actual optimization target of interest fout is not observable. Further, we seek to find a set of interventions that maximizes its expected value under some modeling assumptions. These two properties render a broad range of prior art inapplicable. Active sampling methods do not prioritize high-value regions of the input space. Bayesian optimization methods assume access to the ground-truth function outputs (or a noisy observation thereof). And Bayesian algorithm execution approaches based on level-set sampling may not sufficiently decorrelate the hidden noise in the outcome. We propose an intervention set selection algorithm in a Bayesian algorithm execution procedure that leverages the modeling assumptions we characterize in the previous section. This method, Subset Discovery via Bayesian Algorithm Execution (DiscoBAX), consists of two distinct parts: (1) a subset-selection algorithm obtaining a 1 − 1/e-factor approximation of the set that optimizes Equation 4, and (2) an outer BAX loop that queries the phenotype readings to maximize the information gain about the output of this algorithm. In Section 4.1, we present the idealized form of DiscoBAX and show that it attains an approximately optimal solution. Our approach is easily adaptable to incorporate approximate posterior sampling methods, enabling its use with deep neural networks on high-dimensional datasets.
We outline this practical implementation in Section 4.2.
4.1 ALGORITHM
Subset maximization: we first address the problem of identifying a subset S ⊂ X which maximizes the value Eη[max_{x∈S} fout(x; η)]. As mentioned previously, the exact maximization of this objective is intractable. To construct a tractable approximation, we propose a submodular surrogate objective, under which the value of an intervention is lower-bounded by zero: f*out(x; η) = max(fout(x; η), 0). This choice is motivated by the intuition that any intervention with a negative expected value on the phenotype is equally useless, as it will not be considered in later experiment iterations, and so we do not need to distinguish between harmful interventions. The resulting function f(S) = Eη[max_{x∈S} f*out(x; η)] will be submodular, and thus Algorithm 1, the greedy algorithm, will provide a 1 − 1/e approximation of the optimal solution.
Observation 1. The score function f : P(G) → R defined by
f(S) = \mathbb{E}_{\eta}\Big[\max_{x \in S}\big(\max(0, f_{\mathrm{out}}(x; \eta))\big)\Big]    (5)
is submodular.
We provide proof of this result in Appendix A. In practice, we can estimate the expected value in this objective using Monte Carlo (MC) samples over the noise distribution η. Where MC sampling is too expensive, a heuristic that uses a threshold to remove points whose values under η are too highly correlated can also obtain comparable results with a reduced computational burden.
Algorithm 1 SubsetSelect (Multiplicative Noise)
Require: integer k > 0, set X, distribution P(η), sampled f̂ip : X → R
  S ← ∅
  f̂out(x; η) := f̂ip(x)η(x)
  for i < k do
    S ← S ∪ {argmax_{x∈X\S} Eη[max_{y∈S∪{x}} f̂out(y; η)]}
  end for
  return S
Algorithm 2 DiscoBAX
Require: finite sample set X, budget T, Monte Carlo parameter ℓ ∈ N
  D ← ∅
  for i < T do
    sample {f̂ip,j}_{j=1}^{ℓ} ∼ P(fip | D)
    S_j ← SubsetSelect(f̂ip,j), ∀j = 1, . . . , ℓ
    x_i ← argmax_{x∈X} EIG^v(x, {S_j}_{j=1}^{ℓ})
    query fip(x_i)
    D ← D ∪ {(x_i, fip(x_i))}
  end for
  return D
Active sampling: because we do not assume prior knowledge of the phenotype function fip, we require a means of selecting potential interventions for querying its value at a specified input x. In practice, running these experiments may incur a cost, and so it is desirable to minimize the number of queries necessary to obtain an accurate estimate of the optimal intervention set. BAX (Neiswanger et al., 2021) presents an effective active sampling approach to approximate the output of an algorithm using a minimal number of queries to the dataset of interest. In our setting, this allows us to approximate the output of Algorithm 1 over the set (X, fip(X)) without incurring the cost of evaluating the effect of every knockout intervention in G. Concretely, this procedure takes as input some probabilistic model P which defines a distribution over phenotype readings fip conditioned on the data Dt seen so far and from which it is possible to draw samples.
A remark on the efficiency of subset maximization & active sampling: it has to be emphasized that subset selection is a function called within each active sampling cycle. Hence, the above observation about submodularity refers specifically to Algorithm 1 rather than its incorporation in Algorithm 2. If sample efficiency were not a concern, this algorithm could be run on the set of all inputs and provide the exact solution. We outline this procedure in Algorithm 2, and refer to Section 2 for additional details. In the batch acquisition setting, we form batches of size B at each cycle by selecting the B points with the highest EIG values.
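A minimal Python sketch of the greedy SubsetSelect step (Algorithm 1, multiplicative noise) is given below; it assumes the Monte Carlo draws of η have been precomputed as a matrix, which is one possible implementation choice and not necessarily the paper's released code.

import numpy as np

def subset_select(f_ip_hat, eta_samples, k):
    # f_ip_hat: (m,) one posterior sample of the phenotype values for all candidates
    # eta_samples: (n_mc, m) Monte Carlo draws of the multiplicative noise eta
    # Greedily maximises the MC estimate of f(S) = E_eta[ max_{x in S} max(0, f_ip_hat(x) * eta(x)) ]
    f_out = np.maximum(eta_samples * f_ip_hat[None, :], 0.0)  # (n_mc, m), the surrogate f*_out
    selected = []
    best_per_draw = np.zeros(f_out.shape[0])                  # running max over S for each noise draw
    for _ in range(k):
        # expected set value if each remaining candidate were added to S
        values = np.maximum(f_out, best_per_draw[:, None]).mean(axis=0)
        values[selected] = -np.inf                            # exclude already selected candidates
        j = int(np.argmax(values))
        selected.append(j)
        best_per_draw = np.maximum(best_per_draw, f_out[:, j])
    return selected

In the full DiscoBAX loop (Algorithm 2), this routine would be called once per posterior sample f̂ip, and the resulting sets would feed the EIG-based acquisition step.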
4.2 PRACTICAL IMPLEMENTATION IN HIGH DIMENSIONS
When working with high-dimensional input features, we typically leverage Bayesian Neural Networks in lieu of Gaussian Processes. We sample from the parameter distribution via Monte Carlo dropout (MCD) (Gal & Ghahramani, 2016), and rely on Monte Carlo simulation to estimate the quantities introduced in Algorithm 2. In particular, the entropy of the posterior distribution is obtained as follows:
H(y_x \mid D_t) = -\mathbb{E}_{p(y_x \mid D_t)}\left[\log p(y_x \mid D_t)\right] \approx -\frac{1}{M}\sum_{s=1}^{M} \log p(y_x^s \mid D_t, f^s)    (6)
where the samples {y_x^s = f^s(x)}_{s=1}^{M} are obtained by sampling from the distribution over model parameters with MCD to obtain the parameter samples {f^s}_{s=1}^{M}.
5 EXPERIMENTS
In the experimental evaluation of DiscoBAX, we specifically seek to answer the following questions: 1) Does DiscoBAX allow us to reach a better trade-off between recovery of the top interventions and their diversity (Tables 1 and 2)? 2) Is the method sample-efficient, i.e., does it identify global optima in fewer experiments than random sampling or naive optimization baselines (Figures 3 and 5)? 3) Is the performance of DiscoBAX sensitive to various hyperparameter choices (Appendix D.3)? To address these questions, we first focus on experiments involving synthetic datasets (§ 5.1) in which we know the underlying ground-truth objective function. We then conduct experiments across several large-scale experimental assays from the GeneDisco benchmark (Mehrjou et al., 2021) that cover a diverse set of disease phenotypes.
5.1 SYNTHETIC DATA
We begin with a concrete example to illustrate the distinction between the behavior of DiscoBAX and existing methods. The dataset we consider is a one-dimensional regression task on a mixture-of-Gaussians density function fmog. We construct fmog such that it exhibits several local optima at a variety of values, necessitating a careful trade-off between exploration and exploitation to optimize the DiscoBAX objective. Crucially, exploitation in this setting requires not only an accurate estimation of the global optimum but also an accurate estimation of the local optima. We provide evaluations on additional datasets in Appendix D.1. We consider the following baseline acquisition functions which select the optimal point x∗ to query at each iteration, letting µ(x) denote the posterior mean over fip(x) and σ²(x) its variance. We evaluate random sampling, a UCB-like acquisition function, BAX on super-level set and top-k algorithms, Thompson sampling, and uncertainty maximization baselines. Full details are provided in Appendix D.1. In Figure 2, we visualize the solutions found by each approach after 30 iterations. We further evaluate the score of each method, computed as E_η[max_{x∈S} fip(x)η(x)], where η is drawn from a Bernoulli distribution whose logits are determined by an affine transformation of a sample from a GP with zero mean and radial basis function covariance kernel. This construction ensures a high correlation between the values of nearby inputs, and rewards sets S whose elements are distant from each other. To select S, we use the learned posterior mean µ from each acquisition strategy as input to Algorithm 1 and set S to be equal to its output. We observe that most baselines over-exploit the high-value local optima, leading to inaccuracies on the lower optima. As a result, Algorithm 1 is unable to select the optimal subset elements from the lower-value modes and the model score suffers.
The active sampling baseline yields a more uniform sampling distribution over inputs that results in a relatively uniform distribution of errors. While DiscoBAX does not perfectly estimate the value of the target function, its sampling strategy yields reasonably accurate estimates of all of the local optima.
5.2 GENEDISCO DATASET
Datasets & baselines. The GeneDisco benchmark (Mehrjou et al., 2021) comprises five large-scale genome-wide CRISPR assays and compares the relative strengths of nine active learning algorithms (e.g., Margin sampling, Coreset) for optimal experimental design. The objective of the different methods is to select the set of interventions (i.e., genetic knockouts) with the largest impact on the corresponding disease phenotype. We include all existing baselines from the GeneDisco benchmark, as well as eight additional approaches: UCB, qUCB, qEI, qPOI, Thompson sampling, Top-K BAX, Levelset BAX, and DiscoBAX.
Metrics & approach. We define the set of optimal interventions as the ones in the top percentile of the experimentally-measured phenotype (referred to as ‘Top-K interventions’). We use the Top-K recall metric to assess the ability of the different methods to identify the best interventions. To quantify the diversity across the set of optimal interventions, we first cluster these interventions in a lower-dimensional subspace (details provided in Appendix C). We then measure the proportion of these clusters that are recalled (i.e., any of their members are selected) by a given algorithm over the different experiment cycles. The overall score of an approach is defined as the geometric mean between Top-K recall and the diversity metric. For all methods and datasets, we perform 25 consecutive batch acquisition cycles (with batch size 32). All experiments are repeated 10 times with different random seeds.
Results & discussion. We observe that, across the different datasets, DiscoBAX identifies a more diverse set of optimal interventions relative to baselines (Table 1). It does so in a sample-efficient manner, as it achieves higher diversity throughout the different acquisition cycles (Fig. 3). Note that sample-efficiency is an empirical observation here, not a theoretical property of the algorithm, since it is possible to construct adversarial datasets where a BAX method will attain no better performance than random sampling. Interestingly, it tends to recall a higher share of optimal interventions on several assays as well, which may be the result of very steep extrema in the corresponding datasets. We also find the performance of DiscoBAX to be relatively insensitive to the choice of hyperparameters (Appendix D.3). Lastly, we note that when the input feature space (i.e., the intervention representation) does not correlate much with the disease phenotype of interest, the model being learned tends to perform poorly and we observe no lift between the different methods and random sampling (e.g., the SARS-CoV-2 assay from Zhu et al. (2021) – see Appendix D.2).
6 RELATED WORK
Prior works have studied the application of genomic discovery and method development for diverse target generation. Bayesian optimization: Bayesian optimization (BO) is concerned with finding the global optimum of a function with the fewest function evaluations (Snoek et al., 2012; Shahriari et al., 2015). Since this target function is often expensive to evaluate, one typically uses a Gaussian process as a surrogate function (Srinivas et al.).
The candidates for function evaluation are then determined through a so-called acquisition function, which is often expressed as the expected utility over the surrogate model. Typical choices include the expected improvement (Močkus, 1975, EI) and probability of improvement (Kushner, 1964, PI) as utility functions. Recent work includes variational approaches Song et al. (2022) which yield a tractable acquisition function whose limiting behavior is equivalent to PI. Prior work tried to obtain diversity in Bayesian optimization e.g. through a batch setting (Kirsch et al., 2019) or multi-objective optimization (Hernández-Lobato et al., 2016). Bayesian optimization has been applied to biological problem settings such as small molecule optimization (Korovina et al., 2020) or automatic chemical design (Griffiths & HernándezLobato, 2017). Optimal experiment design broadens the scope of Bayesian Optimization: rather than simply maximizing a parametric function, the task is to adaptively identify an optimal set of experiments to efficiently reach some goal (Robbins, 1952; Chernoff, 1959). Applying machine learning to automate hypothesis generation and testing goes back multiple decades (King et al., 2004). Optimal experiment design is amenable to Bayesian optimization (Greenhill et al., 2020) and reinforcement learning approaches (Kandasamy et al., 2019). Most related to our work is Bayesian Algorithm Execution (BAX) Neiswanger et al. (2021) that extends the goal of experiment design from only finding the maximum of a function to estimating more general properties such as level sets by computing the expected information gain (EIG) which is the mutual information between the evaluation of an input point and the statistics related that property. Active learning While many probabilistic models like Gaussian processes provide principled uncertainty estimates (Rasmussen, 2003), modern neural network architectures often rely on heuristics or only provide approximations approaches (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017). Active learning based approaches use the uncertainty estimates for maximizing expected information gains of model parameters (Houlsby et al., 2011). Recently, more and more approaches have used active learning based on model uncertainties of neural networks for biomedical applications. Bandits: The upper confidence bounds seen in BO originate in the bandit setting (Lai & Robbins, 1985), in which one can extend the widely-used UCB algorithm to Gaussian processes (Grünewälder et al., 2010; Srinivas et al.). While both bandits and BO seek to find the maximum of a function, the two problem settings leverage different notions of optimality. BO seeks to identify the argmax, whereas bandits seek to minimize the number of sub-optimal queries. Related to bandits and BO, some efforts are made to formulate active learning as a reinforcement learning problem (Slade & Branson, 2022; Casanova et al., 2020; Konyushkova et al., 2017; Pang et al., 2018). 7 CONCLUSION We have introduced a mathematical formalization of the drug discovery problem that captures the noise induced by moving from in vitro to in vivo experiments. We proposed a novel algorithm based on Bayesian Algorithm Execution and illustrated its utility on many illustrative synthetic datasets. We have further evaluated this class of methods against the real-world large-scale assays from the GeneDisco benchmark, where they help identify diverse top interventions better than existing baselines. 
Future work could see the extension of the current framework to explicitly account for the fact that experimental cycles happen in batches. Further, we assume in this work that distant representations of interventions imply different underlying biological mechanisms; a proper causal formulation of the problem would allow us to tell apart causally connected pathways more cleanly. Finally, it is typical practice to measure several potential intermediate phenotypes to capture different aspects of interest, which requires an extension of our approach to the setting of multiple objectives.
8 REPRODUCIBILITY STATEMENT
We clearly state our modelling assumptions throughout Sections 2 to 4. We provide proof for our theoretical claims in Appendix A. All experimental results reported in Section 5 and Appendix D can be reproduced using the code available at: https://github.com/anonymous35780/solaris-2023-iclr. Hyper-parameter sweeps for the BAX methods for GeneDisco are presented in Table 3.
1. What is the focus and contribution of the paper regarding subset selection in genomic intervention? 2. What are the strengths and weaknesses of the proposed DiscoBAX algorithm? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the problem motivation and validation, particularly in terms of potential overfitting?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This work presents a new probabilistic algorithm ("DiscoBAX") for subset selection that aims to approximately optimize phenotype movement in genomic intervention and can be useful in drug discovery tasks according to the authors. The method identifies a set of interventions whose elements will trigger maximum expected change in the disease outcome with respect to for some noise distribution. Performance is assessed based on synthetic data as well as public benchmarking data, and the algorithm is shown to outperform alternative approaches. Strengths And Weaknesses Strengths: probabilistic approach is neat for uncertainty characterization practical combination of task, method derivation, and algorithm implementation Openly licensed source code for the algorithms available Weaknesses: Figure numberings are broken, text cites figure up to Fig.5, manuscript only has 3 figures, refs to Figs 3-4 are missing; not clear how to follow The problem motivation is hypothetical; it is not clear that "maximizing change" is what one would like to achieve in genomic drug discovery; on the other hand the algorithm is of interest regardless but perhaps best evaluated on its own right as a target optimization task. I am not sure if potential overfitting is sufficiently addressed in this work; explanation on this part could be strengthened Clarity, Quality, Novelty And Reproducibility Quality: Overall the paper is well written; the problem motivation and validation part (esp. overfitting; see comments) could be strengthened. Figure numbering is broken. Clarity: Text is easy to read and follow Some more intuitive descriptions of the algorithm could be added Figure numbering is broken. Originality: The work provides a new solution to a previously established problem and claims performance gains The combination of problem and algorithmic solution seems to be new Novelty And Reproducibility: I did not try to replicate the work but code is well organized and openly licensed, and seems robust. Some more guidance in the README landing page would be warranted.
ICLR
Title DiscoBAX - Discovery of optimal intervention sets in genomic experiment design Abstract The discovery of therapeutics to treat genetically-driven pathologies relies on identifying genes involved in the underlying disease mechanism. With billions of potential hypotheses to test, an exhaustive exploration of the entire space of potential interventions is impossible in practice. Sample-efficient methods based on active learning or Bayesian optimization bear the promise of identifying targets of interest using as few experiments as possible. However, genomic perturbation experiments typically rely on proxy outcomes measured in biological model systems that may not completely correlate with the results of interventions in humans. In practical experiment design, one aims to find a set of interventions that maximally move a target phenotype via a diverse mechanism set to reduce the risk of failure in future stages of trials. To that end, we introduce DiscoBAX — a sample-efficient algorithm for genomic intervention discovery that maximizes the desired movement of a phenotype while covering a diverse set of underlying mechanisms. We provide theoretical guarantees on the optimality of the approach under standard assumptions, conduct extensive experiments in synthetic and realworld settings relevant to genomic discovery, and demonstrate that DiscoBax outperforms state-of-the-art active learning and Bayesian optimization methods in this task. Better methods for selecting effective and diverse perturbations in biological systems could enable researchers to discover novel therapeutics for many genetically-driven diseases. 1 INTRODUCTION Genomic experiments probing the function of genes under realistic cellular conditions are the cornerstone of modern early-stage drug target discovery and validation; moreover, they are used to identify effective modulators of one or more disease-relevant cellular processes. These experiments, for example using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) (Jehuda et al., 2018) perturbations, are both time and resource-intensive (Dickson & Gagnon, 2004; 2009; DiMasi et al., 2016; Berdigaliyev & Aljofan, 2020). Therefore, an exhaustive search of the billions of potential experimental protocols covering all possible experimental conditions, cell states, cell types, and perturbations (Trapnell, 2015; Hasin et al., 2017; Worzfeld et al., 2017; Chappell et al., 2018; MacLean et al., 2018; Chappell et al., 2018) is infeasible even for the world’s largest biomedical research institutes. Furthermore, to mitigate the chances of failure in subsequent stages of the drug design pipeline, it is desirable for the subset of precursors selected in the target identification stage to operate on diverse underlying biological mechanisms (Nica et al., 2022). That way, if a promising candidate based on in-vitro experiments triggers unexpected issues when tested in-vivo (e.g., undesirable side effects), other lead precursors relying on different pathways might be suitable replacements that are not subject to the same issues. Mathematically, finding a diverse set of precursors corresponds to identifying and sampling from the different modes of the black-box objective function mapping intervention representations to the corresponding effects on the disease phenotype (§ 2). 
Existing machine learning methods for iterative experimental design (e.g., active learning, Bayesian optimization) have the potential to aid in efficiently exploring this vast biological intervention space. However, to our knowledge, there is no method geared toward identifying the modes of the underlying black-box objective function to identify candidate interventions that are both effective and diverse (§ 6). To this end, we introduce DiscoBAX - a sample-efficient Bayesian Algorithm eXecution (BAX) method for discovering genomic intervention sets with both high expected change in the target phe- notype and high diversity to maximize chances of success in the following stages of drug development (Figure 1), which we formalize as set-valued maximization problem (Equation 4). After providing theoretical guarantees on the optimality of the presented approach under standard conditions, we perform a comprehensive experimental evaluation in both synthetic and real-world datasets. The experiments show that DiscoBAX outperforms existing state-of-the-art active learning and Bayesian optimization methods in designing genomic experiments that maximize the yield of findings that could lead to the discovery of new potentially treatable disease mechanisms. Our contributions are as follows: • We formalize the gene target identification problem (§ 3) and discuss limitations of existing methods in addressing this problem (§ 6). • We develop DiscoBAX - a sample-efficient BAX method for maximizing the rate of significant discoveries per experiment while simultaneously probing for a wide range of diverse mechanisms during a genomic experiment campaign (§ 4). • We provide theoretical guarantees that substantiate the optimality of DiscoBAX under standard assumptions (§ 4 and Appendix A). • We conduct a comprehensive experimental evaluation covering both synthetic as well as real-world experimental design tasks that demonstrate that DiscoBAX outperforms existing state-of-the-art methods for experimental design in this setting (§ 5). 2 BACKGROUND AND NOTATION Genomic experimentation is an early stage in drug discovery where geneticists assess the effect of genomic interventions on moving a set of disease-relevant phenotypes to determine suitable drug targets. In an abstract language, we assume a black-box function, f : G → R, that maps each gene, g ∈ G, to the value, f(g), corresponding to the magnitude of phenotypic change under gene knock out. The set, G, is finite, |G| = m < ∞, because there are a limited number of protein-encoding genes in the human genome (≈ 20, 000) (Pertea et al., 2018), and is formalizable by either the set of integers or one-hot vectors with dimension m. However, biologically informed embeddings, X : G → X , are often preferred to represent genes for their potential to capture genetic, functional relationships. We assume that gene embeddings, X(g) = x ∈ X ⊆ Rd, are d-dimensional variables, with m distinct members, |X | = m, thus, we use f(g) and f(x) interchangeably. In drug development, a candidate target must meet several criteria to proceed to subsequent stages in the development pipeline. For example, engaging the target – down- or up-regulating the gene – must move the phenotype significantly in the desired direction. Such genes are called “top-movers” of the phenotype. We can define the K top-movers for a given phenotype as members of the set,X = {x1,x2, . . . ,xm}, corresponding to the K largest values of {f(x1), f(x2), . . . , f(xm)}. 
However, each evaluation of the phenotype change, f , requires a CRISPR-Cas9 knockout experiment in the lab, which makes exhaustive experimentation infeasible even for the most resourceful institutions. Hence in practice, the experimentation budget is limited to T ≪ m experiments. Instead of choosing the K top-movers (requiring phenotype change knowledge, f(x), for all inputs x ∈ X ), a more practical approach is to form the subset, Xc ⊆ X , of genes that when knocked out lead to a change in the phenotype, f(x), larger than a selected threshold value, c, i.e. Xc := {x ∈ X : f(x) ≥ c}. Bayesian Algorithm Execution (BAX), proposed by Neiswanger et al. (2021), is a method to estimate the output, OA := OA(f), of an algorithm, A, run on a function, f , by evaluating the function on a budgeted set of inputs, {xi}Ti=1 ∈ X . Estimating a computable property is done by positing a probabilistic model for f for estimating OA. Data is acquired by searching for the value x ∈ X that maximizes the mutual information, I(Yx;OA | Dt), between the function output, Yx, and the algorithm output, OA. BAX assumes that functional output instances, yx, of the function, f , can be observed for each acquired x. The acquisition of data is sequential, where the information gain maximization procedure leads to a dataset of observations, Dt := {(xi, yxi)}t−1i=1 , at step t ∈ [T ]. BAX can be used in conjunction with a number of algorithms, such as determining the superlevel set (i.e. Xc), computing integrals, or finding local optima of f . Given that genomic experimentation seeks to find a diverse set of genes corresponding to the modes of f , the BAX framework is well suited to our task. Concretely, BAX acquisition functions select points by maximizing the expected information gain (EIG) obtained from each point about the output of the algorithm. Crucial to the applicability of BAX to our problem setting is the tractability of accurate approximators of the EIG for algorithms which, like the one we will propose, return a subset of their inputs. The exact computation of the EIG for arbitrary algorithms is not generally tractable; however, Neiswanger et al. (2021) present an approximation that only requires the computation of the entropy of the distribution over function values conditioned on algorithm outputs. EIGvt (x,Dt) = H(fip(x)|Dt)− Ep(S|Dt)[H(fip(x)|S,Dt)]. (1) When the model P is a Gaussian Process, both of these quantities are straightforward to compute: the first is the entropy of the GP’s predictive distribution at x, and we can estimate the second by conditioning a posterior on the values of elements in the set S. Monte Carlo approximation of this quantity is possible when the model P does not permit a closed form. 3 PROBLEM SETTING A primary challenge in the drug discovery pipeline is the discrepancy in outcomes between in vitro experimental data and in vivo diseases. Where In vitro experimental data can quantify the effect of a gene knockout on a specific aspect of a cellular phenotype in a petri dish, in vivo interactions between the drug and the organism may lead to weaker effect sizes or toxicity. The drug discovery pipeline consists of stages that start by testing a set of candidate interventions and then procedes by selecting a subset of promising candidates to pass on for further development. For example, one might test a broad range of gene knockouts on cell cultures and then select a subset to evaluate in animal models. 
These trials can be expensive, so it is desirable to weed out potentially ineffective or toxic candidates before this phase. To do so, researchers can leverage heuristic score functions that predict the ”drug-like-ness” or likelihood of toxicity of a compound (Jiménez-Luna et al., 2020). Considering a diverse set of candidate interventions, where each intervention applies to a different mechanism in the disease phenotype, is also of use because it increases the likelihood of at least one candidate succeeding in the subsequent phase. We formalize this problem as an optimization problem where the optimizer has access to a measurement correlated with the quantity of interest; however, it is noise augmented to emulate the primary objective function. We formalize our search space (i.e., the set of available genes, though in principle this could be any set) G = {g1, . . . , gm}, for which we have some phenotype measurement fip. We will primarily refer to fip as a function from features to phenotype changes, but it is equivalent to expressing fip as a function on genes G. The subscript ‘ip’ stands for intermediate phenotype as it is not the actual clinical measurement caused by the gene knockout. Instead, it is a measurement known to correlate with a disease pathology and is tractable in the lab setting (see Appendix B for detailed formalization). In this paper, we will assume the phenotype change is a real number fip(x) ∈ R; however, given suitable modeling assumptions, it is possible to extend our approach to vector-valued phenotype readouts. We also define a function called disease outcome, fout, which is composed of fip and factors outside the biological pathway, such as toxicity of a molecule that engages with a target gene. The noise component, η, encapsulates all these extra factors. We consider two tractable formulations of the relationship between the disease outcome, fout, and the in vitro phenotype, fip. 1. Multiplicative Bernoulli noise: fout(x; η) = fip(x)η(x) (2) where η(x) ∈ {0, 1},∀x ∈ G, and η is sampled from a Gaussian process classification model. This setting presents a simplified model of drug toxicity: η corresponds to a binary indicator of whether or not the drug is revealed to exhibit unwanted side effects in future trials. The multiplicative noise model assumes that the downstream performance of an intervention is monotone with respect to its effect on the phenotype, conditional on the compound not exhibiting toxicity in future trials. In our experiments, we assume η exhibits correlation structure over inputs corresponding to a GP classification model, and construct the kernel KX of this GP to depend on some notion of distance in the embedding space X . 2. Additive Gaussian noise: fout(x; η) = fip(x) + η(x) η ∼ GP(0,KX ) (3) where η : G → R is drawn from a Gaussian process model with kernel KX . In this case, we assume that the unforeseen effects of the input x are sufficiently numerous to resemble a Gaussian perturbation of the measured in vitro phenotype fip(x). Notice that in the above models, noise is an umbrella term for everything that affects the fitness of a target but is not part of the biological pathway from the gene to the phenotype change. Therefore, the choice of noise distribution and how it affects the outcome is a modelling assumption that is intended to capture coarse inductive biases known to the researcher. 
We additionally seek out a set of interventions S ⊂ G of some fixed size |S| = k whose elements cause the maximum expected change (for some noise distribution) in the disease outcome. In other words, we seek an intervention that best moves the disease phenotype, which will be the best candidate drug. This goal is distinct from either sampling the super-level-sets of fip or finding the set S with the best average performance. Instead, we explicitly seek to identify a set of points whose toxicity or unintended side effects will be minimally correlated, maximizing the odds that at least one will succeed in the subsequent trials. We thus obtain a set-valued maximization problem max S⊆X Eη [ max x∈S fout(x; η) ] . (4) This compact formula is critical to attain our overarching objective: identifying interventions with both a large impact on the phenotype of interest and with high diversity to increase the chance of success of some of them in the subsequent steps of the drug discovery pipeline. An illustrative example is provided in Figure 6 in the Appendix to provide further intuition into this formula. The general formulation of this problem is NP-hard (Goel et al., 2010); therefore, we propose a tractable algorithm that provides a constant-factor approximation of the optimal solution by leveraging the submodular structure of the objective under suitable modeling assumptions. Given such an algorithm, our task is the active learning problem of optimally querying the function, fip, given a limited number of trials, T , to accurately estimate the algorithm’s output on the ground-truth dataset. Importantly, this formulation allows us to decouple modeling the measured phenotype, fip, from modeling the noise η. For example, we might make the modeling assumption that we sample fip from a GP with some kernel k1 and that η is a Bernoulli random variable indicating the safety of the compound. 4 METHOD Various methods exist for efficiently optimizing black-box functions; however, our problem setting violates several assumptions underlying these approaches. In particular, while we assume access to intermediate readouts fip, the actual optimization target of interest fout is not observable. Further, we seek to find a set of interventions that maximize its expected value under some modeling assumptions. These two properties render a broad range of prior art inapplicable. Active sampling methods do not prioritize high-value regions of the input space. Bayesian optimization methods assume access to the ground-truth function outputs (or a noisy observation thereof). And Bayesian algorithm execution approaches based on level-set sampling may not sufficiently decorrelate the hidden noise in the outcome. We propose an intervention set selection algorithm in a Bayesian algorithm execution procedure that leverages the modeling assumptions we characterize in the previous section. This method, Subset Discovery via Bayesian Algorithm Execution (DiscoBAX), consists of two distinct parts. (1) a subset-selection algorithm obtaining a 1− 1/e-factor approximation of the set that optimizes equation 3, and (2) an outer BAX loop that queries the phenotype readings to maximize the information gain about the output of this algorithm. In Section 4.1, we present the idealized form of DiscoBAX and show that it attains an approximately optimal solution. Our approach is easily adaptable to incorporate approximate posterior sampling methods, enabling its use with deep neural networks on high-dimensional datasets. 
We outline this practical implementation in Section 4.2. 4.1 ALGORITHM Subset maximization: we first address the problem of identifying a subset S ⊂ X which maximizes the value Eη[maxx∈S fout(x; η)] As mentioned previously, the exact maximization of this objective is intractable. To construct a tractable approximation, we propose a submodular surrogate objective, under which the value of an intervention is lower-bounded by zero f∗out(x; η) = max(fout(x; η), 0). This choice is motivated by the intuition that any intervention with a negative expected value on the phenotype is equally useless as it will not be considered in later experiment iterations, and so we do not need to distinguish between harmful interventions. The resulting function f(S) = Eη[maxx∈S f∗out(x; η)] will be submodular, and thus Algorithm 1, the greedy algorithm, will provide a 1− 1/e approximation of the optimal solution. Observation 1. The score function f : P(G)→ R defined by f(S) = Eη [ max x∈S ( max(0, fout(x; η) )] (5) is submodular. We provide proof of this result in Appendix A. In practice, we can estimate the expected value in this objective using Monte Carlo (MC) samples over the noise distribution η. Where MC sampling is too expensive, a heuristic that uses a threshold to remove points whose values under η are too highly correlated can also obtain comparable results with a reduced computational burden. Algorithm 1 SubsetSelect (Multiplicative Noise) Require: integer k > 0, set X , distribution P (η), sampled f̂ip : X → R S ← ∅ f̂out(x; η) := f̂ip(x)η(x) for i < k do S ← S ∪ {argmax x∈X\S Eη[ max y∈S∪{x} f̂out(x; η)]} end for return S Algorithm 2 DiscoBAX Require: finite sample set X , budget T , Monte Carlo parameter ℓ ∈ N D ← ∅ for i < T do sample {f̂ip}ℓj=1 ∼ P (fip|D) Sj ← SubsetSelect(f̂ip,j),∀j = 1, . . . , ℓ xi ← argmaxx∈X EIGv(x, Sℓj=1) query fip(xi) D = D ∪ {(xi, fip(xi)} end for return D Active sampling: because we do not assume prior knowledge of the phenotype function fip, we require a means of selecting potential interventions for querying its value at a specified input x. In practice, running these experiments may incur a cost, and so it is desirable to minimize the number of queries necessary to obtain an accurate estimate of the optimal intervention set. BAX (Neiswanger et al., 2021) presents an effective active sampling approach to approximate the output of an algorithm using a minimal number of queries to the dataset of interest. In our setting, this allows us to approximate the output of Algorithm 1 over the set (X , fip(X )) without incurring the cost of evaluating the effect of every knockout intervention in G. Concretely, this procedure takes as input some probabilistic model P which defines a distribution over phenotype readings fip conditioned on the data Dt seen so far and from which it is possible to draw samples. A remark on the efficiency of subset maximization & active sampling— It has to be emphasized that subset selection is a function called within each active sampling cycle. Hence, the above observation about submodularity refers specifically to Algorithm 1 rather than its incorporation in Algorithm 2. If sample efficiency is not a concern this algorithm could be run on the set of all inputs and provide the exact solution. We outline this procedure in Algorithm 2, and refer to Section 2 for additional details. In the batch acquisition setting, we form batches of size B at each cycle by selecting the B points with the highest EIG values. 
4.2 PRACTICAL IMPLEMENTATION IN HIGH DIMENSIONS

When working with high-dimensional input features, we typically leverage Bayesian Neural Networks in lieu of Gaussian Processes. We sample from the parameter distribution via Monte Carlo dropout (MCD) (Gal & Ghahramani, 2016), and rely on Monte Carlo simulation to estimate the quantities introduced in Algorithm 2. In particular, the entropy of the posterior predictive distribution is obtained as follows:
$$H(y_x \mid \mathcal{D}_t) = -\,\mathbb{E}_{p(y_x \mid \mathcal{D}_t)}\left[\log p(y_x \mid \mathcal{D}_t)\right] \approx -\frac{1}{M} \sum_{s=1}^{M} \log p(y^{s}_{x} \mid \mathcal{D}_t, f_s), \quad (6)$$
where the samples $\{y^{s}_{x} = f_s(x)\}_{s=1}^{M}$ are obtained by sampling from the distribution over model parameters with MCD to obtain the parameter samples $\{f_s\}_{s=1}^{M}$.
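The predictive entropy in Equation 6 can be approximated numerically from repeated stochastic forward passes. The sketch below is an assumption-laden illustration: it takes a generic `stochastic_forward` callable that performs one forward pass with dropout left active, treats the predictive distribution as an equal-weight Gaussian mixture over the dropout samples, and fixes a hypothetical observation-noise scale.

```python
import numpy as np

def mc_dropout_entropy(stochastic_forward, x, n_samples=100, obs_noise=0.1, seed=0):
    """Monte Carlo estimate of the predictive entropy H(y_x | D_t) as in Eq. (6).

    `stochastic_forward(x)` is assumed to run ONE forward pass with dropout
    active (MC dropout) and return a scalar prediction f_s(x)."""
    rng = np.random.default_rng(seed)
    preds = np.array([stochastic_forward(x) for _ in range(n_samples)])  # f_s(x)
    ys = preds + obs_noise * rng.normal(size=n_samples)   # y_x^s ~ p(y_x | D_t, f_s)
    # score each y_x^s under the Gaussian-mixture predictive density
    diffs = ys[:, None] - preds[None, :]
    log_comp = -0.5 * (diffs / obs_noise) ** 2 - np.log(obs_noise * np.sqrt(2.0 * np.pi))
    log_mix = np.logaddexp.reduce(log_comp, axis=1) - np.log(n_samples)
    return -log_mix.mean()   # H(y_x | D_t) ~ -E[log p(y_x | D_t)]

# toy usage with a fake stochastic predictor standing in for a dropout network
print(mc_dropout_entropy(lambda x: x + np.random.default_rng().normal(scale=0.5), 1.0))
```

The same estimator can be reused for the conditional entropy term of the EIG by refitting (or reweighting) the dropout model on the fantasized observations in each sampled set S_j.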
5 EXPERIMENTS

In the experimental evaluation of DiscoBAX, we specifically seek to answer the following questions: 1) Does DiscoBAX allow us to reach a better trade-off between recovery of the top interventions and their diversity (Tables 1 and 2)? 2) Is the method sample-efficient, i.e., does it identify global optima in fewer experiments than random sampling or naive optimization baselines (Figures 3 and 5)? 3) Is the performance of DiscoBAX sensitive to various hyperparameter choices (Appendix D.3)? To address these questions, we first focus on experiments involving synthetic datasets (§ 5.1) in which we know the underlying ground-truth objective function. We then conduct experiments across several large-scale experimental assays from the GeneDisco benchmark (Mehrjou et al., 2021) that cover a diverse set of disease phenotypes.

5.1 SYNTHETIC DATA

We begin with a concrete example to illustrate the distinction between the behavior of DiscoBAX and that of existing methods. The dataset we consider is a one-dimensional regression task on a mixture-of-Gaussians density function fmog. We construct fmog such that it exhibits several local optima at a variety of values, necessitating a careful trade-off between exploration and exploitation to optimize the DiscoBAX objective. Crucially, exploitation in this setting requires an accurate estimation not only of the global optimum but also of the local optima. We provide evaluations on additional datasets in Appendix D.1. We consider the following baseline acquisition functions, which select the optimal point x* to query at each iteration, letting µ(x) denote the posterior mean over fip(x) and σ²(x) its variance. We evaluate random sampling, a UCB-like acquisition function, BAX on super-level set and top-k algorithms, Thompson sampling, and uncertainty maximization baselines. Full details are provided in Appendix D.1. In Figure 2, we visualize the solutions found by each approach after 30 iterations. We further evaluate the score of each method, computed as $\mathbb{E}_{\eta}[\max_{x \in S} f_{\mathrm{ip}}(x)\,\eta(x)]$, where η is drawn from a Bernoulli distribution whose logits are determined by an affine transformation of a sample from a GP with zero mean and a radial basis function covariance kernel. This construction ensures a high correlation between the noise values of nearby inputs and therefore rewards sets S whose elements are distant from each other. To select S, we use the learned posterior mean µ from each acquisition strategy as input to Algorithm 1 and set S equal to its output. We observe that most baselines over-exploit the high-value local optima, leading to inaccuracies on the lower optima. As a result, Algorithm 1 is unable to select the optimal subset elements from the lower-value modes, and the model score suffers. The active sampling baseline yields a more uniform sampling distribution over inputs, which results in a relatively uniform distribution of errors. While DiscoBAX does not perfectly estimate the value of the target function, its sampling strategy yields reasonably accurate estimates of all of the local optima.

5.2 GENEDISCO DATASET

Datasets & baselines. The GeneDisco benchmark (Mehrjou et al., 2021) comprises five large-scale genome-wide CRISPR assays and compares the relative strengths of nine active learning algorithms (e.g., margin sampling, Coreset) for optimal experimental design. The objective of the different methods is to select the set of interventions (i.e., genetic knockouts) with the largest impact on the corresponding disease phenotype. We include all existing baselines from the GeneDisco benchmark, as well as eight additional approaches: UCB, qUCB, qEI, qPOI, Thompson sampling, Top-K BAX, Levelset BAX, and DiscoBAX.

Metrics & approach. We define the set of optimal interventions as the ones in the top percentile of the experimentally-measured phenotype (referred to as 'Top-K interventions'). We use the Top-K recall metric to assess the ability of the different methods to identify the best interventions. To quantify the diversity across the set of optimal interventions, we first cluster these interventions in a lower-dimensional subspace (details provided in Appendix C). We then measure the proportion of these clusters that are recalled (i.e., any of their members are selected) by a given algorithm over the different experiment cycles. The overall score of an approach is defined as the geometric mean between Top-K recall and the diversity metric; a simplified sketch of this computation is given at the end of this subsection. For all methods and datasets, we perform 25 consecutive batch acquisition cycles (with batch size 32). All experiments are repeated 10 times with different random seeds.

Results & discussion. We observe that, across the different datasets, DiscoBAX identifies a more diverse set of optimal interventions relative to baselines (Table 1). It does so in a sample-efficient manner, as it achieves higher diversity throughout the different acquisition cycles (Figure 3). Note that sample efficiency is an empirical observation here, not a theoretical property of the algorithm, since it is possible to construct adversarial datasets on which a BAX method will attain no better performance than random sampling. Interestingly, DiscoBAX also tends to recall a higher share of optimal interventions on several assays, which may be the result of very steep extrema in the corresponding datasets. We also find the performance of DiscoBAX to be relatively insensitive to the choice of hyperparameters (Appendix D.3). Lastly, we note that when the input feature space (i.e., the intervention representation) does not correlate much with the disease phenotype of interest, the model being learned tends to perform poorly and we observe no lift between the different methods and random sampling (e.g., the SARS-CoV-2 assay from Zhu et al. (2021) – see Appendix D.2).
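To make the evaluation metric concrete, the following sketch computes the geometric mean of Top-K recall and cluster recall for a set of acquired interventions. The clustering step itself (Appendix C of the paper) is abstracted away into a precomputed label map, and the intervention ids and cluster labels in the usage example are hypothetical.

```python
import numpy as np

def overall_score(selected, top_k_set, cluster_of):
    """Geometric mean of Top-K recall and cluster recall ("diversity").

    selected   : set of intervention ids acquired so far
    top_k_set  : set of ids in the top percentile of the measured phenotype
    cluster_of : dict mapping each top intervention id to its cluster label
    """
    hits = selected & top_k_set
    topk_recall = len(hits) / len(top_k_set)
    all_clusters = {cluster_of[g] for g in top_k_set}
    hit_clusters = {cluster_of[g] for g in hits}
    diversity = len(hit_clusters) / len(all_clusters)
    return np.sqrt(topk_recall * diversity)

# toy usage with hypothetical ids and cluster labels
print(overall_score({1, 2, 9}, {1, 2, 3, 4}, {1: "a", 2: "a", 3: "b", 4: "c"}))
```

Under this score, recovering many top interventions from a single cluster is penalized relative to spreading the same number of hits across mechanisms, which is the behavior the benchmark is meant to reward.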
6 RELATED WORK

Prior work has studied both genomic discovery applications and method development for diverse target generation. Bayesian optimization: Bayesian optimization (BO) is concerned with finding the global optimum of a function with the fewest number of function evaluations (Snoek et al., 2012; Shahriari et al., 2015). Since this target function is often expensive to evaluate, one typically uses a Gaussian process as a surrogate function (Srinivas et al.). The candidates for function evaluation are then determined through a so-called acquisition function, which is often expressed as the expected utility over the surrogate model. Typical choices include the expected improvement (Močkus, 1975, EI) and probability of improvement (Kushner, 1964, PI) as utility functions. Recent work includes variational approaches (Song et al., 2022) which yield a tractable acquisition function whose limiting behavior is equivalent to PI. Prior work has also sought diversity in Bayesian optimization, e.g., through a batch setting (Kirsch et al., 2019) or multi-objective optimization (Hernández-Lobato et al., 2016). Bayesian optimization has been applied to biological problem settings such as small molecule optimization (Korovina et al., 2020) or automatic chemical design (Griffiths & Hernández-Lobato, 2017). Optimal experiment design broadens the scope of Bayesian optimization: rather than simply maximizing a parametric function, the task is to adaptively identify an optimal set of experiments to efficiently reach some goal (Robbins, 1952; Chernoff, 1959). Applying machine learning to automate hypothesis generation and testing goes back multiple decades (King et al., 2004). Optimal experiment design is amenable to Bayesian optimization (Greenhill et al., 2020) and reinforcement learning approaches (Kandasamy et al., 2019). Most related to our work is Bayesian Algorithm Execution (BAX) (Neiswanger et al., 2021), which extends the goal of experiment design from only finding the maximum of a function to estimating more general properties such as level sets, by computing the expected information gain (EIG), the mutual information between the evaluation of an input point and the statistics related to that property. Active learning: while many probabilistic models like Gaussian processes provide principled uncertainty estimates (Rasmussen, 2003), modern neural network architectures often rely on heuristics or only provide approximate uncertainty estimates (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017). Active-learning-based approaches use these uncertainty estimates to maximize the expected information gain about model parameters (Houlsby et al., 2011). Recently, more and more approaches have used active learning based on model uncertainties of neural networks for biomedical applications. Bandits: the upper confidence bounds seen in BO originate in the bandit setting (Lai & Robbins, 1985), in which one can extend the widely-used UCB algorithm to Gaussian processes (Grünewälder et al., 2010; Srinivas et al.). While both bandits and BO seek to find the maximum of a function, the two problem settings leverage different notions of optimality. BO seeks to identify the argmax, whereas bandits seek to minimize the number of sub-optimal queries. Related to bandits and BO, some efforts have been made to formulate active learning as a reinforcement learning problem (Slade & Branson, 2022; Casanova et al., 2020; Konyushkova et al., 2017; Pang et al., 2018).

7 CONCLUSION

We have introduced a mathematical formalization of the drug discovery problem that captures the noise induced by moving from in vitro to in vivo experiments. We proposed a novel algorithm based on Bayesian Algorithm Execution and illustrated its utility on several illustrative synthetic datasets. We further evaluated this class of methods on the real-world large-scale assays from the GeneDisco benchmark, where they help identify diverse top interventions better than existing baselines.
Future work could extend the current framework to explicitly account for the fact that experimental cycles happen in batches. Further, we assume in this work that distant representations of interventions imply different underlying biological mechanisms; a proper causal formulation of the problem would allow us to tell apart causally connected pathways more cleanly. Finally, it is typical practice to measure several potential intermediate phenotypes of interest to capture different aspects of the disease, which requires an extension of our approach to the setting of multiple objectives.

8 REPRODUCIBILITY STATEMENT

We clearly state our modelling assumptions throughout Sections 2 to 4. We provide proof for our theoretical claims in Appendix A. All experimental results reported in Section 5 and Appendix D can be reproduced using the code available at: https://github.com/anonymous35780/solaris-2023-iclr. Hyper-parameter sweeps for the BAX methods for GeneDisco are presented in Table 3.
1. What is the focus of the paper regarding experiment design? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its formalization and exploration? 3. Do you have any concerns about the investigation of DiscoBAX's sample efficiency, sensitivity to hyperparameter settings, and comparison with other methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the use of the relu'd score function and its role in making the optimization submodular?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper considers the problem of designing an experiment where there are two stages: in the first (in vitro) stage our task is to (efficiently) design an experiment that will have a good chance of success in the second (in vivo) stage. The authors formalize this elegantly in equation (3). The proposed solution is DiscoBAX: an algorithm that actively searches the in-vitro function for a set of optimal yet diverse points that have the best shot at having a successful outcome in the in vivo stage. The paper explores DiscoBAX on a synthetic and a real-world dataset.
Strengths And Weaknesses
Strengths: After spending some time with the paper, I came to appreciate the formalisation of the setting that is being worked in. I particularly enjoyed section 3, which laid out the problem. I appreciate the authors' use of the appendices to keep long proofs out of the paper (the submodular proof, for example) and also the clear presentation of the algorithms used. Posting the code anonymously gives me some confidence that the work is sound.
Weaknesses: The paper fails to deliver what is promised at the start of the experimental section - an investigation of DiscoBAX's sample efficiency, sensitivity to hyper-parameter settings, and comparison with the top-intervention method. As a result I feel that the paper is really rather unfinished! I am keen to understand more about DiscoBAX and am disappointed that the investigation is not deeper. The 'related work' section at the back does not cover BAX - which appears to be prior art? This section feels like filler; I would have preferred the space be used to investigate DiscoBAX further. I did not find a satisfactory discussion (or empirical evaluation) of the use of the relu'd score function, relative to the original objective. I get that this makes the optimization submodular, but have we lost anything? In which cases does this cause a problem? It took me some time to work out what was going on - I think more discussion around Figure 1 is needed, and perhaps some commentary on how this is connected to the in-vitro/in-vivo setting. I liked the clarity of eq 3, but this felt like a long time coming, and would perhaps have been better placed at the start of the paper.
Clarity, Quality, Novelty And Reproducibility
I have to give the authors top marks for reproducibility since the code is available. A problem with the paper is that it is hard for me to assess novelty. Although the authors do lay out their contributions at the beginning, it's not clear to me whether this paper or another is the source of the technical inventions - for example, the submodularity of the relu'd score function. A minor (clarity) point, but the figures are of poor quality. I cannot read the axis labels when printed out. Unfortunately this gives the impression of a rushed paper.
ICLR
Title DiscoBAX - Discovery of optimal intervention sets in genomic experiment design
Abstract The discovery of therapeutics to treat genetically-driven pathologies relies on identifying genes involved in the underlying disease mechanism. With billions of potential hypotheses to test, an exhaustive exploration of the entire space of potential interventions is impossible in practice. Sample-efficient methods based on active learning or Bayesian optimization bear the promise of identifying targets of interest using as few experiments as possible. However, genomic perturbation experiments typically rely on proxy outcomes measured in biological model systems that may not completely correlate with the results of interventions in humans. In practical experiment design, one aims to find a set of interventions that maximally move a target phenotype via a diverse mechanism set to reduce the risk of failure in future stages of trials. To that end, we introduce DiscoBAX, a sample-efficient algorithm for genomic intervention discovery that maximizes the desired movement of a phenotype while covering a diverse set of underlying mechanisms. We provide theoretical guarantees on the optimality of the approach under standard assumptions, conduct extensive experiments in synthetic and real-world settings relevant to genomic discovery, and demonstrate that DiscoBAX outperforms state-of-the-art active learning and Bayesian optimization methods in this task. Better methods for selecting effective and diverse perturbations in biological systems could enable researchers to discover novel therapeutics for many genetically-driven diseases.

1 INTRODUCTION

Genomic experiments probing the function of genes under realistic cellular conditions are the cornerstone of modern early-stage drug target discovery and validation; moreover, they are used to identify effective modulators of one or more disease-relevant cellular processes. These experiments, for example using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) (Jehuda et al., 2018) perturbations, are both time and resource-intensive (Dickson & Gagnon, 2004; 2009; DiMasi et al., 2016; Berdigaliyev & Aljofan, 2020). Therefore, an exhaustive search of the billions of potential experimental protocols covering all possible experimental conditions, cell states, cell types, and perturbations (Trapnell, 2015; Hasin et al., 2017; Worzfeld et al., 2017; Chappell et al., 2018; MacLean et al., 2018) is infeasible even for the world's largest biomedical research institutes. Furthermore, to mitigate the chances of failure in subsequent stages of the drug design pipeline, it is desirable for the subset of precursors selected in the target identification stage to operate on diverse underlying biological mechanisms (Nica et al., 2022). That way, if a promising candidate based on in-vitro experiments triggers unexpected issues when tested in-vivo (e.g., undesirable side effects), other lead precursors relying on different pathways might be suitable replacements that are not subject to the same issues. Mathematically, finding a diverse set of precursors corresponds to identifying and sampling from the different modes of the black-box objective function mapping intervention representations to the corresponding effects on the disease phenotype (§ 2).
Existing machine learning methods for iterative experimental design (e.g., active learning, Bayesian optimization) have the potential to aid in efficiently exploring this vast biological intervention space. However, to our knowledge, there is no method geared toward identifying the modes of the underlying black-box objective function so as to propose candidate interventions that are both effective and diverse (§ 6). To this end, we introduce DiscoBAX - a sample-efficient Bayesian Algorithm eXecution (BAX) method for discovering genomic intervention sets with both high expected change in the target phenotype and high diversity to maximize chances of success in the following stages of drug development (Figure 1), which we formalize as a set-valued maximization problem (Equation 4). After providing theoretical guarantees on the optimality of the presented approach under standard conditions, we perform a comprehensive experimental evaluation in both synthetic and real-world datasets. The experiments show that DiscoBAX outperforms existing state-of-the-art active learning and Bayesian optimization methods in designing genomic experiments that maximize the yield of findings that could lead to the discovery of new potentially treatable disease mechanisms. Our contributions are as follows:
• We formalize the gene target identification problem (§ 3) and discuss limitations of existing methods in addressing this problem (§ 6).
• We develop DiscoBAX - a sample-efficient BAX method for maximizing the rate of significant discoveries per experiment while simultaneously probing for a wide range of diverse mechanisms during a genomic experiment campaign (§ 4).
• We provide theoretical guarantees that substantiate the optimality of DiscoBAX under standard assumptions (§ 4 and Appendix A).
• We conduct a comprehensive experimental evaluation covering both synthetic as well as real-world experimental design tasks that demonstrates that DiscoBAX outperforms existing state-of-the-art methods for experimental design in this setting (§ 5).

2 BACKGROUND AND NOTATION

Genomic experimentation is an early stage in drug discovery where geneticists assess the effect of genomic interventions on moving a set of disease-relevant phenotypes to determine suitable drug targets. In abstract terms, we assume a black-box function, f : G → R, that maps each gene, g ∈ G, to the value, f(g), corresponding to the magnitude of phenotypic change under gene knockout. The set G is finite, |G| = m < ∞, because there are a limited number of protein-encoding genes in the human genome (≈20,000) (Pertea et al., 2018), and is formalizable by either the set of integers or one-hot vectors with dimension m. However, biologically informed embeddings, $X : G \to \mathcal{X}$, are often preferred to represent genes for their potential to capture genetic and functional relationships. We assume that gene embeddings, $X(g) = x \in \mathcal{X} \subseteq \mathbb{R}^d$, are d-dimensional variables with m distinct members, $|\mathcal{X}| = m$; thus we use f(g) and f(x) interchangeably. In drug development, a candidate target must meet several criteria to proceed to subsequent stages in the development pipeline. For example, engaging the target (down- or up-regulating the gene) must move the phenotype significantly in the desired direction. Such genes are called "top-movers" of the phenotype. We can define the K top-movers for a given phenotype as the members of the set $\mathcal{X} = \{x_1, x_2, \ldots, x_m\}$ corresponding to the K largest values of $\{f(x_1), f(x_2), \ldots, f(x_m)\}$.
However, each evaluation of the phenotype change, f, requires a CRISPR-Cas9 knockout experiment in the lab, which makes exhaustive experimentation infeasible even for the most resourceful institutions. Hence, in practice, the experimentation budget is limited to T ≪ m experiments. Instead of choosing the K top-movers (which would require knowledge of the phenotype change, f(x), for all inputs $x \in \mathcal{X}$), a more practical approach is to form the subset, $\mathcal{X}_c \subseteq \mathcal{X}$, of genes that when knocked out lead to a change in the phenotype, f(x), larger than a selected threshold value, c, i.e. $\mathcal{X}_c := \{x \in \mathcal{X} : f(x) \geq c\}$. Bayesian Algorithm Execution (BAX), proposed by Neiswanger et al. (2021), is a method to estimate the output, $O_A := O_A(f)$, of an algorithm, A, run on a function, f, by evaluating the function on a budgeted set of inputs, $\{x_i\}_{i=1}^{T} \subset \mathcal{X}$. A computable property is estimated by positing a probabilistic model for f and using it to estimate $O_A$. Data is acquired by searching for the value $x \in \mathcal{X}$ that maximizes the mutual information, $I(Y_x; O_A \mid \mathcal{D}_t)$, between the function output, $Y_x$, and the algorithm output, $O_A$. BAX assumes that functional output instances, $y_x$, of the function, f, can be observed for each acquired x. The acquisition of data is sequential, where the information gain maximization procedure leads to a dataset of observations, $\mathcal{D}_t := \{(x_i, y_{x_i})\}_{i=1}^{t-1}$, at step t ∈ [T]. BAX can be used in conjunction with a number of algorithms, such as determining the super-level set (i.e. $\mathcal{X}_c$), computing integrals, or finding local optima of f. Given that genomic experimentation seeks to find a diverse set of genes corresponding to the modes of f, the BAX framework is well suited to our task. Concretely, BAX acquisition functions select points by maximizing the expected information gain (EIG) obtained from each point about the output of the algorithm. Crucial to the applicability of BAX to our problem setting is the tractability of accurate approximators of the EIG for algorithms which, like the one we will propose, return a subset of their inputs. The exact computation of the EIG for arbitrary algorithms is not generally tractable; however, Neiswanger et al. (2021) present an approximation that only requires the computation of the entropy of the distribution over function values conditioned on algorithm outputs:
$$\mathrm{EIG}^{v}_{t}(x, \mathcal{D}_t) = H(f_{\mathrm{ip}}(x) \mid \mathcal{D}_t) - \mathbb{E}_{p(S \mid \mathcal{D}_t)}\left[H(f_{\mathrm{ip}}(x) \mid S, \mathcal{D}_t)\right]. \quad (1)$$
When the model P is a Gaussian Process, both of these quantities are straightforward to compute: the first is the entropy of the GP's predictive distribution at x, and we can estimate the second by conditioning a posterior on the values of elements in the set S. Monte Carlo approximation of this quantity is possible when the model P does not permit a closed form.
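For a GP model, both terms in Equation 1 are entropies of one-dimensional Gaussians, so the EIG can be computed from posterior variances alone (the conditional variance of a GP does not depend on the observed values, only on their locations). The sketch below illustrates this for scalar inputs with an RBF kernel; the kernel, noise level, and the way sampled algorithm outputs are folded in as fantasy observations are our simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior_var(x, X_obs, noise=1e-4, ls=1.0):
    """Posterior variance of a zero-mean GP (unit signal variance) at scalar x."""
    if len(X_obs) == 0:
        return 1.0
    K = rbf(X_obs, X_obs, ls) + noise * np.eye(len(X_obs))
    k = rbf(np.array([x]), X_obs, ls)[0]
    return float(1.0 - k @ np.linalg.solve(K, k))

def gaussian_entropy(var):
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def eig_bax(x, X_train, algorithm_outputs, noise=1e-4, ls=1.0):
    """Sketch of Eq. (1): H(f_ip(x) | D_t) minus the mean entropy after
    conditioning on each sampled algorithm output S_j (treated as extra inputs)."""
    h_prior = gaussian_entropy(gp_posterior_var(x, X_train, noise, ls))
    h_post = [
        gaussian_entropy(gp_posterior_var(
            x, np.concatenate([X_train, np.asarray(S_j, dtype=float)]), noise, ls))
        for S_j in algorithm_outputs
    ]
    return h_prior - float(np.mean(h_post))

# toy usage: two sampled algorithm outputs, three points observed so far
print(eig_bax(0.3, np.array([0.0, 1.0, 2.0]), [[0.4, 1.6], [0.2, 2.5]]))
```

Candidates close to the points the algorithm would return (but far from what has already been queried) receive the largest EIG, which is the acquisition behavior the BAX loop relies on.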
3 PROBLEM SETTING

A primary challenge in the drug discovery pipeline is the discrepancy in outcomes between in vitro experimental data and in vivo diseases. While in vitro experimental data can quantify the effect of a gene knockout on a specific aspect of a cellular phenotype in a petri dish, in vivo interactions between the drug and the organism may lead to weaker effect sizes or toxicity. The drug discovery pipeline consists of stages that start by testing a set of candidate interventions and then proceed by selecting a subset of promising candidates to pass on for further development. For example, one might test a broad range of gene knockouts on cell cultures and then select a subset to evaluate in animal models. These trials can be expensive, so it is desirable to weed out potentially ineffective or toxic candidates before this phase. To do so, researchers can leverage heuristic score functions that predict the "drug-likeness" or likelihood of toxicity of a compound (Jiménez-Luna et al., 2020). Considering a diverse set of candidate interventions, where each intervention applies to a different mechanism in the disease phenotype, is also of use because it increases the likelihood of at least one candidate succeeding in the subsequent phase. We formalize this problem as an optimization problem where the optimizer has access to a measurement correlated with the quantity of interest; however, that measurement is augmented with noise to emulate the primary objective function. We formalize our search space (i.e., the set of available genes, though in principle this could be any set) as G = {g1, . . . , gm}, for which we have some phenotype measurement fip. We will primarily refer to fip as a function from features to phenotype changes, but it is equivalent to expressing fip as a function on the genes G. The subscript 'ip' stands for intermediate phenotype, as it is not the actual clinical measurement caused by the gene knockout. Instead, it is a measurement known to correlate with a disease pathology and is tractable in the lab setting (see Appendix B for a detailed formalization). In this paper, we will assume the phenotype change is a real number, fip(x) ∈ R; however, given suitable modeling assumptions, it is possible to extend our approach to vector-valued phenotype readouts. We also define a function called the disease outcome, fout, which is composed of fip and factors outside the biological pathway, such as toxicity of a molecule that engages with a target gene. The noise component, η, encapsulates all these extra factors. We consider two tractable formulations of the relationship between the disease outcome, fout, and the in vitro phenotype, fip.

1. Multiplicative Bernoulli noise:
$$f_{\mathrm{out}}(x; \eta) = f_{\mathrm{ip}}(x)\,\eta(x), \quad (2)$$
where η(x) ∈ {0, 1} for all x ∈ G, and η is sampled from a Gaussian process classification model. This setting presents a simplified model of drug toxicity: η corresponds to a binary indicator of whether or not the drug is revealed to exhibit unwanted side effects in future trials. The multiplicative noise model assumes that the downstream performance of an intervention is monotone with respect to its effect on the phenotype, conditional on the compound not exhibiting toxicity in future trials. In our experiments, we assume η exhibits correlation structure over inputs corresponding to a GP classification model, and construct the kernel $K_{\mathcal{X}}$ of this GP to depend on some notion of distance in the embedding space $\mathcal{X}$.

2. Additive Gaussian noise:
$$f_{\mathrm{out}}(x; \eta) = f_{\mathrm{ip}}(x) + \eta(x), \qquad \eta \sim \mathcal{GP}(0, K_{\mathcal{X}}), \quad (3)$$
where η : G → R is drawn from a Gaussian process model with kernel $K_{\mathcal{X}}$. In this case, we assume that the unforeseen effects of the input x are sufficiently numerous to resemble a Gaussian perturbation of the measured in vitro phenotype fip(x).

Notice that in the above models, noise is an umbrella term for everything that affects the fitness of a target but is not part of the biological pathway from the gene to the phenotype change. Therefore, the choice of noise distribution and how it affects the outcome is a modelling assumption that is intended to capture coarse inductive biases known to the researcher.
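A quick way to build intuition for the two noise models is to simulate them. The sketch below draws one realisation of f_out over a candidate set under either Equation 2 (a latent GP pushed through a logistic link and sampled into a Bernoulli mask, standing in for the GP classification model) or Equation 3 (additive GP noise). The RBF kernel, lengthscale, and link function are illustrative choices of ours rather than the paper's exact construction.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def sample_fout(f_ip, X, model="multiplicative", seed=0):
    """Draw one realisation of f_out(x; eta) over all candidates X under
    the noise models of Eqs. (2) and (3)."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X) + 1e-6 * np.eye(len(X))
    z = rng.multivariate_normal(np.zeros(len(X)), K)        # latent GP draw
    if model == "multiplicative":                            # Eq. (2): eta in {0, 1}
        eta = rng.binomial(1, 1.0 / (1.0 + np.exp(-z)))      # Bernoulli via a logistic link
        return f_ip * eta
    return f_ip + z                                          # Eq. (3): additive GP noise

# toy usage on 30 candidates with 4-dimensional embeddings
X = np.random.default_rng(0).normal(size=(30, 4))
f_ip = np.random.default_rng(1).normal(size=30)
print(sample_fout(f_ip, X, "multiplicative").shape, sample_fout(f_ip, X, "additive").shape)
```

Because the noise is correlated through the embedding-space kernel, interventions with nearby representations tend to fail (or deviate) together, which is what makes spreading a candidate set across distant modes valuable under the objective in Equation 4.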
1. What is the focus and contribution of the paper regarding iterative selection of optimal targets for genetic interventions? 2. What are the strengths and weaknesses of the proposed method, particularly in its application and technical aspects? 3. Do you have any concerns or questions about the paper's content, such as clarity, quality, novelty, and reproducibility?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper is concerned with the iterative selection of an optimal set of targets for genetic interventions based on a scalar readout. The methods section describes a particular instance of BAX that models the particularities of the biological problem being considered. The aspects that make it different from the existing InfoBAX approach are (1) the subset-maximization target function and (2) a practical uncertainty model with BNNs instead of GPs. The paper then provides a set of experiments on synthetic data and on the openly available GeneDisco benchmark.
Strengths And Weaknesses
Strengths: (1) Novelty in its application: this is one of the (relatively) few papers that tackle this important scientific problem (another example I could find is https://arxiv.org/pdf/2207.12805.pdf). (2) The paper shows improved results on the recent GeneDisco benchmark, for a few of the datasets.
Weaknesses: (1) My main issue is that I am having a hard time understanding the technical novelty of the paper. A lot of the concepts explained in Section 4 seem to belong more to a background section, as they are not contributions of this particular work but come from the BAX paper. As far as I understand, the contributions are (a) working with sets and performing greedy set optimization, and (b) using BNNs for uncertainty quantification. However, in the results section, I don't see a clear explanation that those changes improve performance. It seems that (a) improves diversity? (If so, it should be made clear.) But (b) has no comparison to different flavors of BNNs, GPs, etc. (2) There is no theoretical analysis of how the (1 - 1/e) approximation error propagates through the BAX procedure. It is therefore not correct for the authors to write "theoretical guarantees on the optimality of the approach". Similarly, the authors write that DiscoBAX is "sample-efficient", but I have seen no proof of sample efficiency, and the experimental results are not necessarily strong enough to claim this. (3) There is some improvement over BAX, but only in a few datasets (2 / 4). (4) There should be a comparison to other BNN flavors, and at the very least a discussion of uncertainty modeling w/ neural networks.
Clarity, Quality, Novelty And Reproducibility
The paper does not make its contributions clear, and I do think that this is currently an issue. Otherwise, the paper reads well. The application of BAX to genomics is novel (but the practical implications for the field of genetics are rather unclear), while the improvement over BAX is not clear (as the paper is written now).
ICLR
Title DiscoBAX - Discovery of optimal intervention sets in genomic experiment design Abstract The discovery of therapeutics to treat genetically-driven pathologies relies on identifying genes involved in the underlying disease mechanism. With billions of potential hypotheses to test, an exhaustive exploration of the entire space of potential interventions is impossible in practice. Sample-efficient methods based on active learning or Bayesian optimization bear the promise of identifying targets of interest using as few experiments as possible. However, genomic perturbation experiments typically rely on proxy outcomes measured in biological model systems that may not completely correlate with the results of interventions in humans. In practical experiment design, one aims to find a set of interventions that maximally move a target phenotype via a diverse mechanism set to reduce the risk of failure in future stages of trials. To that end, we introduce DiscoBAX — a sample-efficient algorithm for genomic intervention discovery that maximizes the desired movement of a phenotype while covering a diverse set of underlying mechanisms. We provide theoretical guarantees on the optimality of the approach under standard assumptions, conduct extensive experiments in synthetic and realworld settings relevant to genomic discovery, and demonstrate that DiscoBax outperforms state-of-the-art active learning and Bayesian optimization methods in this task. Better methods for selecting effective and diverse perturbations in biological systems could enable researchers to discover novel therapeutics for many genetically-driven diseases. 1 INTRODUCTION Genomic experiments probing the function of genes under realistic cellular conditions are the cornerstone of modern early-stage drug target discovery and validation; moreover, they are used to identify effective modulators of one or more disease-relevant cellular processes. These experiments, for example using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) (Jehuda et al., 2018) perturbations, are both time and resource-intensive (Dickson & Gagnon, 2004; 2009; DiMasi et al., 2016; Berdigaliyev & Aljofan, 2020). Therefore, an exhaustive search of the billions of potential experimental protocols covering all possible experimental conditions, cell states, cell types, and perturbations (Trapnell, 2015; Hasin et al., 2017; Worzfeld et al., 2017; Chappell et al., 2018; MacLean et al., 2018; Chappell et al., 2018) is infeasible even for the world’s largest biomedical research institutes. Furthermore, to mitigate the chances of failure in subsequent stages of the drug design pipeline, it is desirable for the subset of precursors selected in the target identification stage to operate on diverse underlying biological mechanisms (Nica et al., 2022). That way, if a promising candidate based on in-vitro experiments triggers unexpected issues when tested in-vivo (e.g., undesirable side effects), other lead precursors relying on different pathways might be suitable replacements that are not subject to the same issues. Mathematically, finding a diverse set of precursors corresponds to identifying and sampling from the different modes of the black-box objective function mapping intervention representations to the corresponding effects on the disease phenotype (§ 2). 
Existing machine learning methods for iterative experimental design (e.g., active learning, Bayesian optimization) have the potential to aid in efficiently exploring this vast biological intervention space. However, to our knowledge, there is no method geared toward identifying the modes of the underlying black-box objective function to identify candidate interventions that are both effective and diverse (§ 6). To this end, we introduce DiscoBAX - a sample-efficient Bayesian Algorithm eXecution (BAX) method for discovering genomic intervention sets with both high expected change in the target phe- notype and high diversity to maximize chances of success in the following stages of drug development (Figure 1), which we formalize as set-valued maximization problem (Equation 4). After providing theoretical guarantees on the optimality of the presented approach under standard conditions, we perform a comprehensive experimental evaluation in both synthetic and real-world datasets. The experiments show that DiscoBAX outperforms existing state-of-the-art active learning and Bayesian optimization methods in designing genomic experiments that maximize the yield of findings that could lead to the discovery of new potentially treatable disease mechanisms. Our contributions are as follows: • We formalize the gene target identification problem (§ 3) and discuss limitations of existing methods in addressing this problem (§ 6). • We develop DiscoBAX - a sample-efficient BAX method for maximizing the rate of significant discoveries per experiment while simultaneously probing for a wide range of diverse mechanisms during a genomic experiment campaign (§ 4). • We provide theoretical guarantees that substantiate the optimality of DiscoBAX under standard assumptions (§ 4 and Appendix A). • We conduct a comprehensive experimental evaluation covering both synthetic as well as real-world experimental design tasks that demonstrate that DiscoBAX outperforms existing state-of-the-art methods for experimental design in this setting (§ 5). 2 BACKGROUND AND NOTATION Genomic experimentation is an early stage in drug discovery where geneticists assess the effect of genomic interventions on moving a set of disease-relevant phenotypes to determine suitable drug targets. In an abstract language, we assume a black-box function, f : G → R, that maps each gene, g ∈ G, to the value, f(g), corresponding to the magnitude of phenotypic change under gene knock out. The set, G, is finite, |G| = m < ∞, because there are a limited number of protein-encoding genes in the human genome (≈ 20, 000) (Pertea et al., 2018), and is formalizable by either the set of integers or one-hot vectors with dimension m. However, biologically informed embeddings, X : G → X , are often preferred to represent genes for their potential to capture genetic, functional relationships. We assume that gene embeddings, X(g) = x ∈ X ⊆ Rd, are d-dimensional variables, with m distinct members, |X | = m, thus, we use f(g) and f(x) interchangeably. In drug development, a candidate target must meet several criteria to proceed to subsequent stages in the development pipeline. For example, engaging the target – down- or up-regulating the gene – must move the phenotype significantly in the desired direction. Such genes are called “top-movers” of the phenotype. We can define the K top-movers for a given phenotype as members of the set,X = {x1,x2, . . . ,xm}, corresponding to the K largest values of {f(x1), f(x2), . . . , f(xm)}. 
However, each evaluation of the phenotype change, f, requires a CRISPR-Cas9 knockout experiment in the lab, which makes exhaustive experimentation infeasible even for the most resourceful institutions. Hence, in practice, the experimentation budget is limited to T ≪ m experiments. Instead of choosing the K top-movers (which would require knowledge of the phenotype change, f(x), for all inputs x ∈ X), a more practical approach is to form the subset, X_c ⊆ X, of genes that when knocked out lead to a change in the phenotype, f(x), larger than a selected threshold value, c, i.e. X_c := {x ∈ X : f(x) ≥ c}. Bayesian Algorithm Execution (BAX), proposed by Neiswanger et al. (2021), is a method to estimate the output, O_A := O_A(f), of an algorithm, A, run on a function, f, by evaluating the function on a budgeted set of inputs, {x_i}_{i=1}^T ⊆ X. Estimating a computable property is done by positing a probabilistic model for f and using it to estimate O_A. Data is acquired by searching for the value x ∈ X that maximizes the mutual information, I(Y_x; O_A | D_t), between the function output, Y_x, and the algorithm output, O_A. BAX assumes that functional output instances, y_x, of the function, f, can be observed for each acquired x. The acquisition of data is sequential, where the information gain maximization procedure leads to a dataset of observations, D_t := {(x_i, y_{x_i})}_{i=1}^{t-1}, at step t ∈ [T]. BAX can be used in conjunction with a number of algorithms, such as determining the superlevel set (i.e. X_c), computing integrals, or finding local optima of f. Given that genomic experimentation seeks to find a diverse set of genes corresponding to the modes of f, the BAX framework is well suited to our task. Concretely, BAX acquisition functions select points by maximizing the expected information gain (EIG) obtained from each point about the output of the algorithm. Crucial to the applicability of BAX to our problem setting is the tractability of accurate approximators of the EIG for algorithms which, like the one we will propose, return a subset of their inputs. The exact computation of the EIG for arbitrary algorithms is not generally tractable; however, Neiswanger et al. (2021) present an approximation that only requires the computation of the entropy of the distribution over function values conditioned on algorithm outputs: EIG_t(x, D_t) = H(f_ip(x) | D_t) − E_{p(S|D_t)}[H(f_ip(x) | S, D_t)]. (1) When the model P is a Gaussian Process, both of these quantities are straightforward to compute: the first is the entropy of the GP's predictive distribution at x, and we can estimate the second by conditioning a posterior on the values of elements in the set S. Monte Carlo approximation of this quantity is possible when the model P does not permit a closed form.
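To make Eq. (1) concrete, the following is a minimal sketch of the EIG computation for a subset-returning algorithm, assuming a scikit-learn Gaussian process surrogate (so predictive entropies have the closed Gaussian form) and at least one prior observation; the `subset_selector` callable stands in for the algorithm A, and all names here are ours rather than the authors'.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gaussian_entropy(std):
    # Differential entropy of a univariate Gaussian with the given std.
    return 0.5 * np.log(2.0 * np.pi * np.e * (std ** 2 + 1e-12))

def expected_info_gain(gp, X_pool, D_X, D_y, subset_selector, n_samples=16):
    """Monte Carlo estimate of Eq. (1): H(f(x)|D) - E_S[H(f(x)|S, D)]."""
    _, std_prior = gp.predict(X_pool, return_std=True)
    h_prior = gaussian_entropy(std_prior)

    h_post = np.zeros(len(X_pool))
    for _ in range(n_samples):
        # Draw a plausible phenotype function from the posterior and run the
        # subset-returning algorithm A on it.
        f_sample = gp.sample_y(X_pool, n_samples=1).ravel()
        S_idx = subset_selector(X_pool, f_sample)          # indices returned by A
        # Condition a fresh posterior on the sampled values at the algorithm output.
        gp_cond = GaussianProcessRegressor(kernel=gp.kernel_, alpha=1e-6)
        gp_cond.fit(np.vstack([D_X, X_pool[S_idx]]),
                    np.concatenate([D_y, f_sample[S_idx]]))
        _, std_cond = gp_cond.predict(X_pool, return_std=True)
        h_post += gaussian_entropy(std_cond)
    return h_prior - h_post / n_samples
```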
3 PROBLEM SETTING A primary challenge in the drug discovery pipeline is the discrepancy in outcomes between in vitro experimental data and in vivo diseases. While in vitro experimental data can quantify the effect of a gene knockout on a specific aspect of a cellular phenotype in a petri dish, in vivo interactions between the drug and the organism may lead to weaker effect sizes or toxicity. The drug discovery pipeline consists of stages that start by testing a set of candidate interventions and then proceed by selecting a subset of promising candidates to pass on for further development. For example, one might test a broad range of gene knockouts on cell cultures and then select a subset to evaluate in animal models. These trials can be expensive, so it is desirable to weed out potentially ineffective or toxic candidates before this phase. To do so, researchers can leverage heuristic score functions that predict the "drug-likeness" or likelihood of toxicity of a compound (Jiménez-Luna et al., 2020). Considering a diverse set of candidate interventions, where each intervention applies to a different mechanism in the disease phenotype, is also of use because it increases the likelihood of at least one candidate succeeding in the subsequent phase. We formalize this problem as an optimization problem in which the optimizer has access to a measurement correlated with the quantity of interest; this measurement is augmented with noise to emulate the primary objective function. We formalize our search space (i.e., the set of available genes, though in principle this could be any set) as G = {g_1, . . . , g_m}, for which we have some phenotype measurement f_ip. We will primarily refer to f_ip as a function from features to phenotype changes, but it is equivalent to expressing f_ip as a function on genes G. The subscript 'ip' stands for intermediate phenotype, as it is not the actual clinical measurement caused by the gene knockout. Instead, it is a measurement known to correlate with a disease pathology and is tractable in the lab setting (see Appendix B for a detailed formalization). In this paper, we will assume the phenotype change is a real number, f_ip(x) ∈ R; however, given suitable modeling assumptions, it is possible to extend our approach to vector-valued phenotype readouts. We also define a function called the disease outcome, f_out, which is composed of f_ip and factors outside the biological pathway, such as the toxicity of a molecule that engages with a target gene. The noise component, η, encapsulates all these extra factors. We consider two tractable formulations of the relationship between the disease outcome, f_out, and the in vitro phenotype, f_ip. 1. Multiplicative Bernoulli noise: f_out(x; η) = f_ip(x) η(x) (2) where η(x) ∈ {0, 1}, ∀x ∈ G, and η is sampled from a Gaussian process classification model. This setting presents a simplified model of drug toxicity: η corresponds to a binary indicator of whether or not the drug is revealed to exhibit unwanted side effects in future trials. The multiplicative noise model assumes that the downstream performance of an intervention is monotone with respect to its effect on the phenotype, conditional on the compound not exhibiting toxicity in future trials. In our experiments, we assume η exhibits correlation structure over inputs corresponding to a GP classification model, and construct the kernel K_X of this GP to depend on some notion of distance in the embedding space X. 2. Additive Gaussian noise: f_out(x; η) = f_ip(x) + η(x), η ∼ GP(0, K_X) (3) where η : G → R is drawn from a Gaussian process model with kernel K_X. In this case, we assume that the unforeseen effects of the input x are sufficiently numerous to resemble a Gaussian perturbation of the measured in vitro phenotype f_ip(x). Notice that in the above models, noise is an umbrella term for everything that affects the fitness of a target but is not part of the biological pathway from the gene to the phenotype change. Therefore, the choice of noise distribution and how it affects the outcome is a modeling assumption that is intended to capture coarse inductive biases known to the researcher.
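As a concrete illustration of the two noise models in Eqs. (2) and (3), the sketch below draws correlated noise from an RBF kernel over gene embeddings; the kernel choice, lengthscale, and function names are our assumptions for illustration only, not the authors' exact construction.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Pairwise RBF kernel over gene embeddings X of shape (m, d).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def sample_outcome(X, f_ip, mode="multiplicative", rng=None):
    """Draw one realization of f_out(x; eta) under the two noise models."""
    rng = np.random.default_rng(rng)
    K = rbf_kernel(X) + 1e-6 * np.eye(len(X))
    g = rng.multivariate_normal(np.zeros(len(X)), K)   # correlated latent noise
    if mode == "multiplicative":
        # Bernoulli indicator with GP-classification-style logits (Eq. 2).
        eta = rng.binomial(1, 1.0 / (1.0 + np.exp(-g)))
        return f_ip * eta
    # Additive GP noise (Eq. 3).
    return f_ip + g
```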
We additionally seek out a set of interventions S ⊂ G of some fixed size |S| = k whose elements cause the maximum expected change (for some noise distribution) in the disease outcome. In other words, we seek an intervention that best moves the disease phenotype, which will be the best candidate drug. This goal is distinct from either sampling the super-level-sets of f_ip or finding the set S with the best average performance. Instead, we explicitly seek to identify a set of points whose toxicity or unintended side effects will be minimally correlated, maximizing the odds that at least one will succeed in the subsequent trials. We thus obtain a set-valued maximization problem: max_{S⊆X} E_η[max_{x∈S} f_out(x; η)]. (4) This compact formula is critical to attaining our overarching objective: identifying interventions with both a large impact on the phenotype of interest and high diversity, to increase the chance of success of some of them in the subsequent steps of the drug discovery pipeline. An illustrative example is provided in Figure 6 in the Appendix to give further intuition into this formula. The general formulation of this problem is NP-hard (Goel et al., 2010); therefore, we propose a tractable algorithm that provides a constant-factor approximation of the optimal solution by leveraging the submodular structure of the objective under suitable modeling assumptions. Given such an algorithm, our task is the active learning problem of optimally querying the function, f_ip, given a limited number of trials, T, to accurately estimate the algorithm's output on the ground-truth dataset. Importantly, this formulation allows us to decouple modeling the measured phenotype, f_ip, from modeling the noise η. For example, we might make the modeling assumption that we sample f_ip from a GP with some kernel k_1 and that η is a Bernoulli random variable indicating the safety of the compound. 4 METHOD Various methods exist for efficiently optimizing black-box functions; however, our problem setting violates several assumptions underlying these approaches. In particular, while we assume access to intermediate readouts f_ip, the actual optimization target of interest f_out is not observable. Further, we seek a set of interventions that maximizes its expected value under some modeling assumptions. These two properties render a broad range of prior art inapplicable. Active sampling methods do not prioritize high-value regions of the input space, Bayesian optimization methods assume access to the ground-truth function outputs (or a noisy observation thereof), and Bayesian algorithm execution approaches based on level-set sampling may not sufficiently decorrelate the hidden noise in the outcome. We propose an intervention set selection algorithm in a Bayesian algorithm execution procedure that leverages the modeling assumptions we characterize in the previous section. This method, Subset Discovery via Bayesian Algorithm Execution (DiscoBAX), consists of two distinct parts: (1) a subset-selection algorithm obtaining a 1 − 1/e-factor approximation of the set that optimizes Equation 4, and (2) an outer BAX loop that queries the phenotype readings to maximize the information gain about the output of this algorithm. In Section 4.1, we present the idealized form of DiscoBAX and show that it attains an approximately optimal solution. Our approach is easily adaptable to incorporate approximate posterior sampling methods, enabling its use with deep neural networks on high-dimensional datasets. We outline this practical implementation in Section 4.2.
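Before detailing the algorithm, the following small sketch shows how the objective in Eq. (4) can be estimated for a fixed candidate set S by Monte Carlo over the noise η (shown for the multiplicative model; the function and variable names are ours).

```python
import numpy as np

def set_value(f_ip_S, eta_samples_S):
    """MC estimate of E_eta[max_{x in S} f_out(x; eta)] for a candidate set S.

    f_ip_S       : (k,) phenotype changes for the k members of S
    eta_samples_S: (n_mc, k) draws of the noise eta restricted to S
    """
    f_out = f_ip_S[None, :] * eta_samples_S          # multiplicative model (Eq. 2)
    return float(np.max(f_out, axis=1).mean())       # average of per-sample maxima
```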
4.1 ALGORITHM Subset maximization: we first address the problem of identifying a subset S ⊂ X which maximizes the value E_η[max_{x∈S} f_out(x; η)]. As mentioned previously, the exact maximization of this objective is intractable. To construct a tractable approximation, we propose a submodular surrogate objective, under which the value of an intervention is lower-bounded by zero: f*_out(x; η) = max(f_out(x; η), 0). This choice is motivated by the intuition that any intervention with a negative expected value on the phenotype is equally useless, as it will not be considered in later experiment iterations, and so we do not need to distinguish between harmful interventions. The resulting function f(S) = E_η[max_{x∈S} f*_out(x; η)] is submodular, and thus Algorithm 1, the greedy algorithm, will provide a 1 − 1/e approximation of the optimal solution. Observation 1. The score function f : P(G) → R defined by f(S) = E_η[max_{x∈S} max(0, f_out(x; η))] (5) is submodular. We provide a proof of this result in Appendix A. In practice, we can estimate the expected value in this objective using Monte Carlo (MC) samples over the noise distribution η. Where MC sampling is too expensive, a heuristic that uses a threshold to remove points whose values under η are too highly correlated can also obtain comparable results with a reduced computational burden.
Algorithm 1 SubsetSelect (Multiplicative Noise)
Require: integer k > 0, set X, distribution P(η), sampled f̂_ip : X → R
  S ← ∅
  f̂_out(x; η) := f̂_ip(x) η(x)
  for i < k do
    S ← S ∪ {argmax_{x∈X\S} E_η[max_{y∈S∪{x}} f̂_out(y; η)]}
  end for
  return S
Algorithm 2 DiscoBAX
Require: finite sample set X, budget T, Monte Carlo parameter ℓ ∈ N
  D ← ∅
  for i < T do
    sample {f̂_ip,j}_{j=1}^ℓ ∼ P(f_ip | D)
    S_j ← SubsetSelect(f̂_ip,j), ∀ j = 1, . . . , ℓ
    x_i ← argmax_{x∈X} EIG(x, {S_j}_{j=1}^ℓ)
    query f_ip(x_i)
    D ← D ∪ {(x_i, f_ip(x_i))}
  end for
  return D
Active sampling: because we do not assume prior knowledge of the phenotype function f_ip, we require a means of selecting potential interventions for querying its value at a specified input x. In practice, running these experiments may incur a cost, and so it is desirable to minimize the number of queries necessary to obtain an accurate estimate of the optimal intervention set. BAX (Neiswanger et al., 2021) presents an effective active sampling approach to approximate the output of an algorithm using a minimal number of queries to the dataset of interest. In our setting, this allows us to approximate the output of Algorithm 1 over the set (X, f_ip(X)) without incurring the cost of evaluating the effect of every knockout intervention in G. Concretely, this procedure takes as input some probabilistic model P which defines a distribution over phenotype readings f_ip conditioned on the data D_t seen so far and from which it is possible to draw samples. A remark on the efficiency of subset maximization and active sampling: it has to be emphasized that subset selection is a function called within each active sampling cycle; hence, the above observation about submodularity refers specifically to Algorithm 1 rather than its incorporation in Algorithm 2. If sample efficiency is not a concern, this algorithm could be run on the set of all inputs and provide the exact solution. We outline this procedure in Algorithm 2, and refer to Section 2 for additional details. In the batch acquisition setting, we form batches of size B at each cycle by selecting the B points with the highest EIG values.
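The following is a minimal NumPy sketch of the greedy selection in Algorithm 1, assuming the multiplicative noise model and precomputed Monte Carlo samples of η; the variable names are ours, and the thresholding heuristic mentioned above is omitted.

```python
import numpy as np

def greedy_subset_select(f_ip_hat, eta_samples, k):
    """Greedy 1 - 1/e approximation for one sampled phenotype function.

    f_ip_hat   : (m,) sampled phenotype values for the m candidate genes
    eta_samples: (n_mc, m) Monte Carlo draws of the multiplicative noise eta
    """
    # Clip at zero: harmful interventions are all treated the same (f*_out).
    f_out = np.maximum(f_ip_hat[None, :] * eta_samples, 0.0)   # (n_mc, m)
    selected = []
    best_so_far = np.zeros(eta_samples.shape[0])  # max over current S, per MC draw
    for _ in range(k):
        # Estimated value f(S ∪ {x}) for every remaining candidate x.
        gains = np.maximum(f_out, best_so_far[:, None]).mean(axis=0)
        gains[selected] = -np.inf
        x = int(np.argmax(gains))
        selected.append(x)
        best_so_far = np.maximum(best_so_far, f_out[:, x])
    return selected
```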
4.2 PRACTICAL IMPLEMENTATION IN HIGH DIMENSIONS When working with high-dimensional input features, we typically leverage Bayesian Neural Networks in lieu of Gaussian Processes. We sample from the parameter distribution via Monte Carlo dropout (MCD) (Gal & Ghahramani, 2016), and rely on Monte Carlo simulation to estimate the quantities introduced in Algorithm 2. In particular, the entropy of the posterior distribution is obtained as follows: H(y_x | D_t) = −E_{p(y_x|D_t)}[log p(y_x | D_t)] ≈ −(1/M) Σ_{s=1}^M log p(y_x^s | D_t, f^s), (6) where the samples {y_x^s = f^s(x)}_{s=1}^M are obtained by sampling from the distribution over model parameters with MCD to obtain the parameter samples {f^s}_{s=1}^M.
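A possible PyTorch sketch of the MC dropout entropy estimate in Eq. (6) is shown below, assuming a Gaussian observation model so that the per-point entropy has a closed form; the network architecture, dropout rate, and noise level are illustrative assumptions rather than the authors' configuration.

```python
import math
import torch
import torch.nn as nn

class PhenotypeNet(nn.Module):
    # Small MLP over gene embeddings; dropout provides approximate posterior samples.
    def __init__(self, d_in, hidden=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

@torch.no_grad()
def mc_dropout_entropy(model, x, n_samples=50, obs_noise=0.1):
    """Approximate H(y_x | D) via MC dropout under a Gaussian likelihood."""
    model.train()  # keep dropout active at inference time
    preds = torch.stack([model(x) for _ in range(n_samples)])   # (M, batch)
    var = preds.var(dim=0) + obs_noise ** 2                     # epistemic + aleatoric
    return 0.5 * torch.log(2 * math.pi * math.e * var)          # per-point entropy
```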
5 EXPERIMENTS In the experimental evaluation of DiscoBAX, we specifically seek to answer the following questions: 1) Does DiscoBAX allow us to reach a better trade-off between recovery of the top interventions and their diversity (Tables 1 and 2)? 2) Is the method sample-efficient, i.e., does it identify global optima in fewer experiments relative to random sampling or naive optimization baselines (Figures 3 and 5)? 3) Is the performance of DiscoBAX sensitive to various hyperparameter choices (Appendix D.3)? To address these questions, we first focus on experiments involving synthetic datasets (§ 5.1) in which we know the underlying ground-truth objective function. We then conduct experiments across several large-scale experimental assays from the GeneDisco benchmark (Mehrjou et al., 2021) that cover a diverse set of disease phenotypes. 5.1 SYNTHETIC DATA We begin with a concrete example to illustrate the distinction between the behavior of DiscoBAX and existing methods. The dataset we consider is a one-dimensional regression task on a mixture-of-Gaussians density function f_mog. We construct f_mog such that it exhibits several local optima at a variety of values, necessitating a careful trade-off between exploration and exploitation to optimize the DiscoBAX objective. Crucially, exploitation in this setting requires not only an accurate estimation of the global optimum but also an accurate estimation of the local optima. We provide evaluations on additional datasets in Appendix D.1. We consider the following baseline acquisition functions which select the optimal point x* to query at each iteration, letting μ(x) denote the posterior mean over f_ip(x) and σ^2(x) its variance. We evaluate random sampling, a UCB-like acquisition function, BAX on super-level set and top-k algorithms, Thompson sampling, and uncertainty maximization baselines. Full details are provided in Appendix D.1. In Figure 2, we visualize the solutions found by each approach after 30 iterations. We further evaluate the score of each method, computed as E_η[max_{x∈S} f_ip(x)η(x)], where η is drawn from a Bernoulli distribution whose logits are determined by an affine transformation of a sample from a GP with zero mean and radial basis function covariance kernel. This construction ensures a high correlation between the values of nearby inputs and rewards sets S whose elements are distant from each other. To select S, we use the learned posterior mean μ from each acquisition strategy as input to Algorithm 1 and set S to be equal to its output. We observe that most baselines over-exploit the high-value local optima, leading to inaccuracies on the lower optima. As a result, Algorithm 1 is unable to select the optimal subset elements from the lower-value modes and the model score suffers. The active sampling baseline yields a more uniform sampling distribution over inputs that results in a relatively uniform distribution of errors. While DiscoBAX does not perfectly estimate the value of the target function, its sampling strategy yields reasonably accurate estimates of all of the local optima. 5.2 GENEDISCO DATASET Datasets & baselines. The GeneDisco benchmark (Mehrjou et al., 2021) is comprised of five large-scale genome-wide CRISPR assays and compares the relative strengths of nine active learning algorithms (e.g., Margin sampling, Coreset) for optimal experimental design. The objective of the different methods is to select the set of interventions (i.e., genetic knockouts) with the largest impact on the corresponding disease phenotype. We include all existing baselines from the GeneDisco benchmark, as well as eight additional approaches: UCB, qUCB, qEI, qPOI, Thompson sampling, Top-K BAX, Levelset BAX, and DiscoBAX. Metrics & approach. We define the set of optimal interventions as the ones in the top percentile of the experimentally-measured phenotype (referred to as 'Top-K interventions'). We use the Top-K recall metric to assess the ability of the different methods to identify the best interventions. To quantify the diversity across the set of optimal interventions, we first cluster these interventions in a lower-dimensional subspace (details provided in Appendix C). We then measure the proportion of these clusters that are recalled (i.e., any of their members are selected) by a given algorithm over the different experiment cycles. The overall score of an approach is defined as the geometric mean between Top-K recall and the diversity metric. For all methods and datasets, we perform 25 consecutive batch acquisition cycles (with batch size 32). All experiments are repeated 10 times with different random seeds. Results & discussion. We observe that, across the different datasets, DiscoBAX identifies a more diverse set of optimal interventions relative to baselines (Table 1). It does so in a sample-efficient manner, as it achieves higher diversity throughout the different acquisition cycles (Figure 3). Note that sample efficiency is an empirical observation here, not a theoretical property of the algorithm, since it is possible to construct adversarial datasets where a BAX method will attain no better performance than random sampling. Interestingly, it tends to recall a higher share of optimal interventions on several assays as well, which may be the result of very steep extrema in the corresponding datasets. We also find the performance of DiscoBAX to be relatively insensitive to the choice of hyperparameters (Appendix D.3). Lastly, we note that when the input feature space (i.e., the intervention representation) does not correlate much with the disease phenotype of interest, the model being learned tends to perform poorly and we observe no lift between the different methods and random sampling (e.g., the SARS-CoV-2 assay from Zhu et al. (2021); see Appendix D.2).
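The sketch below illustrates one way such a score could be computed, assuming k-means clustering of the top interventions in an embedding space; the number of clusters and the use of scikit-learn's KMeans are our assumptions, and Appendix C describes the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def evaluation_score(selected_idx, phenotype, embeddings,
                     top_percentile=99.0, n_clusters=20, seed=0):
    """Geometric mean of Top-K recall and cluster diversity (sketch)."""
    # Top-K interventions: top percentile of the measured phenotype.
    threshold = np.percentile(phenotype, top_percentile)
    top_idx = np.where(phenotype >= threshold)[0]

    selected = set(int(i) for i in selected_idx)
    topk_recall = len(selected & set(top_idx.tolist())) / max(len(top_idx), 1)

    # Diversity: fraction of clusters of the top interventions that are hit.
    k = min(n_clusters, len(top_idx))
    clusters = KMeans(n_clusters=k, random_state=seed,
                      n_init=10).fit_predict(embeddings[top_idx])
    hit = {clusters[i] for i, idx in enumerate(top_idx) if int(idx) in selected}
    diversity = len(hit) / k
    return float(np.sqrt(topk_recall * diversity))
```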
6 RELATED WORK Prior work has studied applications to genomic discovery and method development for diverse target generation. Bayesian optimization: Bayesian optimization (BO) is concerned with finding the global optimum of a function with the fewest number of function evaluations (Snoek et al., 2012; Shahriari et al., 2015). Since this target function is often expensive to evaluate, one typically uses a Gaussian process as a surrogate function (Srinivas et al.). The candidates for function evaluation are then determined through a so-called acquisition function, which is often expressed as the expected utility over the surrogate model. Typical choices include the expected improvement (Močkus, 1975, EI) and probability of improvement (Kushner, 1964, PI) as utility functions. Recent work includes variational approaches (Song et al., 2022) which yield a tractable acquisition function whose limiting behavior is equivalent to PI. Prior work has sought diversity in Bayesian optimization, e.g., through a batch setting (Kirsch et al., 2019) or multi-objective optimization (Hernández-Lobato et al., 2016). Bayesian optimization has been applied to biological problem settings such as small molecule optimization (Korovina et al., 2020) or automatic chemical design (Griffiths & Hernández-Lobato, 2017). Optimal experiment design broadens the scope of Bayesian optimization: rather than simply maximizing a parametric function, the task is to adaptively identify an optimal set of experiments to efficiently reach some goal (Robbins, 1952; Chernoff, 1959). Applying machine learning to automate hypothesis generation and testing goes back multiple decades (King et al., 2004). Optimal experiment design is amenable to Bayesian optimization (Greenhill et al., 2020) and reinforcement learning approaches (Kandasamy et al., 2019). Most related to our work is Bayesian Algorithm Execution (BAX) (Neiswanger et al., 2021), which extends the goal of experiment design from only finding the maximum of a function to estimating more general properties such as level sets, by computing the expected information gain (EIG), i.e., the mutual information between the evaluation of an input point and the statistics related to that property. Active learning: While many probabilistic models like Gaussian processes provide principled uncertainty estimates (Rasmussen, 2003), modern neural network architectures often rely on heuristics or only provide approximate uncertainty estimates (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017). Active learning based approaches use the uncertainty estimates for maximizing expected information gains of model parameters (Houlsby et al., 2011). Recently, more and more approaches have used active learning based on model uncertainties of neural networks for biomedical applications. Bandits: The upper confidence bounds seen in BO originate in the bandit setting (Lai & Robbins, 1985), in which one can extend the widely used UCB algorithm to Gaussian processes (Grünewälder et al., 2010; Srinivas et al.). While both bandits and BO seek to find the maximum of a function, the two problem settings leverage different notions of optimality. BO seeks to identify the argmax, whereas bandits seek to minimize the number of sub-optimal queries. Related to bandits and BO, some efforts have been made to formulate active learning as a reinforcement learning problem (Slade & Branson, 2022; Casanova et al., 2020; Konyushkova et al., 2017; Pang et al., 2018). 7 CONCLUSION We have introduced a mathematical formalization of the drug discovery problem that captures the noise induced by moving from in vitro to in vivo experiments. We proposed a novel algorithm based on Bayesian Algorithm Execution and illustrated its utility on illustrative synthetic datasets. We have further evaluated this class of methods against the real-world large-scale assays from the GeneDisco benchmark, where they help identify diverse top interventions better than existing baselines.
Future work could see the extension of the current framework to explicitly account for the fact that experimental cycles happen in batches. Further, we assume in this work that distant representations of interventions imply different underlying biological mechanisms; a proper causal formulation of the problem would allow us to tell apart causally connected pathways more cleanly. Finally, it is typical practice to measure several potential intermediate phenotypes to capture different aspects of interest, which requires an extension of our approach to the setting of multiple objectives. 8 REPRODUCIBILITY STATEMENT We clearly state our modeling assumptions throughout Sections 2 to 4. We provide proof for our theoretical claims in Appendix A. All experimental results reported in Section 5 and Appendix D can be reproduced using the code available at: https://github.com/anonymous35780/solaris-2023-iclr. Hyper-parameter sweeps for the BAX methods for GeneDisco are presented in Table 3.
1. What is the main contribution of the paper regarding the optimization of genomic interventions? 2. What are the strengths of the proposed approach, particularly in its application to real-world problems? 3. What are some weaknesses or limitations of the paper, such as the estimation of the noise distribution or the choice of score function? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or recommendations for improving the paper's contributions, such as considering alternative baseline strategies or providing more detailed descriptions of algorithmic details?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper considers the problem of finding a diverse set of genomic interventions that maximize a phenotype of interest. Formally, the problem can be modeled as optimizing expensive black-box functions over sets of inputs from a given design space. Bayesian Algorithm eXecution (BAX) is a recently proposed framework that allows estimating required properties of black-box functions using information-gain-based input acquisition strategies. The paper provides a BAX (Bayesian Algorithm eXecution) style approach to solve the problem, where the key idea is to consider the subset maximization of a chosen score function as the Algorithm. At each iteration, an Expected Information Gain acquisition objective is optimized to select the next points for evaluation. This objective is parameterized by multiple sample outputs of the subset maximization algorithm. Experiments are performed on synthetic and GeneDisco benchmarks. Strengths And Weaknesses The problem considered in the paper is important with real-world implications. I found the biological motivation of the problem well-written with good intuitions for a non-domain person. The overall idea of using a BAX style approach for this problem setting is fairly interesting and novel. However, I have a few questions to understand some of the details better and some suggestions that will hopefully improve the paper's contributions: It is mentioned in the paper that the discrepancy between the disease outcome f_out and intermediate phenotype f_ip is captured by the noise distribution η. It is motivated as an important distinguishing factor between the problem setting of this paper compared to that of existing approaches like Bayesian optimization. However, there is little clear description of how this quantity is estimated or of the principles behind choosing a certain distribution. A short remark is mentioned after Observation 1 in a few lines, but that seems limited for such an important motivation of the problem setting. Please describe the concrete implementation/algorithmic details of the noise distribution and how it is estimated in the experiments. The proof of submodularity of the score function in Appendix A is missing some description. For example, dis(g, η) is not defined. Please expand the proof and explain the reasoning behind the inequalities. If it is straightforward (as mentioned in the first line of the proof), can it be considered a major contribution of the paper? The score function is chosen to be the best value in a set (averaged over the noise distribution η). This seems like an optimistic choice. For the algorithm to find good robust points, should we not consider the worst value in the set as the score function? Please add a description of the computational complexity of the proposed approach. Since multiple instances of the subset maximization algorithm need to be run in each iteration for estimating the expected information gain quantity, the computational complexity can be quite high. A wall-clock time comparison of the proposed approach with existing baselines will be very useful, especially since the top-k recall gap is relatively small on most of the benchmarks (other than Leukemia/NK cells). As mentioned in Section 5.2, a batch of 32 points is selected for evaluation in each acquisition cycle. How is the batch of points selected? Is it accomplished by greedy optimization of the EIG acquisition function? If yes, what are the accuracy losses of doing this greedy optimization?
The choice of UCB as the Bayesian Optimization (BO) baseline strategy seems surprising. There is a large literature on Bayesian optimization algorithms (please see [1-5] and references therein) for the batch evaluation setting that should be the right comparison here. Something like qEI (q-Expected Improvement) is easy to set up and is implemented in popular packages like BoTorch [1] (https://botorch.org/tutorials/). The points suggested by qEI are also known to be highly diverse in the context of batch optimization. Please consider improving this comparison (by including qEI for instance), because BO algorithms are directly applicable and the most relevant baseline in this setting. References [1] Balandat, Maximilian, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G. Wilson, and Eytan Bakshy. "BoTorch: a framework for efficient Monte-Carlo Bayesian optimization." Advances in Neural Information Processing Systems 33 (2020): 21524-21538. [2] González, J., Dai, Z., Hennig, P., & Lawrence, N. (2016). Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics (pp. 648-657). PMLR. [3] Wu, J., & Frazier, P. (2016). The parallel knowledge gradient method for batch Bayesian optimization. Advances in Neural Information Processing Systems, 29. [4] Azimi, J., Fern, A., & Fern, X. (2010). Batch Bayesian optimization via simulation matching. Advances in Neural Information Processing Systems, 23. [5] Gong, C., Peng, J., & Liu, Q. (2019). Quantile Stein variational gradient descent for batch Bayesian optimization. In International Conference on Machine Learning (pp. 2347-2356). PMLR. Clarity, Quality, Novelty And Reproducibility The problem setting is clearly defined and the use of a BAX style approach is novel. It is commendable that source code is made available for easy reproducibility.
ICLR
Title Feature-Robust Optimal Transport for High-Dimensional Data Abstract Optimal transport is a machine learning problem with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature-robust optimal transport (FROT) for high-dimensional data, which solves high-dimensional OT problems using feature selection to avoid the curse of dimensionality. Specifically, we find a transport plan with discriminative features. To this end, we formulate the FROT problem as a min–max optimization problem. We then propose a convex formulation of the FROT problem and solve it using a Frank–Wolfe-based optimization algorithm, whereby the subproblem can be efficiently solved using the Sinkhorn algorithm. Since FROT finds the transport plan from selected features, it is robust to noise features. To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence. By conducting synthetic and benchmark experiments, we demonstrate that the proposed method can find a strong correspondence by determining important layers. We show that the FROT algorithm achieves state-of-the-art performance on real-world semantic correspondence datasets. 1 INTRODUCTION Optimal transport (OT) is a machine learning problem with several applications in the computer vision and natural language processing communities. The applications include Wasserstein distance estimation (Peyré et al., 2019), domain adaptation (Yan et al., 2018), multitask learning (Janati et al., 2019), barycenter estimation (Cuturi & Doucet, 2014), semantic correspondence (Liu et al., 2020), feature matching (Sarlin et al., 2019), and photo album summarization (Liu et al., 2019). The OT problem has been extensively studied in the computer vision community as the earth mover's distance (EMD) (Rubner et al., 2000). However, the computational cost of EMD is cubic and highly expensive. Recently, the entropic regularized EMD problem was proposed; this problem can be solved using the Sinkhorn algorithm with a quadratic cost (Cuturi, 2013). Owing to the development of the Sinkhorn algorithm, researchers have replaced the EMD computation with its regularized counterparts. However, the optimal transport problem for high-dimensional data has remained unsolved for many years. Recently, a robust variant of the OT was proposed for high-dimensional OT problems and used for divergence estimation (Paty & Cuturi, 2019; 2020). In the robust OT framework, the transport plan is computed with the discriminative subspace of the two data matrices X ∈ R^{d×n} and Y ∈ R^{d×m}. The subspace can be obtained using dimensionality reduction. An advantage of the subspace robust approach is that it does not require prior information about the subspace. However, given prior information such as feature groups, we can consider a computationally efficient formulation. The computation of the subspace can be expensive if the dimensionality of the data is high, for example, 10^4. One of the most common prior information items is a feature group. The use of group features is popular in feature selection problems in the biomedical domain and has been extensively studied in Group Lasso (Yuan & Lin, 2006). The key idea of Group Lasso is to prespecify the group variables and select the set of group variables using the group norm (also known as the sum of ℓ2 norms).
For example, if we use a pretrained neural network as a feature extractor and compute OT using the features, then we require careful selection of important layers to compute OT. Specifically, each layer output is regarded as a grouped input. Therefore, using a feature group as prior information is a natural setup and is important for considering OT for deep neural networks (DNNs). In this paper, we propose a high-dimensional optimal transport method by utilizing prior information in the form of grouped features. Specifically, we propose a feature-robust optimal transport (FROT) problem, for which we select distinct group feature sets to estimate a transport plan, instead of computing a discriminative subspace as proposed in (Paty & Cuturi, 2019; 2020). To this end, we formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem, which can be accurately solved using the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013). The FROT subproblem can be efficiently solved using the Sinkhorn algorithm (Cuturi, 2013). An advantage of FROT is that it can yield a transport plan from high-dimensional data using feature selection, through which the significance of the features is obtained without any additional cost. Therefore, the FROT formulation is highly suited for high-dimensional OT problems. Through synthetic experiments, we initially demonstrate that the proposed FROT is robust to noise dimensions (see Figure 1). Furthermore, we apply FROT to a semantic correspondence problem (Liu et al., 2020) and show that the proposed algorithm achieves SOTA performance. Contribution: • We propose a feature-robust optimal transport (FROT) problem and derive a simple and efficient Frank–Wolfe-based algorithm. Furthermore, we propose a feature-robust Wasserstein distance (FRWD). • We apply FROT to a high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance-based feature selection algorithm with less computational cost than the original algorithm. • We use FROT for the layer selection problem in a semantic correspondence problem and show that the proposed algorithm outperforms existing baseline algorithms. 2 BACKGROUND In this section, we briefly introduce the OT problem. Optimal transport (OT): The following are given: independent and identically distributed (i.i.d.) samples X = {x_i}_{i=1}^n ∈ R^{d×n} from a d-dimensional distribution p, and i.i.d. samples Y = {y_j}_{j=1}^m ∈ R^{d×m} from the d-dimensional distribution q. In the Kantorovich relaxation of OT, admissible couplings are defined by the set of transport plans U(μ, ν) = {Π ∈ R_+^{n×m} : Π 1_m = a, Π^⊤ 1_n = b}, where Π ∈ R_+^{n×m} is called the transport plan, 1_n is the n-dimensional vector whose elements are ones, and a = (a_1, a_2, . . . , a_n)^⊤ ∈ R_+^n and b = (b_1, b_2, . . . , b_m)^⊤ ∈ R_+^m are the weights. The OT problem between two discrete measures μ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j} determines the optimal transport plan of the following problem: min_{Π∈U(μ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j), (1) where c(x, y) is a cost function. For example, the squared Euclidean distance is used, that is, c(x, y) = ‖x − y‖_2^2. To solve the OT problem, Eq. (1) (also known as the earth mover's distance) using linear programming requires O(n^3) computation (for n = m), which is computationally expensive.
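For reference, Eq. (1) can be solved exactly as a linear program; the following is a small sketch using scipy.optimize.linprog (the helper name is ours), which makes the cubic scaling discussed above tangible for small n and m.

```python
import numpy as np
from scipy.optimize import linprog

def emd_lp(a, b, C):
    """Solve the Kantorovich OT problem in Eq. (1) as a linear program."""
    n, m = C.shape
    # Row constraints: sum_j pi_ij = a_i; column constraints: sum_i pi_ij = b_j.
    A_rows = np.kron(np.eye(n), np.ones((1, m)))
    A_cols = np.kron(np.ones((1, n)), np.eye(m))
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun   # transport plan and optimal cost
```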
To address this, an entropic-regularized optimal transport problem is used (Cuturi, 2013): min_{Π∈U(μ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j) + ε H(Π), where ε ≥ 0 is the regularization parameter and H(Π) = Σ_{i=1}^n Σ_{j=1}^m π_{ij}(log(π_{ij}) − 1) is the entropic regularization. If ε = 0, then the regularized OT problem reduces to the EMD problem. Owing to entropic regularization, the entropic regularized OT problem can be accurately solved using Sinkhorn iteration (Cuturi, 2013) with an O(nm) computational cost (see Algorithm 1). Wasserstein distance: If the cost function is defined as c(x, y) = d(x, y) with d(x, y) a distance function and p ≥ 1, then we define the p-Wasserstein distance of two discrete measures μ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j} as W_p(μ, ν) = (min_{Π∈U(μ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} d(x_i, y_j)^p)^{1/p}. Recently, a robust variant of the Wasserstein distance, called the subspace robust Wasserstein distance (SRW), was proposed (Paty & Cuturi, 2019). The SRW computes the OT problem in the discriminative subspace, which can be determined by solving dimensionality-reduction problems. Owing to the robustness, it can compute the Wasserstein distance from noisy data. The SRW is given as SRW(μ, ν) = (min_{Π∈U(μ,ν)} max_{U∈R^{d×k}, U^⊤U=I_k} Σ_{i=1}^n Σ_{j=1}^m π_{ij} ‖U^⊤x_i − U^⊤y_j‖_2^2)^{1/2}, (2) where U is the projection matrix with k ≤ d and I_k ∈ R^{k×k} is the identity matrix. The SRW or its relaxed problem can be efficiently estimated using either eigenvalue decomposition or the Frank–Wolfe algorithm. 3 PROPOSED METHOD This paper proposes FROT. We assume that the vectors are grouped as x = (x^{(1)⊤}, . . . , x^{(L)⊤})^⊤ and y = (y^{(1)⊤}, . . . , y^{(L)⊤})^⊤. Here, x^{(ℓ)} ∈ R^{d_ℓ} and y^{(ℓ)} ∈ R^{d_ℓ} are d_ℓ-dimensional vectors, where Σ_{ℓ=1}^L d_ℓ = d. This setting is useful if we know the explicit group structure for the feature vectors a priori. In an application to L-layer neural networks, we consider x^{(ℓ)} and y^{(ℓ)} as outputs of the ℓth layer of the network. If we do not have a priori information, we can consider each feature independently (i.e., d_1 = d_2 = . . . = d_L = 1 and L = d). All proofs in this section are provided in the Appendix. 3.1 FEATURE-ROBUST OPTIMAL TRANSPORT (FROT) The FROT formulation is given by min_{Π∈U(μ,ν)} max_{α∈Σ_L} Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ c(x_i^{(ℓ)}, y_j^{(ℓ)}), (3) where Σ_L = {α ∈ R_+^L : α^⊤1_L = 1} is the probability simplex. The underlying concept of FROT is to estimate the transport plan Π using distinct groups with large distances between {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m. We note that determining the transport plan in nondistinct groups is difficult because the data samples in {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m overlap. By contrast, in distinct groups, {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m are different, and this aids in determining an optimal transport plan. This is an intrinsically similar idea to the subspace robust Wasserstein distance (Paty & Cuturi, 2019), which estimates the transport plan in the discriminative subspace, while our approach selects important groups. Therefore, FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq. (1), whereas the subspace robust version uses dimensionality-reduction counterparts.
Algorithm 1 Sinkhorn algorithm.
1: Input: a, b, C, ε, t_max
2: Initialize K = e^{−C/ε}, u = 1_n, v = 1_m, t = 0
3: while t ≤ t_max and not converged do
4:   u = a/(Kv)
5:   v = b/(K^⊤u)
6:   t = t + 1
7: end while
8: return Π = diag(u) K diag(v)
Algorithm 2 FROT with the Frank–Wolfe algorithm.
1: Input: {x_i}_{i=1}^n, {y_j}_{j=1}^m, η, and ε
2: Initialize Π, compute {C_ℓ}_{ℓ=1}^L
3: for t = 0 . . . T do
4:   Π̂ = argmin_{Π∈U(μ,ν)} ⟨Π, M_{Π^(t)}⟩ + ε H(Π)
5:   Π^(t+1) = (1 − γ)Π^(t) + γΠ̂ with γ = 2/(2 + t)
6: end for
7: return Π^(T)
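A minimal NumPy sketch of Algorithm 1 (Sinkhorn iterations for the entropic-regularized problem) is given below; the convergence check and default parameters are our own assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, t_max=1000, tol=1e-9):
    """Entropic-regularized OT (Algorithm 1): returns the transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(t_max):
        u_prev = u
        u = a / (K @ v)
        v = b / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:   # simple convergence check
            break
    return u[:, None] * K * v[None, :]          # diag(u) K diag(v)
```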
Using FROT, we can define a p-feature robust Wasserstein distance (p-FRWD). Proposition 1 For the distance function d(x, y), FRWD_p(μ, ν) = (min_{Π∈U(μ,ν)} max_{α∈Σ_L} Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ d(x_i^{(ℓ)}, y_j^{(ℓ)})^p)^{1/p}, (4) is a distance for p ≥ 1. Note that we can show that 2-FRWD is a special case of SRW with d(x, y) = ‖x − y‖_2 (see Appendix). The key difference between SRW and FRWD is that FRWD can use any distance, while SRW can only use d(x, y) = ‖x − y‖_2. 3.2 FROT OPTIMIZATION Here, we propose two FROT algorithms based on the Frank–Wolfe algorithm and linear programming. Frank–Wolfe: We propose a continuous variant of the FROT algorithm using the Frank–Wolfe algorithm, which can be fully differentiable. To this end, we introduce entropic regularization for α and rewrite the FROT as a function of Π. Therefore, we solve the following problem for α: min_{Π∈U(μ,ν)} max_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ c(x_i^{(ℓ)}, y_j^{(ℓ)}) − ηH(α), where η ≥ 0 is the regularization parameter and H(α) = Σ_{ℓ=1}^L α_ℓ(log(α_ℓ) − 1) is the entropic regularization for α. An advantage of entropic regularization is that the nonnegative constraint is naturally satisfied, and the entropic regularizer is a strongly convex function. Lemma 2 The optimal solution of the optimization problem α* = argmax_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{ℓ=1}^L α_ℓ φ_ℓ − ηH(α), for a fixed admissible transport plan Π ∈ U(μ, ν), is given by α*_ℓ = exp((1/η)φ_ℓ) / Σ_{ℓ'=1}^L exp((1/η)φ_{ℓ'}), with J_η(Π, α*) = η log(Σ_{ℓ=1}^L exp((1/η)φ_ℓ)) + η. Using Lemma 2 (or Lemma 4 in Nesterov (2005)) together with the setting φ_ℓ = Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}) = ⟨Π, C_ℓ⟩, with [C_ℓ]_{ij} = c(x_i^{(ℓ)}, y_j^{(ℓ)}), the global problem is equivalent to min_{Π∈U(μ,ν)} G_η(Π), with G_η(Π) = η log(Σ_{ℓ=1}^L exp((1/η)⟨Π, C_ℓ⟩)). (5) Note that this is known as a smoothed max-operator (Nesterov, 2005; Blondel et al., 2018). Specifically, the regularization parameter η controls the "smoothness" of the maximum. Proposition 3 G_η(Π) is a convex function relative to Π. The derived optimization problem of FROT is convex. Therefore, we can determine globally optimal solutions. Note that the SRW optimization problem is not jointly convex (Paty & Cuturi, 2019) in the projection matrix and the transport plan. In this study, we employ the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013), using which we approximate G_η(Π) with linear functions at Π^(t) and move Π toward the optimal solution in the convex set (see Algorithm 2). The derivative of the loss function G_η(Π) at Π^(t) is given by ∂G_η(Π)/∂Π |_{Π=Π^(t)} = Σ_{ℓ=1}^L α_ℓ^(t) C_ℓ = M_{Π^(t)}, with α_ℓ^(t) = exp((1/η)⟨Π^(t), C_ℓ⟩) / Σ_{ℓ'=1}^L exp((1/η)⟨Π^(t), C_{ℓ'}⟩). Then, we update the transport plan by solving the EMD problem: Π^(t+1) = (1 − γ)Π^(t) + γΠ̂ with Π̂ = argmin_{Π∈U(μ,ν)} ⟨Π, M_{Π^(t)}⟩, where γ = 2/(2 + t). Note that M_{Π^(t)} is given by the weighted sum of the cost matrices. Thus, we can utilize multiple features to estimate the transport plan Π for the relaxed problem in Eq. (5). Using the Frank–Wolfe algorithm, we can obtain the optimal solution. However, solving the EMD problem requires a cubic computational cost that can be expensive if n and m are large. To address this, we can solve the regularized OT problem, which requires O(nm). We denote the Frank–Wolfe algorithm with EMD as FW-EMD and the Frank–Wolfe algorithm with Sinkhorn as FW-Sinkhorn.
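Putting the pieces together, the sketch below implements the FW-Sinkhorn variant of Algorithm 2, reusing the `sinkhorn` helper sketched above; the initialization (assuming a and b are probability vectors), the softmax stabilization, and the default parameters are our assumptions rather than the authors' implementation.

```python
import numpy as np

def frot_frank_wolfe(a, b, C_list, eta=1.0, eps=0.1, n_iter=10):
    """FW-Sinkhorn sketch for FROT: returns (transport plan, group weights)."""
    Pi = np.outer(a, b)                       # feasible start if a, b sum to 1
    alpha = np.full(len(C_list), 1.0 / len(C_list))
    for t in range(n_iter):
        # Group weights alpha^(t): softmax of <Pi, C_l> / eta (Lemma 2).
        scores = np.array([np.sum(Pi * C) for C in C_list]) / eta
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()
        M = sum(w * C for w, C in zip(alpha, C_list))   # weighted cost matrix
        Pi_hat = sinkhorn(a, b, M, eps=eps)             # linearized subproblem
        gamma = 2.0 / (2.0 + t)
        Pi = (1.0 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha
```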
Computational complexity: The proposed method depends on the Sinkhorn algorithm, which requires an O(nm) operation. The computation of the cost matrix in each subproblem needs an O(Lnm) operation, where L is the number of groups. Therefore, the entire complexity is O(TLnm), where T is the number of Frank–Wolfe iterations (in general, T = 10 is sufficient). Proposition 4 For each t ≥ 1, the iterate Π^(t) of Algorithm 2 satisfies G_η(Π^(t)) − G_η(Π*) ≤ (4σ_max(Φ^⊤Φ) / (η(t + 2))) (1 + δ), where σ_max(Φ^⊤Φ) is the largest eigenvalue of the matrix Φ^⊤Φ with Φ = (vec(C_1), vec(C_2), . . . , vec(C_L))^⊤, and δ ≥ 0 is the accuracy to which the internal linear subproblems are solved. Based on Proposition 4, the number of iterations depends on η, ε, and the number of groups. If we set a small η, convergence requires more time. In addition, if we use entropic regularization with a large ε, the δ in Proposition 4 can be large. Finally, if we use more groups, the largest eigenvalue of the matrix Φ^⊤Φ can be larger. Note that the constant term of the upper bound is large; however, the Frank–Wolfe algorithm converges quickly in practice. Linear Programming: Because lim_{η→0+} G_η(Π) = max_{ℓ∈{1,2,...,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}), the FROT problem can also be written as min_{Π∈U(μ,ν)} max_{ℓ∈{1,2,...,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}). (6) Because the objective is the max of linear functions, it is convex with respect to Π. We can solve the problem via linear programming: min_{Π∈U(μ,ν), t} t, s.t. ⟨Π, C_ℓ⟩ ≤ t, ℓ = 1, 2, . . . , L. (7) This optimization can be easily solved using an off-the-shelf LP package. However, the computational cost of this LP problem is high in general (i.e., O(n^3) for n = m). 3.3 APPLICATION: SEMANTIC CORRESPONDENCE We applied our proposed FROT algorithm to semantic correspondence. Semantic correspondence is the problem of determining the matching of objects in two images. That is, given input image pairs (A, B) with common objects, we formulate the semantic correspondence problem as estimating the transport plan from the key points in A to those in B; this framework was proposed in (Liu et al., 2020). In Figure 2, we show an overview of our proposed framework. Cost matrix computation C_ℓ: In our framework, we employed a pretrained convolutional neural network to extract dense feature maps for each convolutional layer. The dense feature map of the ℓth layer output of the sth image is given by f^{(ℓ,s)}_{q+(r−1)h_s} ∈ R^{d_ℓ}, q = 1, 2, . . . , h_s, r = 1, 2, . . . , w_s, ℓ = 1, 2, . . . , L, where w_s and h_s are the width and height of the sth image, respectively, and d_ℓ is the dimension of the ℓth layer's feature map. Note that because the dimension of the dense feature map is different for each layer, we resample the feature maps to the size of the first layer's feature map (i.e., h_s × w_s). The ℓth layer's cost matrix for images s and s′ is given by [C_ℓ]_{ij} = ‖f_i^{(ℓ,s)} − f_j^{(ℓ,s′)}‖_2^2, i = 1, 2, . . . , w_s h_s, j = 1, 2, . . . , w_{s′} h_{s′}.
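A small sketch of this per-layer cost matrix computation is given below, assuming the dense feature maps have already been resampled to a common spatial resolution and flattened; the function and variable names are ours, and the resulting list of matrices is what a FROT solver such as the FW-Sinkhorn sketch above would consume.

```python
import numpy as np

def layer_cost_matrices(feats_src, feats_tgt):
    """Build {C_l} from per-layer dense features.

    feats_src, feats_tgt: lists of arrays, one per layer, each of shape
    (h*w, d_l); maps are assumed already resampled to a common h*w grid.
    """
    cost_matrices = []
    for F_s, F_t in zip(feats_src, feats_tgt):
        # Squared Euclidean distances between all source/target locations.
        sq_s = np.sum(F_s ** 2, axis=1)[:, None]
        sq_t = np.sum(F_t ** 2, axis=1)[None, :]
        C = sq_s + sq_t - 2.0 * F_s @ F_t.T
        cost_matrices.append(np.maximum(C, 0.0))   # clip tiny negative values
    return cost_matrices
```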
[Figure 2 (framework overview): Source image, Target image, CAM, Feature Robust Optimal Transport (FROT); per-layer features x^(ℓ), y^(ℓ) and cost matrices C_ℓ. Embedded figure data omitted.]
x7E5McSllPwYuLKKj6PYMhjllMFoAWUwyikDG5dSBpc5/l4uwN/LHH9tXIq/qnztMWvkoD1VfKkXsYbJm4mZIpFiIgSOC/f4qjm20RWTJz/KU5ExWoyI8fMpzWlELh1zgpHQaIA2jLzSgEZTcYWpT47F7F74DFpFFLmCyto9qxBa6t7JNXAGomR1/awViDnN0hFjY+FyoVuTfS8jLoNrwem1fbcN7HwqS0mupGByQ7WQ7U9dOyDFZqlZC9lRgmKdNlvNWoB74EWW8rk1p+cFesT8MqXbooxfyXylV/tcUbzofBa3Tcm3eqYh22JUevG8KU62Tbj4NP8HFv8HC/J/YPHfzo8/e223sCL3VNthWt4q6AriRLsz1gr3bPrytPH4mm6Stsyq0Wl/0Am0Z9Z+5Azh0Uh6Yc4MF1rYHi2tnWwNZ0RtKBfu9TYdVcVW7aAZzbjQjNhcSRa5fpxGeV3EOQVwnpNXjIuw6Mt7Q4pZxK1Zxmdx+v8ia5TxGaabR+kXk2uc2f5xUWusdnu42kg/Nbao1d54Db2I1eR4jS1mNTnrv30u9yJLzV1vi7Ns5jV58/VFcPNm4vPXz3Lk53IB75Yjr2wUjC9HkS/FVqmh+oafTXDzdV4XswqejZPFrILje6ti5WEo2chcRAmVc0qovAB/yzn+2rgUf5dz/C3Stgwj62+R1qYZW9LfrM9bjuuMOwXzomxm80O1Sc8n3h6ei88Hbb8p6+9i9pvwPcOkUM+QjKwXEU/JOHoR0bOF8h+imLosdn8CVHPF6svc2Vtk/9F+RyR895P7LkR77jB+6ec/4vfOqpT5S0w/GZJmHhdmnmSYJ4WZ8RHanXd8eR5nmLa845knGaYt7xSmu8czT+pAKygVps8droRe6RUg+N6VClN7NBX0ylEQWQvI1vSpbQHamh9aw9zPU2eYe4R0nDVScQfxtVw9pYe1hqntI0QZGj+wROzaji4R/B13RrKe0cVpbvK2o/uV/GtwI4mfUZyjqeeouI1z46ftQVzsvB/rLeXJ1Taa2iaVQRtdBpgTt32m1+R74ep6l6lPQtOnM+N2aPcY5nlWygn/bRSRcqr4JopIjfPFxuIKiog59ceVW0weXXmj5GiTYZ9FppU8lkrxdQ1NpXzWziGaSvt0CCwVc5YI5CPGM6j0MGW2AzAwZ97sOCIXow9FLCZOodrB1MlD63XPQ4Sua3YC6x9YbR+AuhtW3Q1Qt2zVLYO6W+H5U+mrgq3w9ClI174PgSW4Z4QYipm/dRz+xKWwRHOeOcxNnn1Oo+PWkfAWMH7jaeaUQNc4SzlLMH4KtYuZd1o1pmSrmeuCau61L+WzbarsCsWkfObBYcbPw8J+Hmb8zGdi/NT+ze+V9mZ+H0xpd5G12iXWaxdZs11i3Wb9zS8Fmr+HOf7auLg67jprGeed8aqIL1c8XX+C8CyMD6y8rae7ivoN95PE4o6r9H3SyVOxXZ7ka5vTqly6q2ydzxC2w9PxzOeQq5JblbOH00z6IViaFYBcKcDW5G2Sz9sEn/PJdp9h9oazNDYKlLObXCnAFuQVp9cr1vLAeu7iVwpaWObztweyTvPo8XchT92kSobloiXvHHwM2C6HfclyNFfWz3yJnu62TH8HzIGmPI4+XThJEel4ykjm1ObROtEnF8tP0dzrH4LWZnp9LN0Xt2UcQavdQl99zrl6UrAtdZ6yHyJmvdPItpnfnrIS14asDuSnTKQ/+1xFl3kP8tzYjz8z4a4DkeaHn8tlLMdPVlY1kfZKPDFjakWfHbfE66oXai/xlitKscN/N+S4NGDvyRO4taxuCSJVnfIiWpV5f4//f8R+JC3rkyaE7NsyRax6vh2TXpe5eu/0xs17d0vya8n+4uD+3Xs/vnt/5yc33/8FU1+vsG9z3m12j73L3mebvFXv85z8iv2R/YX99df/fP7y81efv65EX34p1HmLJb6ef/P/i9dl0A==</latexit> a <latexit sha1_base64="MoFVAJMAn0YI/DAGDrQZ+yOSN10=">AACLQnicvZ1JcyNJFYBzhm1olplhLkRwMdM09BBNh7qBGKIJIsar3O3utmx5H087JLksq1su1WixJXn0B/rEjQBOEMGB4GfAgT/AYX4CcCKGCC5zIJfKytoy33uq6rHDtpx673svM18ulVmVagbdzmBYqXz62utf+vJXvvq1N75+4xvf/Na333zr7e/sDXqjfsvbbfW6vf5BszHwuh3f2x12hl3vIOh7jYtm19tvvlgW7+9fev1Bp+fvDCeB99FFo+13zjqtxpAn7R03e93TxslbNyt3K/JrIfviXvjiJgu/ar23V99hx+yU9ViLjdgF85jPhvx1lzXYgH9/yO6xCgt42kfsmqf1+auOfN9jM3aD6464lMclGjz1Bf/d5v99GKb6/H/BHEjtFrfS5T99rrnAblX+Uflz5bPK3yt/qfyz8rmVdS0ZwpcJ/9tUul5w8ubL79b/B2pd8L9Ddm60nBpd/kr415E6FzKnd3LSBc2V9yE7Y7+Qee7wMghkiiiNlvLjcvqbz+oPtm9d/7Dyx8q/eTn8ofJp5a+8JPzL/7b+tOVt/57TBd/nWlfSpifte9z2Na8tj/M7nHXNVsLXvvStw6X8sF7ydQP+qsd/z7ifmiP8rIXpPZ4jHKkr/15kSI/D9IZTW/+f1t5JyLkILfk6S1gO07syhvs8alwUj41lpKQpq2F6EIv0fMKZrP1hhrAWpdt1dWSldRdzIu4Gu+UoC5HXTk4+lmPvCMJCxBDlp+rJ5xrXPF1TVYz0OGMm2/9H8l2fp3SkrOojRFqTyyyw22H09OR/6neyHhfYTc55j/+dIfzQMVqGH/lxTfNHR3oZ/pjWEffhmH/P44dfuiewF7rdlVEa2baaVypQpCgf8qwfy353zH8LH64TPrwn+2Is35Npysq5bF3C5x/w/5b4+2P56pLHmBoLxGhyP+wd4RLd4r3NSsju87GiK/Xfl6PuLPZqlhgVspwB7zP8kKPHvb7sH/Q77tzqfsOXPZ4iDsOxTbTjbg5Z6cyk/K
1. What is the focus and contribution of the paper on robust OT/p-wasserstein-dist?
2. What are the strengths and weaknesses of the proposed formulation and solution method?
3. Do you have any concerns regarding the restriction on non-overlapping groups?
4. How does the reviewer assess the novelty and technical contribution of the paper compared to prior works?
5. What are the suggestions for improving the comparisons between FROT and SRW?
6. Why is T set in an ad hoc manner, and what would be a better approach?
7. Would it be beneficial to include visual examples of images/pairs that demonstrate the advantage of FROT over SRW?
Review
This work proposes variants of robust OT / the p-Wasserstein distance, Eqs. (3)/(4), where the ground cost is, in some sense, the maximum over the costs of (pre-fixed) groups of features. The motivation is similar to that of feature selection: perhaps only a few of these feature groups are critical/sufficient for OT purposes. So it can also be understood as joint feature-group selection with OT. The resulting convex problem is proposed to be solved using Frank–Wolfe (FW), whose details (including convergence) are presented.
Pros: Though similar in spirit to SRW, the proposed formulation has a few advantages: a) it allows any cost, b) it is convex, c) FW leads to a scalable solver. Overall, the paper is very well written, with nice organization and sufficient detail.
Cons: Pre-fixed groups, and more importantly non-overlapping groups, seem restrictive, especially because feature selection with overlapping groups is well studied (e.g., https://hal.inria.fr/inria-00628498/document , https://papers.nips.cc/paper/4275-efficient-methods-for-overlapping-group-lasso.pdf , among others).
Major Comments: Given SRW and other robust/min-max OT works, and the multitude of feature-selection/group-lasso works, the novelty seems limited. Even in terms of optimization, it seems a straightforward application of FW. This seems to restrict the technical contribution. In Section 5.2, I am assuming that for FROT all layers were used as input, whereas for SRW only a few are used. Is this the case? If so, perhaps a variant of FROT that uses exactly the same input as SRW should be included for a fair comparison (along with FROT using all layers). The authors do seem to agree that the improvement is largely due to this skew in inputs; it would be nice to clarify this. Why is T set in an ad hoc manner, for example T=10 in the synthetic experiments and T=3 in the real-world ones? Why not fix it or validate it? Also, convergence plots showing objective vs. T as well as accuracy vs. T would be insightful if included. It may also be insightful to visually inspect some critical example image pairs that highlight why FROT may work better than SRW (more like Fig. 5 in the appendix).
ICLR
Title Feature-Robust Optimal Transport for High-Dimensional Data Abstract Optimal transport is a machine learning problem with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature-robust optimal transport (FROT) for high-dimensional data, which solves high-dimensional OT problems using feature selection to avoid the curse of dimensionality. Specifically, we find a transport plan with discriminative features. To this end, we formulate the FROT problem as a min–max optimization problem. We then propose a convex formulation of the FROT problem and solve it using a Frank–Wolfe-based optimization algorithm, whereby the subproblem can be efficiently solved using the Sinkhorn algorithm. Since FROT finds the transport plan from selected features, it is robust to noise features. To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence. By conducting synthetic and benchmark experiments, we demonstrate that the proposed method can find a strong correspondence by determining important layers. We show that the FROT algorithm achieves state-of-the-art performance on real-world semantic correspondence datasets. 1 INTRODUCTION Optimal transport (OT) is a machine learning problem with several applications in the computer vision and natural language processing communities. The applications include Wasserstein distance estimation (Peyré et al., 2019), domain adaptation (Yan et al., 2018), multitask learning (Janati et al., 2019), barycenter estimation (Cuturi & Doucet, 2014), semantic correspondence (Liu et al., 2020), feature matching (Sarlin et al., 2019), and photo album summarization (Liu et al., 2019). The OT problem is extensively studied in the computer vision community as the earth mover's distance (EMD) (Rubner et al., 2000). However, the computational cost of EMD is cubic and highly expensive. Recently, the entropic regularized EMD problem was proposed; this problem can be solved using the Sinkhorn algorithm with a quadratic cost (Cuturi, 2013). Owing to the development of the Sinkhorn algorithm, researchers have replaced the EMD computation with its regularized counterparts. However, the optimal transport problem for high-dimensional data has remained unsolved for many years. Recently, a robust variant of OT was proposed for high-dimensional OT problems and used for divergence estimation (Paty & Cuturi, 2019; 2020). In the robust OT framework, the transport plan is computed with the discriminative subspace of the two data matrices X ∈ R^{d×n} and Y ∈ R^{d×m}. The subspace can be obtained using dimensionality reduction. An advantage of the subspace robust approach is that it does not require prior information about the subspace. However, given prior information such as feature groups, we can consider a computationally efficient formulation. The computation of the subspace can be expensive if the dimensionality of the data is high (for example, 10^4). One of the most common forms of prior information is a feature group. The use of group features is popular in feature selection problems in the biomedical domain and has been extensively studied in Group Lasso (Yuan & Lin, 2006). The key idea of Group Lasso is to prespecify the group variables and select the set of group variables using the group norm (also known as the sum of ℓ2 norms).
For example, if we use a pretrained neural network as a feature extractor and compute OT using the features, then we require careful selection of important layers to compute OT. Specifically, each layer output is regarded as a grouped input. Therefore, using a feature group as prior information is a natural setup and is important for considering OT for deep neural networks (DNNs). In this paper, we propose a high-dimensional optimal transport method by utilizing prior information in the form of grouped features. Specifically, we propose a feature-robust optimal transport (FROT) problem, for which we select distinct group feature sets to estimate a transport plan instead of determining its distinct subsets, as proposed in (Paty & Cuturi, 2019; 2020). To this end, we formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem, which can be accurately solved using the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013). The FROT subproblem can be efficiently solved using the Sinkhorn algorithm (Cuturi, 2013). An advantage of FROT is that it can yield a transport plan from high-dimensional data using feature selection, through which the significance of the features is obtained without any additional cost. Therefore, the FROT formulation is highly suited for high-dimensional OT problems. Through synthetic experiments, we first demonstrate that the proposed FROT is robust to noise dimensions (see Figure 1). Furthermore, we apply FROT to a semantic correspondence problem (Liu et al., 2020) and show that the proposed algorithm achieves state-of-the-art performance.
Contribution:
• We propose a feature-robust optimal transport (FROT) problem and derive a simple and efficient Frank–Wolfe-based algorithm. Furthermore, we propose a feature-robust Wasserstein distance (FRWD).
• We apply FROT to a high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance-based feature selection algorithm at a lower computational cost than the original algorithm.
• We use FROT for the layer selection problem in a semantic correspondence task and show that the proposed algorithm outperforms existing baseline algorithms.
2 BACKGROUND In this section, we briefly introduce the OT problem. Optimal transport (OT): The following are given: independent and identically distributed (i.i.d.) samples X = {x_i}_{i=1}^n ∈ R^{d×n} from a d-dimensional distribution p, and i.i.d. samples Y = {y_j}_{j=1}^m ∈ R^{d×m} from the d-dimensional distribution q. In the Kantorovich relaxation of OT, admissible couplings are defined by the set of transport plans U(µ, ν) = {Π ∈ R_+^{n×m} : Π 1_m = a, Π^⊤ 1_n = b}, where Π ∈ R_+^{n×m} is called the transport plan, 1_n is the n-dimensional vector whose elements are ones, and a = (a_1, a_2, . . . , a_n)^⊤ ∈ R_+^n and b = (b_1, b_2, . . . , b_m)^⊤ ∈ R_+^m are the weights. The OT problem between two discrete measures µ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j} determines the optimal transport plan of the following problem:
min_{Π∈U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j),   (1)
where c(x, y) is a cost function. For example, the squared Euclidean distance is used, that is, c(x, y) = ‖x − y‖_2^2. Solving Eq. (1) (also known as the earth mover's distance) with linear programming requires O(n^3) (n = m) computation, which is computationally expensive.
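To make Eq. (1) concrete, the following is a minimal NumPy/SciPy sketch (not the authors' implementation) that builds the squared-Euclidean cost matrix for two small point clouds and solves the resulting linear program; the marginal-constraint construction and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n, m, d = 5, 6, 3
X = rng.normal(size=(n, d))            # samples {x_i}
Y = rng.normal(size=(m, d))            # samples {y_j}
a = np.full(n, 1.0 / n)                # uniform source weights
b = np.full(m, 1.0 / m)                # uniform target weights

C = cdist(X, Y, metric="sqeuclidean")  # [C]_ij = ||x_i - y_j||_2^2

# Marginal constraints: Pi @ 1_m = a and Pi^T @ 1_n = b, with Pi flattened row-major.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # row-sum constraints
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # column-sum constraints
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
Pi = res.x.reshape(n, m)               # optimal transport plan for Eq. (1)
print("EMD objective:", float(C.ravel() @ res.x))
```

Even in this toy form the LP has n·m variables, so for realistic n and m it quickly becomes impractical, which motivates the regularized formulation below.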
To address this, an entropic-regularized optimal transport is used (Cuturi, 2013):
min_{Π∈U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i, y_j) + εH(Π),
where ε ≥ 0 is the regularization parameter and H(Π) = Σ_{i=1}^n Σ_{j=1}^m π_{ij}(log(π_{ij}) − 1) is the entropic regularization. If ε = 0, then the regularized OT problem reduces to the EMD problem. Owing to entropic regularization, the entropic regularized OT problem can be accurately solved using Sinkhorn iteration (Cuturi, 2013) with an O(nm) computational cost (see Algorithm 1).
Wasserstein distance: If the cost function is defined as c(x, y) = d(x, y) with d(x, y) a distance function and p ≥ 1, then we define the p-Wasserstein distance of two discrete measures µ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j} as
W_p(µ, ν) = ( min_{Π∈U(µ,ν)} Σ_{i=1}^n Σ_{j=1}^m π_{ij} d(x_i, y_j)^p )^{1/p}.
Recently, a robust variant of the Wasserstein distance, called the subspace robust Wasserstein distance (SRW), was proposed (Paty & Cuturi, 2019). The SRW computes the OT problem in a discriminative subspace, which can be determined by solving dimensionality-reduction problems. Owing to this robustness, it can compute the Wasserstein distance from noisy data. The SRW is given as
SRW(µ, ν) = ( min_{Π∈U(µ,ν)} max_{U∈R^{d×k}, U^⊤U=I_k} Σ_{i=1}^n Σ_{j=1}^m π_{ij} ‖U^⊤x_i − U^⊤y_j‖_2^2 )^{1/2},   (2)
where U is the projection matrix with k ≤ d, and I_k ∈ R^{k×k} is the identity matrix. The SRW or its relaxed problem can be efficiently estimated using either eigenvalue decomposition or the Frank–Wolfe algorithm.
3 PROPOSED METHOD This paper proposes FROT. We assume that the vectors are grouped as x = (x^{(1)⊤}, . . . , x^{(L)⊤})^⊤ and y = (y^{(1)⊤}, . . . , y^{(L)⊤})^⊤. Here, x^{(ℓ)} ∈ R^{d_ℓ} and y^{(ℓ)} ∈ R^{d_ℓ} are d_ℓ-dimensional vectors, where Σ_{ℓ=1}^L d_ℓ = d. This setting is useful if we know the explicit group structure for the feature vectors a priori. In an application to L-layer neural networks, we consider x^{(ℓ)} and y^{(ℓ)} as outputs of the ℓth layer of the network. If we do not have a priori information, we can consider each feature independently (i.e., d_1 = d_2 = . . . = d_L = 1 and L = d). All proofs in this section are provided in the Appendix.
3.1 FEATURE-ROBUST OPTIMAL TRANSPORT (FROT) The FROT formulation is given by
min_{Π∈U(µ,ν)} max_{α∈Σ_L} Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ c(x_i^{(ℓ)}, y_j^{(ℓ)}),   (3)
where Σ_L = {α ∈ R_+^L : α^⊤1_L = 1} is the probability simplex. The underlying concept of FROT is to estimate the transport plan Π using distinct groups with large distances between {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m. We note that determining the transport plan in nondistinct groups is difficult because the data samples in {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m overlap. By contrast, in distinct groups, {x_i^{(ℓ)}}_{i=1}^n and {y_j^{(ℓ)}}_{j=1}^m are different, and this aids in determining an optimal transport plan. This is intrinsically similar to the subspace robust Wasserstein distance (Paty & Cuturi, 2019), which estimates the transport plan in a discriminative subspace, while our approach selects important groups. Therefore, FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq. (1), whereas the subspace robust version uses a dimensionality-reduction counterpart.
Algorithm 1 Sinkhorn algorithm.
1: Input: a, b, C, ε, t_max
2: Initialize K = e^{−C/ε}, u = 1_n, v = 1_m, t = 0
3: while t ≤ t_max and not converged do
4: u = a/(Kv)
5: v = b/(K^⊤u)
6: t = t + 1
7: end while
8: return Π = diag(u) K diag(v)
Algorithm 2 FROT with the Frank–Wolfe algorithm.
1: Input: {x_i}_{i=1}^n, {y_j}_{j=1}^m, η, and ε.
2: Initialize Π, compute {C_ℓ}_{ℓ=1}^L.
3: for t = 0 . . . T do
4: Π̂ = argmin_{Π∈U(µ,ν)} 〈Π, M_{Π^{(t)}}〉 + εH(Π)
5: Π^{(t+1)} = (1 − γ)Π^{(t)} + γΠ̂
6: with γ = 2/(2 + t).
7: end for
8: return Π^{(T)}
Using FROT, we can define a p-feature robust Wasserstein distance (p-FRWD).
Proposition 1 For the distance function d(x, y),
FRWD_p(µ, ν) = ( min_{Π∈U(µ,ν)} max_{α∈Σ_L} Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ d(x_i^{(ℓ)}, y_j^{(ℓ)})^p )^{1/p},   (4)
is a distance for p ≥ 1.
Note that we can show that 2-FRWD is a special case of SRW with d(x, y) = ‖x − y‖_2 (see Appendix). The key difference between SRW and FRWD is that FRWD can use any distance, while SRW can only use d(x, y) = ‖x − y‖_2.
3.2 FROT OPTIMIZATION Here, we propose two FROT algorithms based on the Frank–Wolfe algorithm and linear programming.
Frank–Wolfe: We propose a continuous variant of the FROT algorithm using the Frank–Wolfe algorithm, which can be fully differentiable. To this end, we introduce entropic regularization for α and rewrite FROT as a function of Π. Therefore, we solve the following problem for α:
min_{Π∈U(µ,ν)} max_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{i=1}^n Σ_{j=1}^m π_{ij} Σ_{ℓ=1}^L α_ℓ c(x_i^{(ℓ)}, y_j^{(ℓ)}) − ηH(α),
where η ≥ 0 is the regularization parameter and H(α) = Σ_{ℓ=1}^L α_ℓ(log(α_ℓ) − 1) is the entropic regularization for α. An advantage of entropic regularization is that the nonnegativity constraint is naturally satisfied and the entropic regularizer is a strongly convex function.
Lemma 2 The optimal solution of the optimization problem α* = argmax_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{ℓ=1}^L α_ℓ φ_ℓ − ηH(α) and a fixed admissible transport plan Π ∈ U(µ, ν), is given by
α*_ℓ = exp(φ_ℓ/η) / Σ_{ℓ′=1}^L exp(φ_{ℓ′}/η), with J_η(Π, α*) = η log( Σ_{ℓ=1}^L exp(φ_ℓ/η) ) + η.
Using Lemma 2 (or Lemma 4 in Nesterov (2005)) together with the setting φ_ℓ = Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}) = 〈Π, C_ℓ〉, [C_ℓ]_{ij} = c(x_i^{(ℓ)}, y_j^{(ℓ)}), the global problem is equivalent to
min_{Π∈U(µ,ν)} G_η(Π), with G_η(Π) = η log( Σ_{ℓ=1}^L exp( 〈Π, C_ℓ〉/η ) ).   (5)
Note that this is known as a smoothed max-operator (Nesterov, 2005; Blondel et al., 2018). Specifically, the regularization parameter η controls the "smoothness" of the maximum.
Proposition 3 G_η(Π) is a convex function with respect to Π.
The derived optimization problem of FROT is convex. Therefore, we can determine globally optimal solutions. Note that the SRW optimization problem is not jointly convex (Paty & Cuturi, 2019) in the projection matrix and the transport plan. In this study, we employ the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013), with which we approximate G_η(Π) with linear functions at Π^{(t)} and move Π toward the optimal solution in the convex set (see Algorithm 2). The derivative of the loss function G_η(Π) at Π^{(t)} is given by
∂G_η(Π)/∂Π |_{Π=Π^{(t)}} = Σ_{ℓ=1}^L α_ℓ^{(t)} C_ℓ = M_{Π^{(t)}}, with α_ℓ^{(t)} = exp( 〈Π^{(t)}, C_ℓ〉/η ) / Σ_{ℓ′=1}^L exp( 〈Π^{(t)}, C_{ℓ′}〉/η ).
Then, we update the transport plan by solving the EMD problem
Π^{(t+1)} = (1 − γ)Π^{(t)} + γΠ̂ with Π̂ = argmin_{Π∈U(µ,ν)} 〈Π, M_{Π^{(t)}}〉,
where γ = 2/(2 + t). Note that M_{Π^{(t)}} is given by the weighted sum of the cost matrices. Thus, we can utilize multiple features to estimate the transport plan Π for the relaxed problem in Eq. (5). Using the Frank–Wolfe algorithm, we can obtain the optimal solution. However, solving the EMD problem requires a cubic computational cost, which can be expensive if n and m are large. To address this, we can solve the regularized OT problem, which requires O(nm). We denote the Frank–Wolfe algorithm with EMD as FW-EMD and the Frank–Wolfe algorithm with Sinkhorn as FW-Sinkhorn.
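As a rough, hedged illustration of Algorithms 1 and 2 (the FW-Sinkhorn variant), the sketch below implements the Sinkhorn scaling iterations and the Frank–Wolfe loop: the group weights α^(t) are the softmax of 〈Π^(t), C_ℓ〉/η from Lemma 2, M is the α-weighted sum of the per-group cost matrices, and the step size is γ = 2/(2 + t). The function names (`sinkhorn`, `frot_frank_wolfe`), defaults, and initialization are our own choices, not taken from the paper's code.

```python
import numpy as np

def sinkhorn(a, b, C, eps, t_max=1000, tol=1e-9):
    """Algorithm 1: entropic-regularized OT via Sinkhorn scaling iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(t_max):
        u_prev = u
        u = a / (K @ v)
        v = b / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]      # Pi = diag(u) K diag(v)

def frot_frank_wolfe(a, b, cost_matrices, eta, eps, n_iter=10):
    """Sketch of Algorithm 2 (FW-Sinkhorn) for the smoothed FROT objective, Eq. (5)."""
    Pi = np.outer(a, b)                     # feasible initialization of the transport plan
    for t in range(n_iter):
        # Lemma 2: alpha^(t) is the softmax of <Pi, C_l> / eta over the L groups.
        phi = np.array([np.sum(Pi * C) for C in cost_matrices])
        alpha = np.exp((phi - phi.max()) / eta)
        alpha /= alpha.sum()
        # M_{Pi^(t)}: alpha-weighted sum of the per-group cost matrices.
        M = sum(al * C for al, C in zip(alpha, cost_matrices))
        Pi_hat = sinkhorn(a, b, M, eps)     # regularized linearized subproblem
        gamma = 2.0 / (2.0 + t)             # Frank-Wolfe step size
        Pi = (1.0 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha
```

Given per-group cost matrices `cost_matrices = [C_1, ..., C_L]`, a call such as `frot_frank_wolfe(a, b, cost_matrices, eta=0.1, eps=0.01)` returns both the transport plan and the final group weights, the latter serving as feature-group importance scores.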
Computational complexity: The proposed method depends on the Sinkhorn algorithm, which requires an O(nm) operation. The computation of the cost matrix in each subproblem needs an O(Lnm) operation, where L is the number of groups. Therefore, the entire complexity is O(TLnm), where T is the number of Frank–Wolfe iterations (in general, T = 10 is sufficient).
Proposition 4 For each t ≥ 1, the iterate Π^{(t)} of Algorithm 2 satisfies
G_η(Π^{(t)}) − G_η(Π*) ≤ (4σ_max(Φ^⊤Φ) / (η(t + 2))) (1 + δ),
where σ_max(Φ^⊤Φ) is the largest eigenvalue of the matrix Φ^⊤Φ, Φ = (vec(C_1), vec(C_2), . . . , vec(C_L))^⊤, and δ ≥ 0 is the accuracy to which the internal linear subproblems are solved.
Based on Proposition 4, the number of iterations depends on η, ε, and the number of groups. If we set a small η, convergence requires more time. In addition, if we use entropic regularization with a large ε, the δ in Proposition 4 can be large. Finally, if we use more groups, the largest eigenvalue of the matrix Φ^⊤Φ can be larger. Note that the constant term of the upper bound is large; however, the Frank–Wolfe algorithm converges quickly in practice.
Linear Programming: Because lim_{η→0+} G_η(Π) = max_{ℓ∈{1,2,...,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}), the FROT problem can also be written as
min_{Π∈U(µ,ν)} max_{ℓ∈{1,2,...,L}} Σ_{i=1}^n Σ_{j=1}^m π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}).   (6)
Because the objective is the max of linear functions, it is convex with respect to Π. We can solve the problem via linear programming:
min_{Π∈U(µ,ν), t} t, s.t. 〈Π, C_ℓ〉 ≤ t, ℓ = 1, 2, . . . , L.   (7)
This optimization can be easily solved using an off-the-shelf LP package. However, the computational cost of this LP problem is high in general (i.e., O(n^3), n = m).
3.3 APPLICATION: SEMANTIC CORRESPONDENCE We applied our proposed FROT algorithm to semantic correspondence. Semantic correspondence is the problem of determining the matching of objects in two images. That is, given input image pairs (A, B) with common objects, we formulate the semantic correspondence problem as estimating the transport plan from the key points in A to those in B; this framework was proposed in (Liu et al., 2020). In Figure 2, we show an overview of our proposed framework.
Cost matrix computation C_ℓ: In our framework, we employ a pretrained convolutional neural network to extract dense feature maps for each convolutional layer. The dense feature map of the ℓth layer output of the sth image is given by
f^{(ℓ,s)}_{q+(r−1)h_s} ∈ R^{d_ℓ}, q = 1, 2, . . . , h_s, r = 1, 2, . . . , w_s, ℓ = 1, 2, . . . , L,
where w_s and h_s are the width and height of the sth image, respectively, and d_ℓ is the dimension of the ℓth layer's feature map. Note that because the dimension of the dense feature map differs across layers, we resample each feature map to the size of the 1st layer's feature map (i.e., h_s × w_s). The ℓth layer's cost matrix for images s and s′ is given by
[C_ℓ]_{ij} = ‖f_i^{(ℓ,s)} − f_j^{(ℓ,s′)}‖_2^2, i = 1, 2, . . . , w_s h_s, j = 1, 2, . . . , w_{s′} h_{s′}.
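A small, hedged sketch of this per-layer cost matrix construction follows, assuming the dense feature maps have already been resampled to a common h × w grid and flattened to shape (h·w, d_ℓ); the function and variable names (`layer_cost_matrices`, `feats_s`, `feats_t`) are illustrative, not from the paper's code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def layer_cost_matrices(feats_s, feats_t):
    """[C_l]_ij = ||f_i^(l,s) - f_j^(l,s')||_2^2 for each layer l.

    feats_s, feats_t: lists of arrays of shape (h*w, d_l), one per layer,
    holding the flattened dense feature maps of the source and target images.
    """
    return [cdist(Fs, Ft, metric="sqeuclidean") for Fs, Ft in zip(feats_s, feats_t)]

# Toy usage: 3 layers with different channel widths on an 8x8 grid.
rng = np.random.default_rng(0)
h, w = 8, 8
dims = [64, 128, 256]
feats_s = [rng.normal(size=(h * w, d)) for d in dims]
feats_t = [rng.normal(size=(h * w, d)) for d in dims]
Cs = layer_cost_matrices(feats_s, feats_t)   # L matrices of shape (h*w, h*w)
```

The resulting list {C_ℓ} can then be fed to the Frank–Wolfe loop sketched earlier, which plays the role of the layer selection step in the semantic correspondence pipeline.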
[Figure 2 residue; recoverable labels only: Source image, Target image, CAM, Feature Robust Optimal Transport (FROT), x^(ℓ), y^(ℓ), C_ℓ, a.]
1. What is the focus and contribution of the paper on optimal transport?
2. What are the strengths and weaknesses of the proposed framework, FROT, particularly in terms of feature selection and robustness to noise?
3. Do you have any concerns regarding the novelty and extensiveness of the experiments in the paper?
4. How does the reviewer assess the clarity and thoroughness of the discussion on the pros and cons of feature selection vs feature dimensionality reduction?
5. What are the suggestions for improving the paper, such as exploring joint optimization of feature generation and selection, comparing with additional baselines, and including variations of the metric across trials?
Review
The proposed framework FROT (feature-robust optimal transport) seeks to select feature groups to both speed up OT computation for high-dimensional data and make it more robust to noise. The exposition is generally clear. My main concerns are limited novelty and a lack of extensive experiments.
The paper draws a contrast with the prior work SRW, which yields a discriminative subspace via dimensionality reduction, and offers a dual perspective of using feature selection instead. A thorough discussion of the pros and cons of feature selection vs. feature dimensionality reduction would add insight. Traditional entropy-regularized OT regularizes using the entropy of the transport plan Π, whereas FROT regularizes using the probability distribution α; one expects a discussion of the effect of this choice. Currently, the optimization for group selection is done independently of the optimization that produces the features, for instance by the choice of a pretrained network in the semantic correspondence application. It is worthwhile to explore joint optimization of feature generation and selection for downstream tasks.
For the claim of robustness to noise, experiments on data of dimension higher than 10 would be desirable. There should be more extensive experiments applying FROT to more tasks and comparing it with additional baselines. Figure 3 compares the objective scores of FW-EMD and FW-Sinkhorn with that of exact OT; there are many other OT algorithms, such as the tree-based methods referenced in the paper, that could be compared against. These plots should also include the variation of the metric across trials, and similarly for the semantic correspondence results in Table 1.
Thank you, authors, for your response.
ICLR
Title Feature-Robust Optimal Transport for High-Dimensional Data Abstract Optimal transport is a machine learning problem with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature-robust optimal transport (FROT) for highdimensional data, which solves high-dimensional OT problems using feature selection to avoid the curse of dimensionality. Specifically, we find a transport plan with discriminative features. To this end, we formulate the FROT problem as a min–max optimization problem. We then propose a convex formulation of the FROT problem and solve it using a Frank–Wolfe-based optimization algorithm, whereby the subproblem can be efficiently solved using the Sinkhorn algorithm. Since FROT finds the transport plan from selected features, it is robust to noise features. To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence. By conducting synthetic and benchmark experiments, we demonstrate that the proposed method can find a strong correspondence by determining important layers. We show that the FROT algorithm achieves state-of-the-art performance in real-world semantic correspondence datasets. 1 INTRODUCTION Optimal transport (OT) is a machine learning problem with several applications in the computer vision and natural language processing communities. The applications include Wasserstein distance estimation (Peyré et al., 2019), domain adaptation (Yan et al., 2018), multitask learning (Janati et al., 2019), barycenter estimation (Cuturi & Doucet, 2014), semantic correspondence (Liu et al., 2020), feature matching (Sarlin et al., 2019), and photo album summarization (Liu et al., 2019). The OT problem is extensively studied in the computer vision community as the earth mover’s distance (EMD) (Rubner et al., 2000). However, the computational cost of EMD is cubic and highly expensive. Recently, the entropic regularized EMD problem was proposed; this problem can be solved using the Sinkhorn algorithm with a quadratic cost (Cuturi, 2013). Owing to the development of the Sinkhorn algorithm, researchers have replaced the EMD computation with its regularized counterparts. However, the optimal transport problem for high-dimensional data has remained unsolved for many years. Recently, a robust variant of the OT was proposed for high-dimensional OT problems and used for divergence estimation (Paty & Cuturi, 2019; 2020). In the robust OT framework, the transport plan is computed with the discriminative subspace of the two data matrices X ∈ Rd×n and Y ∈ Rd×m. The subspace can be obtained using dimensionality reduction. An advantage of the subspace robust approach is that it does not require prior information about the subspace. However, given prior information such as feature groups, we can consider a computationally efficient formulation. The computation of the subspace can be expensive if the dimensionality of data is high, for example, 104. One of the most common prior information items is a feature group. The use of group features is popular in feature selection problems in the biomedical domain and has been extensively studied in Group Lasso (Yuan & Lin, 2006). The key idea of Group Lasso is to prespecify the group variables and select the set of group variables using the group norm (also known as the sum of `2 norms). 
For example, if we use a pretrained neural network as a feature extractor and compute OT using the features, then we require careful selection of important layers to compute OT. Specifically, each layer output is regarded as a grouped input. Therefore, using a feature group as prior information is a natural setup and is important for considering OT for deep neural networks (DNNs). In this paper, we propose a high-dimensional optimal transport method by utilizing prior information in the form of grouped features. Specifically, we propose a feature-robust optimal transport (FROT) problem, for which we select distinct group feature sets to estimate a transport plan instead of determining its distinct subsets, as proposed in (Paty & Cuturi, 2019; 2020). To this end, we formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem, which can be accurately solved using the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013). The FROT’s subproblem can be efficiently solved using the Sinkhorn algorithm (Cuturi, 2013). An advantage of FROT is that it can yield a transport plan from high-dimensional data using feature selection, using which the significance of the features is obtained without any additional cost. Therefore, the FROT formulation is highly suited for high-dimensional OT problems. Through synthetic experiments, we initially demonstrate that the proposed FROT is robust to noise dimensions (See Figure 1). Furthermore, we apply FROT to a semantic correspondence problem (Liu et al., 2020) and show that the proposed algorithm achieves SOTA performance. Contribution: • We propose a feature robust optimal transport (FROT) problem and derive a simple and efficient Frank–Wolfe based algorithm. Furthermore, we propose a feature-robust Wasserstein distance (FRWD). • We apply FROT to a high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance-based feature selection algorithm with less computational cost than the original algorithm. • We used FROT for the layer selection problem in a semantic correspondence problem and showed that the proposed algorithm outperforms existing baseline algorithms. 2 BACKGROUND In this section, we briefly introduce the OT problem. Optimal transport (OT): The following are given: independent and identically distributed (i.i.d.) samples X = {xi}ni=1 ∈ Rd×n from a d-dimensional distribution p, and i.i.d. samples Y = {yj}mj=1 ∈ Rd×m from the d-dimensional distribution q. In the Kantorovich relaxation of OT, admissible couplings are defined by the set of the transport plan: U(µ, ν) = {Π ∈ Rn×m+ : Π1m = a,Π>1n = b}, where Π ∈ Rn×m+ is called the transport plan, 1n is the n-dimensional vector whose elements are ones, and a = (a1, a2, . . . , an)> ∈ Rn+ and b = (b1, b2, . . . , bm)> ∈ Rm+ are the weights. The OT problem between two discrete measures µ = ∑n i=1 aiδxi and ν = ∑m j=1 bjδyj determines the optimal transport plan of the following problem: min Π∈U(µ,ν) n∑ i=1 m∑ j=1 πijc(xi,yj), (1) where c(x,y) is a cost function. For example, the squared Euclidean distance is used, that is, c(x,y) = ‖x − y‖22. To solve the OT problem, Eq. (1) (also known as the earth mover’s distance) using linear programming requires O(n3), (n = m) computation, which is computationally expensive. To address this, an entropic-regularized optimal transport is used (Cuturi, 2013). 
min Π∈U(µ,ν) n∑ i=1 m∑ j=1 πijc(xi,yj) + H(Π), where ≥ 0 is the regularization parameter, and H(Π) = ∑ni=1 ∑m j=1 πij(log(πij) − 1) is the entropic regularization. If = 0, then the regularized OT problem reduces to the EMD problem. Owing to entropic regularization, the entropic regularized OT problem can be accurately solved using Sinkhorn iteration (Cuturi, 2013) with a O(nm) computational cost (See Algorithm 1). Wasserstein distance: If the cost function is defined as c(x,y) = d(x,y) with d(x,y) as a distance function and p ≥ 1, then we define the p-Wasserstein distance of two discrete measures µ = ∑n i=1 aiδxi and ν = ∑m j=1 bjδyj as Wp(µ, ν) = min Π∈U(µ,ν) n∑ i=1 m∑ j=1 πijd(xi,yj) p 1/p . Recently, a robust variant of the Wasserstein distance, called the subspace robust Wasserstein distance (SRW), was proposed (Paty & Cuturi, 2019). The SRW computes the OT problem in the discriminative subspace. This can be determined by solving dimensionality-reduction problems. Owing to the robustness, it can compute the Wasserstein from noisy data. The SRW is given as SRW(µ, ν) = min Π∈U(µ,ν) max U∈Rd×k,U>U=Ik n∑ i=1 m∑ j=1 πij‖U>xi −U>yj‖22 1 2 , (2) where U is the projection matrix with k ≤ d, and Ik ∈ Rk×k is the identity matrix. The SRW or its relaxed problem can be efficiently estimated using either eigenvalue decomposition or the Frank–Wolfe algorithm. 3 PROPOSED METHOD This paper proposes FROT. We assume that the vectors are grouped as x = (x(1) > , . . . ,x(L) > )> and y = (y(1) > , . . . ,y(L) > )>. Here, x(`) ∈ Rd` and y(`) ∈ Rd` are the d` dimensional vectors, where ∑L `=1 d` = d. This setting is useful if we know the explicit group structure for the feature vectors a priori. In an application in L-layer neural networks, we consider x(`) and y(`) as outputs of the `th layer of the network. If we do not have a priori information, we can consider each feature independently (i.e., d1 = d2 = . . . = dL = 1 and L = d). All proofs in this section are provided in the Appendix. 3.1 FEATURE-ROBUST OPTIMAL TRANSPORT (FROT) The FROT formulation is given by min Π∈U(µ,ν) max α∈ΣL n∑ i=1 m∑ j=1 πij L∑ `=1 α`c(x (`) i ,y (`) j ), (3) where ΣL = {α ∈ RL+ : α>1L = 1} is the probability simplex. The underlying concept of FROT is to estimate the transport plan Π using distinct groups with large distances between {x(`)i }ni=1 and {y(`)j }mj=1. We note that determining the transport plan in nondistinct groups is difficult because the data samples in {x(`)i }ni=1 and {y (`) j }mj=1 overlap. By contrast, in distinct groups, {x (`) i }ni=1 and {y(`)j }mj=1 are different, and this aids in determining an optimal transport plan. This is an intrinsically similar idea to the subspace robust Wasserstein distance (Paty & Cuturi, 2019), which estimates the transport plan in the discriminative subspace, while our approach selects important groups. Therefore, FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq. (1), whereas the subspace robust version uses dimensionality-reduction counterparts. Algorithm 1 Sinkhorn algorithm. 1: Input: a, b,C, , tmax 2: Initialize K = e−C/ ,u = 1n,v = 1m, t = 0 3: while t ≤ tmax and not converge do 4: u = a/(Kv) 5: v = b/(K>u) 6: t = t+ 1 7: end while 8: return Π = diag(u)Kdiag(v) Algorithm 2 FROT with the Frank–Wolfe. 1: Input: {xi}ni=1, {yj}mj=1, η, and . 2: Initialize Π, compute {C`}L`=1. 3: for t = 0 . . . T do 4: Π̂ = argminΠ∈U(µ,ν)〈Π,MΠ(t)〉 + H(Π) 5: Π(t+1) = (1− γ)Π(t) + γΠ̂ 6: with γ = 22+t . 
7: end for 8: return Π^{(T)} Using FROT, we can define a p-feature-robust Wasserstein distance (p-FRWD). Proposition 1 For a distance function d(x, y), FRWD_p(µ, ν) = ( min_{Π∈U(µ,ν)} max_{α∈Σ_L} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} Σ_{ℓ=1}^{L} α_ℓ d(x_i^{(ℓ)}, y_j^{(ℓ)})^p )^{1/p}, (4) is a distance for p ≥ 1. Note that we can show that 2-FRWD is a special case of SRW with d(x, y) = ‖x − y‖_2 (see Appendix). The key difference between SRW and FRWD is that FRWD can use any distance, while SRW can only use d(x, y) = ‖x − y‖_2. 3.2 FROT OPTIMIZATION Here, we propose two FROT algorithms, one based on the Frank–Wolfe algorithm and one based on linear programming. Frank–Wolfe: We propose a continuous variant of the FROT algorithm using the Frank–Wolfe algorithm, which can be made fully differentiable. To this end, we introduce entropic regularization for α and rewrite FROT as a function of Π only. Specifically, we consider the problem min_{Π∈U(µ,ν)} max_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} Σ_{ℓ=1}^{L} α_ℓ c(x_i^{(ℓ)}, y_j^{(ℓ)}) − ηH(α), where η ≥ 0 is the regularization parameter and H(α) = Σ_{ℓ=1}^{L} α_ℓ(log(α_ℓ) − 1) is the entropic regularization for α. An advantage of entropic regularization is that the nonnegativity constraint is naturally satisfied and the entropic regularizer is a strongly convex function. Lemma 2 The optimal solution of the optimization problem α* = argmax_{α∈Σ_L} J_η(Π, α), with J_η(Π, α) = Σ_{ℓ=1}^{L} α_ℓ φ_ℓ − ηH(α), for a fixed admissible transport plan Π ∈ U(µ, ν), is given by α*_ℓ = exp(φ_ℓ/η) / Σ_{ℓ′=1}^{L} exp(φ_{ℓ′}/η), with J_η(Π, α*) = η log( Σ_{ℓ=1}^{L} exp(φ_ℓ/η) ) + η. Using Lemma 2 (or Lemma 4 in Nesterov (2005)) together with the setting φ_ℓ = Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}) = ⟨Π, C_ℓ⟩, with [C_ℓ]_{ij} = c(x_i^{(ℓ)}, y_j^{(ℓ)}), the global problem is equivalent to min_{Π∈U(µ,ν)} G_η(Π), with G_η(Π) = η log( Σ_{ℓ=1}^{L} exp( ⟨Π, C_ℓ⟩/η ) ). (5) Note that this is known as a smoothed max operator (Nesterov, 2005; Blondel et al., 2018); the regularization parameter η controls the "smoothness" of the maximum. Proposition 3 G_η(Π) is a convex function with respect to Π. The derived optimization problem of FROT is therefore convex, and we can determine globally optimal solutions. Note that the SRW optimization problem is not jointly convex (Paty & Cuturi, 2019) in the projection matrix and the transport plan. In this study, we employ the Frank–Wolfe algorithm (Frank & Wolfe, 1956; Jaggi, 2013), in which we approximate G_η(Π) with a linear function at Π^{(t)} and move Π toward the optimal solution within the convex set (see Algorithm 2). The derivative of the loss function G_η(Π) at Π^{(t)} is given by ∂G_η(Π)/∂Π |_{Π=Π^{(t)}} = Σ_{ℓ=1}^{L} α_ℓ^{(t)} C_ℓ = M_{Π^{(t)}}, with α_ℓ^{(t)} = exp( ⟨Π^{(t)}, C_ℓ⟩/η ) / Σ_{ℓ′=1}^{L} exp( ⟨Π^{(t)}, C_{ℓ′}⟩/η ). Then, we update the transport plan by solving the EMD problem Π^{(t+1)} = (1 − γ)Π^{(t)} + γΠ̂, with Π̂ = argmin_{Π∈U(µ,ν)} ⟨Π, M_{Π^{(t)}}⟩, where γ = 2/(2 + t). Note that M_{Π^{(t)}} is given by a weighted sum of the cost matrices; thus, we can utilize multiple feature groups to estimate the transport plan Π for the relaxed problem in Eq. (5). Using the Frank–Wolfe algorithm, we can obtain the optimal solution. However, solving the EMD problem requires a cubic computational cost, which can be expensive if n and m are large. To address this, we can instead solve the regularized OT subproblem, which requires O(nm). We denote the Frank–Wolfe algorithm with EMD as FW-EMD and the Frank–Wolfe algorithm with Sinkhorn as FW-Sinkhorn.
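The following is a minimal NumPy sketch of Algorithm 1 (Sinkhorn) and of the FW-Sinkhorn variant of Algorithm 2, written to mirror the notation above (a, b, ε, η, C_ℓ, γ = 2/(2+t)). It is an illustrative sketch that assumes precomputed group cost matrices, not the authors' implementation.

```python
import numpy as np

def sinkhorn(a, b, C, eps, t_max=1000, tol=1e-9):
    """Algorithm 1: entropic-regularized OT, returns Pi = diag(u) K diag(v)."""
    K = np.exp(-C / eps)                       # Gibbs kernel K = e^{-C/eps}
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(t_max):
        u_prev = u
        u = a / (K @ v)                        # row scaling
        v = b / (K.T @ u)                      # column scaling
        if np.max(np.abs(u - u_prev)) < tol:   # simple convergence check
            break
    return u[:, None] * K * v[None, :]

def frot_fw_sinkhorn(a, b, C_list, eta, eps, n_iter=10):
    """Algorithm 2 (FW-Sinkhorn) for the smoothed problem in Eq. (5)."""
    Pi = np.outer(a, b)                                   # feasible initialization
    for t in range(n_iter):
        phi = np.array([np.sum(Pi * C) for C in C_list])  # phi_l = <Pi, C_l>
        w = np.exp((phi - phi.max()) / eta)               # Lemma 2: softmax weights alpha
        alpha = w / w.sum()
        M = sum(al * C for al, C in zip(alpha, C_list))   # M_Pi(t): weighted cost matrix
        Pi_hat = sinkhorn(a, b, M, eps)                    # regularized linear subproblem
        gamma = 2.0 / (2.0 + t)                            # Frank-Wolfe step size
        Pi = (1.0 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha
```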
Computational complexity: The proposed method depends on the Sinkhorn algorithm, which requires an O(nm) operation. The computation of the cost matrices in each subproblem needs an O(Lnm) operation, where L is the number of groups. Therefore, the entire complexity is O(TLnm), where T is the number of Frank–Wolfe iterations (in general, T = 10 is sufficient). Proposition 4 For each t ≥ 1, the iterate Π^{(t)} of Algorithm 2 satisfies G_η(Π^{(t)}) − G_η(Π*) ≤ 4σ_max(Φ^⊤Φ)(1 + δ) / (η(t + 2)), where σ_max(Φ^⊤Φ) is the largest eigenvalue of the matrix Φ^⊤Φ, Φ = (vec(C_1), vec(C_2), . . . , vec(C_L))^⊤, and δ ≥ 0 is the accuracy to which the internal linear subproblems are solved. Based on Proposition 4, the number of iterations depends on η, ε, and the number of groups. If we set a small η, convergence requires more time. In addition, if we use entropic regularization with a large ε, the δ in Proposition 4 can be large. Finally, if we use more groups, the largest eigenvalue of the matrix Φ^⊤Φ can be larger. Note that the constant term of the upper bound is large; however, the Frank–Wolfe algorithm converges quickly in practice. Linear programming: Because lim_{η→0+} G_η(Π) = max_{ℓ∈{1,2,...,L}} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}), the FROT problem can also be written as min_{Π∈U(µ,ν)} max_{ℓ∈{1,2,...,L}} Σ_{i=1}^{n} Σ_{j=1}^{m} π_{ij} c(x_i^{(ℓ)}, y_j^{(ℓ)}). (6) Because the objective is the maximum of linear functions, it is convex with respect to Π. We can solve this problem via linear programming: min_{Π∈U(µ,ν), t} t, s.t. ⟨Π, C_ℓ⟩ ≤ t, ℓ = 1, 2, . . . , L. (7) This optimization can easily be solved using an off-the-shelf LP package. However, the computational cost of this LP problem is high in general (i.e., O(n^3) for n = m). 3.3 APPLICATION: SEMANTIC CORRESPONDENCE We apply the proposed FROT algorithm to semantic correspondence. Semantic correspondence is the problem of determining the matching of objects in two images: given an input image pair (A, B) with common objects, we formulate the semantic correspondence problem as estimating the transport plan from the key points in A to those in B; this framework was proposed in (Liu et al., 2020). Figure 2 shows an overview of our proposed framework. Cost matrix computation C_ℓ: In our framework, we employ a pretrained convolutional neural network to extract dense feature maps from each convolutional layer. The dense feature map of the ℓth layer of the sth image is given by f^{(ℓ,s)}_{q+(r−1)h_s} ∈ R^{d_ℓ}, q = 1, 2, . . . , h_s, r = 1, 2, . . . , w_s, ℓ = 1, 2, . . . , L, where w_s and h_s are the width and height of the sth image, respectively, and d_ℓ is the dimension of the ℓth layer's feature map. Note that because the spatial dimensions of the dense feature maps differ across layers, we resample each layer's feature map to the size of the 1st layer's feature map (i.e., h_s × w_s). The ℓth layer's cost matrix for images s and s′ is given by [C_ℓ]_{ij} = ‖f_i^{(ℓ,s)} − f_j^{(ℓ,s′)}‖_2^2, i = 1, 2, . . . , w_s h_s, j = 1, 2, . . . , w_{s′} h_{s′}.
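As an illustration of this step, the sketch below computes the per-layer cost matrices from already-extracted and already-resampled feature maps; feats_s and feats_t are assumed to be lists of (h·w, d_ℓ) arrays, one per layer, and the names are illustrative.

```python
import numpy as np

def layer_cost_matrices(feats_s, feats_t):
    """[C_l]_ij = ||f_i^(l,s) - f_j^(l,s')||_2^2 for every layer l."""
    C_list = []
    for Fs, Ft in zip(feats_s, feats_t):               # Fs: (h_s*w_s, d_l), Ft: (h_t*w_t, d_l)
        sq_s = (Fs ** 2).sum(axis=1)[:, None]
        sq_t = (Ft ** 2).sum(axis=1)[None, :]
        C_list.append(sq_s + sq_t - 2.0 * Fs @ Ft.T)   # pairwise squared distances
    return C_list

# These C_l feed directly into frot_fw_sinkhorn(a, b, C_list, eta, eps) sketched above.
```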
(Figure 2: overview of the proposed framework for semantic correspondence. A source image and a target image are passed through a pretrained CNN (with CAM); the layer-wise feature maps x^(ℓ) and y^(ℓ) yield cost matrices C_ℓ, and the transport plan is estimated by Feature Robust Optimal Transport (FROT).)
1. What is the focus of the paper, and what are the proposed solutions to the feature-robust optimal transport problem?
2. What are the strengths and weaknesses of the paper, particularly regarding the presentation and the convergence guarantee?
3. How does the reviewer assess the potential applications of FROT, and how does it compare to other works in the field?
4. What are some suggestions for improving the introduction and presenting the contributions in a more organized way?
5. What is the significance of the high-dimensional aspect of the problem, and how is it addressed in the paper?
6. How does the reviewer evaluate the robustness of the proposed methods, and what are some potential concerns?
Review
Review Summary: The authors try to solve a special kind of high-dimensional optimal transport problem. Specifically, they consider the case where features are grouped and the grouping is known a priori. The authors formulate the problem as the feature-robust optimal transport (FROT) problem and propose two solution algorithms, one based on the Frank-Wolfe method and one based on linear programming.
Pros: The connection to feature groups sounds interesting to me, as it has a natural connection to the structure of deep learning models. The presentation (other than the introduction) is easy to follow.
Cons: Note that the first point is the main contributing factor for my rating. Section 3.1 is very confusing, and it seems to me that the authors fail to establish the correct convergence guarantee. As on page 4, the target is min_Π max_α J(Π, α). If we fix Π, we can solve for the optimal α. Plugging this optimal α back in, we obtain G(Π). Intuitively one may choose to solve for α and Π alternatingly. However, the convergence result for G(Π) says nothing more than that, in a fixed iteration, one can solve exactly for the optimal α and up to ϵ accuracy for Π. We still don't know whether the solution of the algorithm indeed minimizes the stated loss. I checked the proof of Proposition 4: it just invokes the standard Frank-Wolfe convergence analysis from Jaggi 2013, and argues nothing about the alternating part. Note that, even though the two subproblems (for α and for Π) can be solved almost exactly, it can be non-trivial to establish the convergence of the entire alternating algorithm. Alternatively, maybe the authors want to argue that solving min_Π max_α J(Π, α) is equivalent to solving min_Π G(Π); however, this is also not obviously true to me. What are the other potential applications of FROT? While semantic correspondence is an interesting application, I find it hard to convince myself that FROT is better than Liu's 2020 CVPR work (requiring a validation dataset is not a big problem; you can always do a train-val split). With its similarity to the group lasso, FROT might have more interesting applications. The presentation of the introduction can be improved; I found it hard to parse the introduction until I had almost finished reading the entire paper. Putting Figure 1 on page 2 only creates more questions in my head instead of offering intuition. Also, it would be helpful if the authors could list their contributions in a more organized way. I didn't quite get the high-dimensional part: while 'high-dimensional' appears in the abstract, introduction, and conclusion, I didn't find the corresponding treatment in the main text. I didn't get the robust part either, other than the empirical performance in the evaluation section.
ICLR
Title Deep Convolution for Irregularly Sampled Temporal Point Clouds Abstract We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model’s flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines. 1 INTRODUCTION Many real-world problems feature observations that are sparse and irregularly sampled in both space and time. Weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points – forming a spatiotemporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to the other points in their spatio-temporal neighborhood. Modelling spatio-temporal point clouds is difficult with standard deep networks which assume observations are dense and regular – at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now face a multiplicative increase in the number of cells. Rather than forcing irregular data into dense representations, an emerging line of research treats spatial point-clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first that allows efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time. Our new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. 
Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence – one that considers the spatial neighborhood in a narrow temporal window and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can predict observations at arbitrary space-time, regardless of the distribution of observations. We demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after being trained exclusively on the associated prediction problem. The results show that TemporalPointConv outperforms both state of the art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions. 2 RELATED WORK Xingjian et al. (2015) gives an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which is different from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow for modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points. However, there is an assumption of uniform temporal sampling, which is violated in our setting. A significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) for modeling spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. They require sampling at continuous temporal intervals and did not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduces an attention model to deal with dynamic spatial relationships, however this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes the GCN and shares the same issues with the above GCN approaches. PointNet (Qi et al., 2017a) sparked significant interest in networks for 3D Point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018) with the highest performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a) with neither explicitly considering neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. 
(2020) proposed a set network approach for non-uniform time-series prediction, which encodes time into the feature vector of each point. Our experiments show that this approach is outperformed by our convolutional method. Sparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are only computed at locations with occupied points. The Minkowski network (Choy et al., 2019) is a sparse convolutional network that models spatio-temporal correlations by concatenating the spatial location and time for each point sample into a 4D tesseract. It is thus sensitive to an appropriate resolution for the discretization, since excess sparsity can result in empty neighborhoods and trivial convolutions, and too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulty accounting for the case where points should be treated as moving entities themselves. On the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization and hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposed an equivalent form that avoids explicit discretization and significantly improves scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection. On the temporal side, much of the recent state of the art is based on point processes, which study time-series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling but generally do not consider the spatial correlation among points. 3 PROBLEM SETUP We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let P be a spatio-temporal point cloud with each individual point pj ∈ P defined as pj = (lj , tj , oj), where pj exists at location lj at time tj and has associated features oj (e.g., temperature and humidity values for a weather station). Further, let Q be a set of query locations at which the model is to make predictions given P. For example, a forecasting model might be given queries qk = (lk, tk) for locations in the future and be tasked with predicting the corresponding features ok representing the desired properties to be predicted. We place no restrictions on the regularity of either P or Q, so this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in P, corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer such queries, represented via a training set of point-cloud / query-set pairs D = {(Pi, Qi)}Ni=1. 4 TEMPORAL POINTCONV ARCHITECTURE Given a spatio-temporal point-cloud containing points pj = (lj , tj , oj), a Temporal PointConv layer is an operator that produces an updated point representation p′j = (lj , tj , o′j) for each point.
The updated feature representation o′j incorporates information from a spatio-temporal neighborhood around pj . This is accomplished by applying two point-based convolutional operators in sequence for each point – first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model. 4.1 PRELIMINARIES: POINTCONV PointConv is based on the idea of discretizing continuous convolution on irregularly sampled points: Conv(P,p0;w, d(·, ·)) = ∑ pi∈Nd(p0) 〈w(pi − p0),oi〉 (1) where P is a point cloud with features at each point, w(·) is a vector-valued weight function of the positional difference between a point pi in the neighborhood Nd of a centroid p0, defined by a metric d, and oi is the input features at pi. w(·) can be learned with a neural network (Simonovsky & Komodakis, 2017). PointConv (Wu et al., 2019) introduces an equivalent form so that w does not need to be computed explicitly, saving computation and memory. This approach is flexible since w(·) as a function can apply to any point in the space of P , hence convolution can be computed over any irregularly sampled neighborhoodNd. We note that this even holds when we did not have any feature at p0, since a neighborhood can still be found even in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where p0 has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to p0 that are featureless query points. For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point-cloud P into a new point-cloud P ′ consisting of points at target locations Q with eq. (1): P ′ = PointConv(P,Q; d(·, ·)), where we will omit Q if Q = P . 4.2 TEMPORAL POINTCONV Given a spatio-temporal point-cloud Pin = {(lj , tj , o(in)j )|j} and set of queries Q, the Temporal PointConv operations considers the relative position from each query to the elements of Pin and their representative features to produce a set of predictions X corresponding to the query set Q. Spatial Convolution. First, each point’s feature is updated based on the spatial neighborhood of temporally co-occurring points. However, as the points may be irregularly spaced in time, there may be no points that precisely co-occur. We instead consider those in a fixed window of time. Thanks to the flexibility of PointConv operations, we describe this by defining the piece-wise distance function: dspatial(pi, pj) = { || li − lj ||2 if |ti − tj | ≤ t ∞ otherwise . (2) We then apply a PointConv operator to update features: Pspatial = PointConv(Pin; dspatial), where each point in Pspatial has updated feature (li, ti, o (s) i ). Temporal Convolution. We then perform an analogous operation through time. We would like to consider the past and future of each point; however, this requires determining correspondence between points through time. If the underlying point-cloud represents static points such as weather stations, this can simply be based on a small spatial window. If the points correspond to known entities that are moving, we instead assume tracking and can use those entity labels to determine temporal neighborhoods each consisting exclusively of a single entity’s samples throughout time. 
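As a concrete illustration of eq. (1) combined with the windowed spatial distance of eq. (2), the following naive sketch evaluates the convolution directly. The weight function w(·) is stood in for by a fixed linear map W (in the actual model it is a learned MLP, and PointConv uses a memory-efficient equivalent form), so all names and shapes here are assumptions.

```python
import numpy as np

def naive_spatial_pointconv(locs, times, feats, q_locs, q_times, radius, t_window, W):
    """Naive evaluation of eq. (1) with the d_spatial neighborhood of eq. (2).

    locs: (n, 2) locations l_i, times: (n,) times t_i, feats: (n, f) features o_i.
    W: (c, 2, f) linear stand-in for the learned weight function w(.)."""
    out = np.zeros((len(q_locs), W.shape[0]))
    for k, (q_l, q_t) in enumerate(zip(q_locs, q_times)):
        # Neighborhood: spatially close points that fall inside the temporal window.
        mask = (np.linalg.norm(locs - q_l, axis=1) <= radius) & \
               (np.abs(times - q_t) <= t_window)
        for dp, o in zip(locs[mask] - q_l, feats[mask]):
            w_dp = np.tensordot(W, dp, axes=([1], [0]))   # w(p_i - p_0): shape (c, f)
            out[k] += w_dp @ o                            # <w(p_i - p_0), o_i>
    return out
```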
For clarity, we present the distance function for the first case below: dtemporal(pi, pj) = { || ti − tj ||2 if || li − lj ||2 ≤ s, ∞ otherwise }. (3) Before applying the temporal PointConv, we first apply a residual connection for each point, concatenating the input and spatial features. We denote this as Pres = {(lj , tj , [o(in)j , o(s)j ]) | j}, where [·, ·] denotes concatenation. As before, we apply a PointConv operator with kernels defined only over differences in time: Ptemporal = PointConv(Pres; dtemporal(·, ·)), where Ptemporal = {(lj , tj , o(tmp)j ) | j}. Combined Representation. To compute the final output point-cloud, we concatenate the original, spatial, and temporal representations and transform them through an MLP f such that Pout = {(lj , tj , f([o(in)j , o(s)j , o(tmp)j ])) | j}. (4) We denote multiple stacked layers via P(d+1) = TemporalPointConv(P(d)). 4.3 EXTRAPOLATING TO NEW POINTS After applying one or more layers of Temporal PointConv as described above, we apply one final query PointConv to the latent spatio-temporal point cloud Pout resulting from this encoding process. For this, we define a new problem-dependent query distance function dquery(·, ·), which could be dspatial, dtemporal, or a combination of both. This enables us to calculate a corresponding latent feature y for each query point: Y = PointConv(Pout, Q; dquery(·, ·)). (5) Finally, we apply an MLP g to transform each latent query representation into a final prediction, giving X = {g(oy) | y ∈ Y} corresponding to the set of queries Q.
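The following is a structural sketch of one Temporal PointConv layer (eqs. 2–4), assuming a generic pointconv helper such as the naive one sketched above and an MLP stand-in mlp_f; it illustrates the data flow only and is not the authors' implementation. The query step of eq. (5) would apply one more such PointConv at the query locations, followed by the MLP g.

```python
import numpy as np

def temporal_pointconv_layer(locs, times, feats, pointconv, mlp_f, t_window, s_window):
    """One layer: spatial conv (eq. 2), temporal conv on the residual (eq. 3), MLP f (eq. 4).

    pointconv(locs, times, feats, kind, window) is an assumed helper that applies a
    PointConv pass with the corresponding piece-wise distance; mlp_f is a learned MLP."""
    # Spatial convolution: neighbors close in space within a narrow time band.
    o_s = pointconv(locs, times, feats, kind="spatial", window=t_window)
    # Residual concatenation [o_in, o_s], then temporal convolution: neighbors close in
    # time within a narrow spatial band (or belonging to the same tracked entity).
    o_res = np.concatenate([feats, o_s], axis=1)
    o_t = pointconv(locs, times, o_res, kind="temporal", window=s_window)
    # Combined representation: o'_j = f([o_in, o_s, o_tmp]).
    return mlp_f(np.concatenate([feats, o_s, o_t], axis=1))
```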
5 EXPERIMENTS We consider two problem domains for our experiments, which we describe below. Starcraft II. To evaluate TemporalPointConv on entity-based dynamics, we designed a custom Starcraft II scenario in which two opposing armies, consisting of random numbers of three distinct unit types, are created and then fight on a featureless battlefield. Each episode is allowed to run without any external influence until one team has been eliminated or the time limit expires. This allows us to learn the dynamics of a battle between a large group of units without any confounding factors such as player inputs. We use the PySC2 library (Vinyals et al., 2017) to record regular observations of the game state as each episode plays out. We use these regularly sampled episode histories to generate individual training examples. Specifically, we select a 'reference timestep' t within the episode, sample a set of 'history offsets' H from a provided history distribution, and a set of 'query offsets' R from a provided query distribution. We collect unit properties corresponding to these sampled relative time steps to serve as point features. We determine the prediction targets with the same procedure using the sampled query offsets. This procedure is used to sample an arbitrary number of training examples from the set of episode histories by varying the reference timestep t and re-sampling the history and query offsets as desired. Following this procedure on our dataset of 92,802 episodes yields 2.5 million training examples. We define the 'property loss' for a unit state prediction as the sum of the mean squared error of each of the unit's predicted numeric properties (i.e., health, shields, position) and the cross entropy loss of the unit's predicted categorical properties (i.e., orientation). Similarly, the 'alive loss' is the cross entropy loss between the network's alive/dead prediction values and a flag indicating whether the unit was present and alive in the given timestep. We then define the total loss for a set of unit state predictions as the sum of the alive loss for all units and the property loss for every unit that is actually alive at the given timesteps. This additional condition is necessary because dead units do not have recorded properties we can use to determine the property loss. As PySC2 assigns a unique, consistent ID to each unit, which provides perfect tracking across all timesteps, we use an entity-based temporal distance function when instantiating the query PointConv layer for this problem, as described in Section 4.2 above. Weather Nowcasting. To evaluate the ability of the TemporalPointConv architecture to reason about spatio-temporal dynamics, we derive weather nowcasting problems from a dataset of weather conditions as recorded by weather stations throughout Oklahoma. The original dataset consists of weather sensor readings from each weather station every five minutes throughout the entirety of the year 2008, associated quality metrics for each sensor in each reading, and metadata about each weather station such as its position and local soil properties. 10% of the weather stations are randomly selected to be held out as test stations and excluded from the training process, while the remaining 90% are used to generate problems for training. We derive training problems from the larger dataset by selecting a time point t and randomly selecting 10% of the remaining training stations to be targets. All non-target training station readings and their associated station metadata within the hour preceding t are collected as input weather data. Any sample within the collected data with an associated quality metric indicating a malfunctioning or missing sensor is discarded. Furthermore, we randomly discard an additional 20% of the remaining samples to decrease the level of time synchronization in the input. Following this procedure on our dataset of weather sensor readings results in over 14,000 training examples. The model is then tasked with predicting weather properties at time t for each of the target stations using the provided input data from the preceding hour. Specifically, the networks are evaluated on their ability to predict the relative humidity, air temperature, air pressure, and wind speed at each specified target location. We define the prediction loss as the sum of the mean square error between the network's prediction for each of these properties and the actual recorded values. Due to the large difference in magnitudes between these readings, we normalize each prediction and target measurement value such that the 10th percentile to 90th percentile range of that measurement within the entire dataset is mapped to the range [0, 1]. This prevents the training process from naturally favoring measurements with a much higher average magnitude than the others (a small sketch of this normalization is given below). As our queries for this problem are purely spatial, we use the spatial distance function eq. (2) as the query distance function when instantiating the query PointConv layer for this problem.
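A minimal sketch of the measurement normalization just described, assuming the readings are stored as a (samples × measurements) array; the column order and magnitudes are illustrative.

```python
import numpy as np

def percentile_normalize(values, lo_pct=10, hi_pct=90):
    """Map each measurement's [10th, 90th] percentile range onto [0, 1]."""
    lo = np.percentile(values, lo_pct, axis=0)
    hi = np.percentile(values, hi_pct, axis=0)
    return (values - lo) / (hi - lo)

# Example with columns: relative humidity, air temperature, air pressure, wind speed.
readings = np.random.rand(1000, 4) * np.array([100.0, 40.0, 1050.0, 30.0])
normalized = percentile_normalize(readings)
```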
5.1 BASELINE IMPLEMENTATIONS Set Functions for Time Series & DeepSets. Our Temporal PointConv architecture leverages PointConv as a convolution-equivalent set function. We can evaluate this choice by replacing each PointConv module with a different set function, such as DeepSets (Zaheer et al., 2017a) or Set Functions for Time Series (SeFT) (Horn et al., 2020). Whereas PointConv takes as input a set of point locations and a set of point features, SeFT and DeepSets only consume a single set of features. However, the neighborhood and distance function mechanisms introduced for Temporal PointConv can still be applied. Therefore, we evaluate the other set functions by simply replacing each instance of PointConv(P) with SeFT({[li, ti, oi] | i}) or DeepSets({[li, ti, oi] | i}). Minkowski Networks. We evaluate Minkowski networks (Choy et al., 2019) by replacing each spatial-temporal PointConv step with a Minkowski convolution layer that operates on the combined spatio-temporal vector space inhabited by the raw input samples. This necessarily requires discretizing said vector space into a sparse voxel grid. We choose a voxel resolution of 6 km for the weather domain and 0.05 in game units for the Starcraft domain. We use NVIDIA's MinkowskiEngine codebase to provide the Minkowski convolution implementation. We trained Temporal PointConv (TPC), Set Functions for Time Series (SeFT), DeepSets, and Minkowski networks instantiated with the hyperparameter settings described in Appendix B on both the Starcraft II and weather nowcasting domains. For the Starcraft II domain, models were trained for one epoch (owing to the massive size of the generated Starcraft II dataset), whereas for weather nowcasting they were trained for 24 epochs. All networks were trained with a cosine learning rate decay with warm restarts, configured such that the learning rate cycles from its maximum value to its minimum three times throughout each training run. 5.2 RESULTS Dynamics Prediction Accuracy. To evaluate prediction accuracy, three of each model were trained on both domains. Unless otherwise specified, the Starcraft history distribution was set to be a uniform distribution over [−10, −1] and the query distribution was set to fixed time offsets {1, 2, 4, 7}. Figure 2 shows the validation loss for each model throughout training, and Tables 1 and 2 show in detail the average error, for each individual query, that the final trained networks achieve on the test datasets. Our results show that TPC is significantly more accurate than the baseline algorithms, especially on the Starcraft II unit state prediction problem. In all cases, the Minkowski network was unable to outperform either of the set function-based models, and in the weather nowcasting domain it consistently failed to find a good solution, as indicated by a loss orders of magnitude higher than the set function approaches. We believe this failure is due to the difficulty of selecting a suitably sized kernel and voxelization resolution for a spatio-temporal problem at the scale of an entire state. We were unable to increase the size of the kernel without driving the network's parameter count prohibitively high, and we were unable to decrease the resolution of the voxelization without starting to 'lose' a significant number of weather stations that would be occupying the same cell. This result suggests that applying 'true' point cloud convolution that directly exploits sample positions is preferable for these domains, as opposed to discretizing or voxelizing the samples' locations so that a traditional fixed-size filter convolution such as Minkowski networks can be applied. Impact of Train and Test Distributions. We investigate the robustness of TPC to a change in the distribution of input samples or query points.
Since the TPC architecture is completely decoupled from the distribution of the input samples, we can accomplish this comparison by simply defining several distribution types, training a model with each type of input distribution on the Starcraft II domain, and comparing the results after evaluating each trained model across each of the input distribution types selected for evaluation. We selected four input distributions for evaluation: Two ‘fixed’ distributions that always return the same set of time offsets, the uniform distribution over the range [−10, 0], and half of a normal distribution over the range [−10, 0]. Figure 6 visualizes the difference between these distributions, and presents a bar chart plotting the average loss when each model is evaluated on each distribution type. In all cases, the query distribution was kept constant and fixed. The results show that TPC and SeFT trained on fixed distributions perform poorly when evaluated on any distribution it was not trained on, while the Minkowski network suffers much less of a penalty despite worse absolute performance. Alternatively, the networks trained on the uniform and normal distributions suffer much less degradation when switching to different input distributions. The only case with a noticeable performance drop is for networks trained on the normal distribution and evaluated on the uniform distribution, which is unsurprising since the normal distribution is biased toward t = 0. We perform a similar experiment to evaluate the behavior of TPC when trained on different query distributions. Figure 4 visualizes the query distributions selected for training alongside a plot of the average loss for each query by their offset from the reference time (e.g. t = 0). As before, the models trained on fixed distributions only consistently perform well on the exact query points they were trained on, with the model trained on Fixed1 distribution’s prediction error rising sharply as the distance from its small cluster of expected query points increases. In contrast, the model trained on the variable distributions saw a relatively small increase in prediction error, even for query points that are outside of the range of query points it was trained on. This suggests that the ability to train the TemporalPointConv architecture on randomized input and query distributions is key to enabling it to generalize well across timesteps and behave reasonably in off-distribution scenarios. Application to Anomaly Detection. We now consider the utility of our TPC model for anomaly detection, where the goal is to detect which samples in a temporal point cloud are anomalous. We focus on the weather dataset, where anomalies correspond to broken sensors. We introduce anomalies to the set of test station samples by randomly selecting 33% of the stations. For these, we randomly increase or decrease the value of one station property by a factor of 25%. The models are then tasked with predicting each of the test samples’ properties given the preceding hour of weather data. Their prediction error on each individual sample is then used as an anomaly score for detection purposes. As expected based on prior prediction results, TPC significantly outperforms SeFT owing to its superior nowcasting accuracy with an area under receiver-operator curve (AUROC) of 0.927 compared to SeFT’s 0.836. The Minkowski network struggles to perform above chance level. See appendix A for the complete ROC curves. 
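A small sketch of the anomaly-scoring protocol described above: the per-sample prediction error is used directly as the anomaly score and summarized with AUROC via scikit-learn; the array names are assumptions, not part of the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_auroc(predictions, observed, is_anomalous):
    """Score each test sample by its squared prediction error and compute AUROC."""
    scores = ((predictions - observed) ** 2).sum(axis=1)   # per-sample anomaly score
    return roc_auc_score(is_anomalous, scores)              # labels: 1 = perturbed sensor
```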
6 CONCLUSION In this work, we proposed a novel extension to the set function PointConv that enables it to be composed with standard deep learning layers to reason about irregularly sampled spatio-temporal processes and calculate predictions for arbitrary domain-specific queries. We show that TemporalPointConv's ability to directly consume each sample's positional and feature data without downsampling or discretization enables it to significantly outperform state-of-the-art sparse convolution algorithms across two complex, meaningfully different domains. Similarly, TemporalPointConv's equivalence to standard convolution enables it to reason more efficiently about relative spatial and temporal relationships than other set functions, which are not endowed with these useful properties. These promising results and TemporalPointConv's flexible parameterization suggest that it can be effectively applied to a wide range of problems whose irregular structure prevents most other deep learning approaches from functioning efficiently. A ANOMALY DETECTION ROC CURVES B HYPERPARAMETER SETTINGS C JOINT SPACE-TIME NEIGHBORHOODS Though TemporalPointConv decomposes spatio-temporal processes into separate 'space' and 'time' neighborhoods, this is not strictly necessary. Space and time could be combined into one single vector space, allowing a single PointConv layer to jointly consider samples' spatial and temporal distances when determining their local neighborhood. We investigate this possibility by training TemporalPointConv networks to do exactly that. This requires specifying a space-time distance function, which we define as Dst = √(Ds² + x·Dt²), where Ds and Dt are spatial and temporal distance functions, respectively. x then represents the tradeoff factor that dictates whether spatially distant samples should be favored over temporally distant samples when constructing a neighborhood. Specifically, we test three values of x for these 'combined' PointConv models: 0.2, 1, and 5. The results in figure C show that all of the networks with combined spatial-temporal neighborhood functions were outperformed by our approach, which considers spatial and temporal relationships separately but sequentially. Additionally, this combined distance function depends on a hyperparameter x, which is likely domain-specific and nontrivial to find a good value for. These results validate our decision to treat spatial and temporal distances separately.
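For completeness, a one-function sketch of the combined space-time metric from Appendix C, matching Dst = √(Ds² + x·Dt²); the Euclidean choices for Ds and Dt and the default tradeoff value are assumptions for illustration.

```python
import numpy as np

def combined_space_time_distance(l_i, t_i, l_j, t_j, x=1.0):
    """D_st = sqrt(D_s^2 + x * D_t^2) with Euclidean spatial and temporal distances."""
    d_s = np.linalg.norm(np.asarray(l_i) - np.asarray(l_j))   # spatial distance D_s
    d_t = abs(t_i - t_j)                                      # temporal distance D_t
    return np.sqrt(d_s ** 2 + x * d_t ** 2)
```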
1. What is the focus of the review, and what are the reviewer's main concerns regarding the paper? 2. What are the strengths and weaknesses of the proposed approach in the paper, according to the reviewer? 3. How does the reviewer assess the significance and novelty of the work compared to other works in the field? 4. What are the questions or concerns that the reviewer has regarding the experimental results and comparisons with other methods? 5. How does the reviewer evaluate the clarity, quality, and reproducibility of the paper's content?
Review
Review Summary: This paper proposes a new spatial-temporal point cloud processing technique, which extends the prior work of PointConv for spatial point processing to the temporal domain. Through experiments on two datasets, this paper shows improved performance over a few baselines. Paper Strengths: The direction of spatial-temporal point cloud processing is indeed underexplored, so I appreciate the efforts put into this direction in the paper. The ability to answer arbitrary prediction queries could be useful in different application domains. Paper Weaknesses: The technical contribution is limited. Essentially, the proposed method is just to apply the existing PointConv operator in both spatial and temporal dimensions. Also, regarding the literature of spatial-temporal point cloud processing, it would be nice if the paper could add some discussion/comparison with PointRNN [A1], which I think is the closest work to this paper, though to my knowledge it is not published yet (it has been on arXiv for more than a year). The proposed method seems to be problematic in modeling temporal dynamics. Specifically, the PointConv operator in Eq. 1 aggregates features within a neighborhood (weighted summation of the features), which does not consider the order of the points in the neighborhood, i.e., it is permutation-invariant. In the spatial point processing case, this is fine in some applications where we do not consider the direction but only the distances of neighborhood points. However, in the temporal domain (i.e., Eq. 3), I feel using PointConv does not make much sense as the order of time should matter in general. Essentially, Eq. 3 tells us that, if the centroid point is at frame t, then its corresponding neighbor point at frame t-H has the same distance measure as its corresponding neighbor point at frame t+H, which means that the direction of time is not considered. I feel that the formulation needs to inform the network of the time flow, i.e., the order of time. Also, in Eq. 2, the neighborhood considers points within a small time window but again discards the order of time. As a result, I do not think the spatial-temporal point processing proposed in this paper is properly modeling temporal dynamics. Even a simple GNN/PointNet + RNN [Li et al 2018b, A2] network structure considers the order of time in the RNN. A big problem I see with the proposed method is that it relies on a strong assumption about the point correspondences in time, which largely limits the practical impact of this paper. In this paper, the proposed method is shown to reason about high-level entities (e.g., represent an object or a Starcraft unit as a single point), which is fine as the correspondences can be obtained by object tracking in practice. However, how about the scenario where each point does not represent a high-level object but represents low-level observations, e.g., 3D point clouds obtained by LiDAR or stereo reconstruction? In such cases, it is extremely difficult to obtain the point correspondences across time as there could be no correspondences at all in the real world. Then the proposed method might have problems being applied to such data, which is widely used in many applications such as point cloud-based object detection, SLAM, and point cloud-based classification/segmentation/prediction. In contrast, the Minkowski Networks [Choy 2019] that this paper is compared to, as well as PointRNN [A1] and SPF [A2], can be applied to these more challenging scenarios. 
I am a bit concerned about the dataset used in the experiments, as the data in the Starcraft II dataset seems to be generated randomly, which is hard for follow-up work to reproduce and compare against. Would it be possible to evaluate the proposed method on the test set of a few public benchmarks (e.g., traffic prediction) that others can easily compare with, even without re-implementing the proposed method? The re-implementation of the Minkowski Networks is somewhat concerning. The paper claims that it is hard to find a suitable kernel size and voxelization resolution, and that increasing the kernel size leads to a prohibitively high number of network parameters. This leads me to wonder how large-scale the data is, e.g., how many points and how many frames the method needs to process. If the scale of the data is not too big, I guess there should not be a problem for the Minkowski Networks. In the original paper of the Minkowski Networks, they evaluated their method on a sequence of large-scale LiDAR point cloud data (usually around 50k points per frame), which seems fine and obtains strong performance. It would be nice to have a more detailed explanation of why the Minkowski Networks cannot be properly tuned. Justification: My decision is made mainly because I feel the proposed method seems to have some flaws and limitations. Also, I am not fully convinced by the experimental results due to the concerns about data and re-implementation, and the technical contribution is limited too. However, I would be happy to change my mind if there are any misunderstandings. References [A1] H. Fan and Y. Yang. PointRNN: Point Recurrent Neural Network for Moving Point Cloud Processing. arXiv 2019. [A2] Weng et al. Inverting the Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting. CoRL 2020. Post-rebuttal Review As there is no response submitted by the authors, I would like to stick to my original rating to reject this paper.
ICLR
Title Deep Convolution for Irregularly Sampled Temporal Point Clouds Abstract We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model’s flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines. 1 INTRODUCTION Many real-world problems feature observations that are sparse and irregularly sampled in both space and time. Weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points – forming a spatiotemporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to the other points in their spatio-temporal neighborhood. Modelling spatio-temporal point clouds is difficult with standard deep networks which assume observations are dense and regular – at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now face a multiplicative increase in the number of cells. Rather than forcing irregular data into dense representations, an emerging line of research treats spatial point-clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first that allows efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time. Our new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. 
Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence – one that considers the spatial neighborhood in a narrow temporal window and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can predict observations at arbitrary space-time, regardless of the distribution of observations. We demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after being trained exclusively on the associated prediction problem. The results show that TemporalPointConv outperforms both state of the art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions. 2 RELATED WORK Xingjian et al. (2015) gives an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which is different from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow for modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points. However, there is an assumption of uniform temporal sampling, which is violated in our setting. A significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) for modeling spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. They require sampling at continuous temporal intervals and did not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduces an attention model to deal with dynamic spatial relationships, however this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes the GCN and shares the same issues with the above GCN approaches. PointNet (Qi et al., 2017a) sparked significant interest in networks for 3D Point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018) with the highest performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a) with neither explicitly considering neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. 
(2020) proposed a set network approach for non-uniform time-series prediction, which encodes time into the feature vector of points. Our experiments show that this approach is outperformed by our convolutional method. Sparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are only computed at locations with occupied points. Minkowski networks (Choy et al., 2019) is a sparse convolutional network that models spatio- temporal correlations by concatentating the spatial location and time for each point sample into a 4D tesseract. It is thus sensitive to an appropriate resolution for the discretization since excess sparsity can result in empty neighborhoods and trivial convolutions, and too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulties accounting for the case where points should be treated as moving entities themselves. On the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposes an equivalent form that avoids explicit discretization and significantly improved scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv, by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection. On the temporal side, a significant amount of recent state-of-the-art were based on point processes which studies time series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling, but generally do not consider the spatial correlation among points. 3 PROBLEM SETUP We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let P be a spatio-temporal point cloud with each individual point pj ∈ P defined as pj = (lj , tj , oj) where pj exists at location lj at time tj and has associated features oj (e.g. temperature and humidity values for a weather station). Further, let Q be a set of query locations at which the model is to make predictions given P . For example, a forecasting model might be given queries qk = (lk, tk) for locations in the future and be tasked with predicting the corresponding features ok representing the desired properties to be predicted. We place no restrictions on the regularity of either P or Q such that this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in P – corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer queries as represented via training set of point-cloud / query-set pairs D = {(Pi, Qi)}Ni=1. 4 TEMPORAL POINTCONV ARCHITECTURE Given a spatio-temporal point-cloud containing points pj = (lj , tj , oj), a Temporal PointConv layer is an operator that produces an updated point representation p′j = (lj , tj , o ′ j) for each point. 
The updated feature representation o′j incorporates information from a spatio-temporal neighborhood around pj . This is accomplished by applying two point-based convolutional operators in sequence for each point – first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model. 4.1 PRELIMINARIES: POINTCONV PointConv is based on the idea of discretizing continuous convolution on irregularly sampled points: Conv(P,p0;w, d(·, ·)) = ∑ pi∈Nd(p0) 〈w(pi − p0),oi〉 (1) where P is a point cloud with features at each point, w(·) is a vector-valued weight function of the positional difference between a point pi in the neighborhood Nd of a centroid p0, defined by a metric d, and oi is the input features at pi. w(·) can be learned with a neural network (Simonovsky & Komodakis, 2017). PointConv (Wu et al., 2019) introduces an equivalent form so that w does not need to be computed explicitly, saving computation and memory. This approach is flexible since w(·) as a function can apply to any point in the space of P , hence convolution can be computed over any irregularly sampled neighborhoodNd. We note that this even holds when we did not have any feature at p0, since a neighborhood can still be found even in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where p0 has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to p0 that are featureless query points. For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point-cloud P into a new point-cloud P ′ consisting of points at target locations Q with eq. (1): P ′ = PointConv(P,Q; d(·, ·)), where we will omit Q if Q = P . 4.2 TEMPORAL POINTCONV Given a spatio-temporal point-cloud Pin = {(lj , tj , o(in)j )|j} and set of queries Q, the Temporal PointConv operations considers the relative position from each query to the elements of Pin and their representative features to produce a set of predictions X corresponding to the query set Q. Spatial Convolution. First, each point’s feature is updated based on the spatial neighborhood of temporally co-occurring points. However, as the points may be irregularly spaced in time, there may be no points that precisely co-occur. We instead consider those in a fixed window of time. Thanks to the flexibility of PointConv operations, we describe this by defining the piece-wise distance function: dspatial(pi, pj) = { || li − lj ||2 if |ti − tj | ≤ t ∞ otherwise . (2) We then apply a PointConv operator to update features: Pspatial = PointConv(Pin; dspatial), where each point in Pspatial has updated feature (li, ti, o (s) i ). Temporal Convolution. We then perform an analogous operation through time. We would like to consider the past and future of each point; however, this requires determining correspondence between points through time. If the underlying point-cloud represents static points such as weather stations, this can simply be based on a small spatial window. If the points correspond to known entities that are moving, we instead assume tracking and can use those entity labels to determine temporal neighborhoods each consisting exclusively of a single entity’s samples throughout time. 
For clarity, we present the distance function for the first case below: dtemporal(pi, pj) = { || ti − tj ||2 if || li − lj ||2 ≤ s ∞ otherwise . (3) Before applying the temporal PointConv, we first apply a residual connection for each point, concatenating the input and spatial features. We denote this as Pres = {(lj , tj , [o(in)j , o (s) j ]) | j} where [·, ·] denotes concatenation. As before, we apply a PointConv operator with kernels defined only over differences in time as: Ptemporal = PointConv(Pres; dtemporal(·, ·)), where Ptemporal = {(lj , tj , o(tmp)j ])|j}. Combined Representation. To compute the final output point-cloud, we concatenate the original, spatial, and temporal representations and transform them through an MLP f such that Pout = {(lj , tj , f([o(in)j , o (s) j , o (tmp) j ]) | j}. (4) We denote multiple stacked layers via P (d+1) = TemporalPointConv(P (d)). 4.3 EXTRAPOLATING TO NEW POINTS After applying one or more layers of Temporal PointConv as described above, we apply one final query PointConv to the latent spatio-temporal point cloud Pout resulting from this encoding process. For this, we define a new problem-dependent query distance function dquery(·, ·), which could be dspatial, dtemporal, or a combination of both. This enables us to calculate a corresponding latent feature y for the each query point. Y = PointConv(Pout, Q; dquery(·, ·)) (5) Finally, we apply an MLP g to transform each latent query representation into a final predictions X = {g(oy)|y ∈ Y } corresponding to the set of queries Q. 5 EXPERIMENTS We consider two problem domains for our experiments which we describe below. Starcraft II. To evaluate TemporalPointConv on entity-based dynamics, we designed a custom Starcraft II scenario in which two opposing armies consisting of random numbers of three distinct unit types are created and then fight on a featureless battlefield. Each episode is allowed to run without any external influence until one team has been eliminated or the time limit expires. This allows us to learn the dynamics of a battle between a large group of units without any confounding factors such as player inputs. We use the PySC2 library (Vinyals et al., 2017) to record regular observations of the game state as each episode plays out. We use these regularly sampled episode histories to generate individual training examples. Specifically, we select a ‘reference timestep’ t within the episode, sample a set of ‘history offsets’ H from a provided history distribution, and a set of ‘query offsets’R from a provided query distribution. We collect unit properties corresponding to these sampled relative time steps to serve as point features. We determine the prediction targets with the same procedure using the sampled query offsets. This procedure is used to sample an arbitrary number of training examples from the set of episode histories by varying the reference timestep t and re-sampling the history and query offsets as desired. Following this procedure on our dataset of 92,802 episodes yields 2.5 million training examples. We define the ‘property loss’ for a unit state prediction as the sum of the mean squared error of each of the unit’s predicted numeric properties (i.e. health, shields, position) and the cross entropy loss of the unit’s predicted categorical properties (i.e. orientation). Similarly, the ‘alive loss’ is the cross entropy loss between the network’s alive/dead prediction values and a flag indicating if the unit was present and alive in the given timestep. 
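For concreteness, a minimal sketch of the two loss components just described is given below; it assumes simple tensor layouts (numeric properties as float tensors, orientation as class logits, and a per-unit alive flag), which may differ from the exact shapes and property set used in our implementation.

```python
import torch
import torch.nn.functional as F

def property_loss(pred_numeric, true_numeric, pred_orient_logits, true_orient):
    # MSE over numeric unit properties (e.g. health, shields, position) ...
    mse = F.mse_loss(pred_numeric, true_numeric, reduction="mean")
    # ... plus cross entropy over categorical properties (e.g. orientation bins).
    ce = F.cross_entropy(pred_orient_logits, true_orient)
    return mse + ce

def alive_loss(pred_alive_logits, alive_flag):
    # Cross entropy between the alive/dead prediction and the ground-truth flag.
    return F.binary_cross_entropy_with_logits(pred_alive_logits, alive_flag.float())

# Tiny example with 4 units, 3 numeric properties and 8 orientation bins (shapes are assumptions).
pn, tn = torch.randn(4, 3), torch.randn(4, 3)
po, to = torch.randn(4, 8), torch.randint(0, 8, (4,))
pa, ta = torch.randn(4), torch.randint(0, 2, (4,))
print(property_loss(pn, tn, po, to).item(), alive_loss(pa, ta).item())
```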
We then define the total loss for a set of unit state predictions as the sum of the alive loss for all units and with property loss for every unit that is actually alive at the given timesteps. This additional condition is necessary due to the fact that dead units do not have recorded properties we can use to determine property loss. As PySC2 assigns a unique, consistent ID to each unit which provides perfect tracking across all timesteps, we use an entity-based temporal distance function when instantiating the query PointConv layer for this problem as described in section 4.2 above. Weather Nowcasting. To evaluate the ability of the TemporalPointConv architecture to reason about spatio-temporal dynamics, we derive weather nowcasting problems from a dataset of weather conditions as recorded by weather stations throughout Oklahoma. The original dataset consists of weather sensor readings from each weather station every five minutes throughout the entirety of the year 2008, associated quality metrics for each sensor in each reading, and metadata about each weather station such as its position and local soil properties. 10% of the weather stations are randomly selected to be held out as test stations and excluded from the training process, while the remaining 90% are used to generate problems for training. We derive training problems from the larger dataset by selecting a time point t and randomly selecting 10% of the remaining training stations to be targets. All non-target training station readings and their associated station metadata within the hour preceding t are collected as input weather data. Any sample within the collected data with an associated quality metric indicating a malfunctioning or missing sensor is discarded. Furthermore, we randomly discard an additional 20% of the remaining samples to decrease the level of time synchronization in the input. Following this procedure on our dataset of weather sensor readings results in over 14,000 training examples. The model is then tasked with predicting weather properties at time t for each of the target stations using the provided input data from the preceding hour. Specifically, the networks are evaluated on their ability to predict the relative humidity, air temperature, air pressure, and wind speed at each specified target location. We define the prediction loss as the sum of the mean square error between the network’s prediction for each of these properties and the actual recorded values. Due to the large difference in magnitudes between these readings, normalize each prediction and target measurement value such that the 10th percentile value to 90th percentile value of that measurement within the entire dataset is mapped to the range [0, 1]. This prevents the training process from naturally favoring measurements with a much higher average magnitude than the others. As our queries for this problem are purely spatial, we use the spatial distance function eq.(2) as the query distance function when instantiating the query PointConv layer for this problem. 5.1 BASELINE IMPLEMENTATIONS Set Functions for Time Series & DeepSets. Our Temporal PointConv architecture leverages PointConv as a convolution-equivalent set function. We can evaluate this choice by replacing each PointConv module with a different set function, such as DeepSets (Zaheer et al., 2017a) or Set Functions for Time Series (SeFT) (Horn et al., 2020). 
Whereas PointConv takes as input a set of point locations and a set of point features, SeFT and DeepSets only consume a single set of features. However, the neighborhood and distance function mechanisms introduced for Temporal PointConv can still be applied. Therefore, we evaluate the other set functions by simply replacing each instance of PointConv(P ) with SeFT ({[li, ti, oi]|i}) or DeepSets({[li, ti, oi]|i}). Minkowski Networks. We evaluate Minkowski networks (Choy et al., 2019) by replacing each spatial-temporal PointConv step with a Minkowski convolution layer that operates on the combined spatio-temporal vector space inhabited by the raw input samples. This necessarily requires discretizing said vector space into a sparse voxel grid. We choose a voxel resolution of 6km for the weather domain, and 0.05 in game units for the starcraft domain. We use nVidia’s MinkowskiEngine codebase to provide the Minkowski convolution implementation. We trained Temporal PointConv (TPC), Set Function for Time Series (SeFT), DeepSets, and Minkowski networks instantiated with the hyperparameter settings described in appendix B on both the Starcraft II and weather nowcasting domains. For the Starcraft II domain, models were trained for one epoch (owing to the massive size of the generated Starcraft II dataset), whereas for weather nowcasting they were trained for 24 epochs. All networks were trained with a cosine learning rate decay with warm restarts configured such that the learning rate cycles from its maximum value to its minimum three times throughout each training run. 5.2 RESULTS Dynamics Prediction Accuracy. To evaluate prediction accuracy, three of each model were trained on both domains. Unless otherwise specified, the Starcraft history distribution was set to be a uniform distribution over [−10,−1] and the query distribution was set to fixed time offsets {1, 2, 4, 7}. Figure 2 shows the validation loss for each model throughout training, and tables 1 and 2 show in detail the average error across each individual query the final trained networks predict for the test datasets. Our results show that TPC is significantly more accurate than the baseline algorithms, es- pecially on the Starcraft II unit state prediction problem. In all cases, the Minkowski network was unable to outperform either of the set function-based models, and in the weather nowcasting domain it consistently failed to find a good solution, as indicated by the loss orders of magnitude higher than the set function approaches. We believe this failure is due to the difficulty of selecting a suitably sized kernel and voxelization resolution for a spatio-temporal problem at the scale of an entire state. We were unable to increase the size of the kernel without driving the network’s parameter count prohibitively high, and we were unable to decrease the resolution of voxelization without starting to ‘lose’ a significant number of weather stations which would be occupying the same cell. This result suggests that applying ‘true’ point cloud convolution that directly exploits sample positions is preferable for these domains, as opposed to discretizing or voxelizing the samples’ locations so that a traditional fixed-size filter convolution such as Minkowski networks can be applied. Impact of Train and Test Distributions. We investigate the robustness of TPC to a change in the distribution of input samples or query points. 
Since the TPC architecture is completely decoupled from the distribution of the input samples, we can accomplish this comparison by simply defining several distribution types, training a model with each type of input distribution on the Starcraft II domain, and comparing the results after evaluating each trained model across each of the input distribution types selected for evaluation. We selected four input distributions for evaluation: Two ‘fixed’ distributions that always return the same set of time offsets, the uniform distribution over the range [−10, 0], and half of a normal distribution over the range [−10, 0]. Figure 6 visualizes the difference between these distributions, and presents a bar chart plotting the average loss when each model is evaluated on each distribution type. In all cases, the query distribution was kept constant and fixed. The results show that TPC and SeFT trained on fixed distributions perform poorly when evaluated on any distribution it was not trained on, while the Minkowski network suffers much less of a penalty despite worse absolute performance. Alternatively, the networks trained on the uniform and normal distributions suffer much less degradation when switching to different input distributions. The only case with a noticeable performance drop is for networks trained on the normal distribution and evaluated on the uniform distribution, which is unsurprising since the normal distribution is biased toward t = 0. We perform a similar experiment to evaluate the behavior of TPC when trained on different query distributions. Figure 4 visualizes the query distributions selected for training alongside a plot of the average loss for each query by their offset from the reference time (e.g. t = 0). As before, the models trained on fixed distributions only consistently perform well on the exact query points they were trained on, with the model trained on Fixed1 distribution’s prediction error rising sharply as the distance from its small cluster of expected query points increases. In contrast, the model trained on the variable distributions saw a relatively small increase in prediction error, even for query points that are outside of the range of query points it was trained on. This suggests that the ability to train the TemporalPointConv architecture on randomized input and query distributions is key to enabling it to generalize well across timesteps and behave reasonably in off-distribution scenarios. Application to Anomaly Detection. We now consider the utility of our TPC model for anomaly detection, where the goal is to detect which samples in a temporal point cloud are anomalous. We focus on the weather dataset, where anomalies correspond to broken sensors. We introduce anomalies to the set of test station samples by randomly selecting 33% of the stations. For these, we randomly increase or decrease the value of one station property by a factor of 25%. The models are then tasked with predicting each of the test samples’ properties given the preceding hour of weather data. Their prediction error on each individual sample is then used as an anomaly score for detection purposes. As expected based on prior prediction results, TPC significantly outperforms SeFT owing to its superior nowcasting accuracy with an area under receiver-operator curve (AUROC) of 0.927 compared to SeFT’s 0.836. The Minkowski network struggles to perform above chance level. See appendix A for the complete ROC curves. 
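For reference, the following sketch shows one way the four history-offset distributions compared in this section could be sampled; the particular fixed offsets and the half-normal scale are illustrative assumptions, while the [−10, 0] range and the distribution families come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_history_offsets(kind, n=10, low=-10.0, high=0.0):
    if kind == "fixed1":
        return np.linspace(low, high, n)            # assumed fixed offsets
    if kind == "fixed2":
        return np.arange(-n, 0, dtype=float)        # assumed fixed offsets
    if kind == "uniform":
        return rng.uniform(low, high, size=n)       # uniform over [-10, 0]
    if kind == "half_normal":
        # Half of a normal, folded so offsets concentrate near t = 0 (scale is an assumption).
        return np.clip(-np.abs(rng.normal(0.0, 3.0, size=n)), low, high)
    raise ValueError(kind)

for kind in ("fixed1", "fixed2", "uniform", "half_normal"):
    print(kind, np.round(sample_history_offsets(kind, 5), 2))
```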
6 CONCLUSION In this work, we proposed a novel extension to the set function PointConv that enables it to be composed with standard deep learning layers to reason about irregularly sampled spatio-temporal processes and calculate predictions for arbitrary domain-specific queries. We show that TemporalPointConv’s ability to directly consume each sample’s positional and feature data without downsampling or discretization enables it to significantly outperform state-of-the-art sparse convolution algorithms across two complex, meaningfully different domains. Similarly, TemporalPointConv’s equivalence to standard convolution enables it to more efficiently reason about relative spatial and temporal relationships than other set functions which are not endowed with these useful properties. These promising results and TemporalPointConv’s flexible parameterization suggest that it can be effectively applied to a wide range of problems with an irregular structure that prevents most other deep learning approaches from functioning efficiently. A ANOMALY DETECTION ROC CURVES B HYPERPARAMETER SETTINGS C JOINT SPACE-TIME NEIGHBORHOODS Though TemporalPointConv decomposes spatio-temporal processes into separate ‘space’ and ‘time’ neighborhoods, this is not strictly necessary. Space and time could be combined into one single vector space, allowing a single PointConv layer to jointly consider samples’ spatial and temporal distances to determine their local neighborhood. We investigate this possibility by training TemporalPointConv networks to do exactly that. This requires specifying a space-time distance function, which we define as Dst = √(Ds² + x·Dt²), where Ds and Dt are spatial and temporal distance functions, respectively. x then represents the tradeoff factor that dictates whether distant spatial samples should be favored over temporally distant samples when constructing a neighborhood. Specifically, we test three values of x for these ‘combined’ PointConv models: 0.2, 1, and 5. The results in figure C show that all of the networks with combined spatial-temporal neighborhood functions were outperformed by our approach, which considers spatial and temporal relationships separately but sequentially. Additionally, this combined distance function depends on a hyperparameter x which is likely domain-specific and nontrivial to find a good value for. These results validate our decision to treat spatial and temporal distances separately.
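To illustrate how the tradeoff factor x reshapes a joint space-time neighborhood, here is a small sketch that selects the k nearest samples under Dst for the three values of x tested above; the sample coordinates are made up for illustration.

```python
import numpy as np

def knn_spacetime(points, center, x=1.0, k=3):
    """Select the k nearest samples under D_st = sqrt(D_s^2 + x * D_t^2).

    points, center: (x, y, t) coordinates.
    """
    pts = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    d_s = np.linalg.norm(pts[:, :2] - c[:2], axis=1)   # spatial distance
    d_t = np.abs(pts[:, 2] - c[2])                     # temporal distance
    d_st = np.sqrt(d_s ** 2 + x * d_t ** 2)
    return np.argsort(d_st)[:k]

samples = [(0, 0, -8), (1, 1, -1), (5, 5, 0), (0.5, 0, -6), (2, 2, -2)]
center = (0, 0, 0)
for x in (0.2, 1.0, 5.0):
    print(x, knn_spacetime(samples, center, x))   # neighborhood membership shifts with x
```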
1. What is the focus of the paper regarding spatial-temporal point clouds? 2. What are the strengths of the proposed Temporal PointConv model? 3. What are the weaknesses of the paper, particularly concerning its technical contribution? 4. Do you have any questions about the experimental results or the presentation of the paper?
Review
Review This paper studies the problem of modeling spatial-temporal point clouds, which are sampled at irregular space and time points. It proposes the Temporal PointConv model, which is an extension of the PointConv model (Wu et al., 2019). In particular, PointConv computes a convolution by aggregating the features of nearby points of a point p as the new feature of p. Temporal PointConv extends this by aggregating the features of points near p in both space and time in a two-step process: first weighting the aggregation by the spatial distance and then weighting the aggregation by the temporal distance. Experiments on two datasets show that Temporal PointConv outperforms the baseline models in prediction accuracy. Pros: The paper studies an interesting topic. Spatial-temporal point cloud modelling has many applications including weather forecasting as shown in the paper. The experimental results are good. The proposed model achieves substantial improvement in terms of prediction accuracy. The experiments done in a gaming setting (Starcraft II) are interesting. The paper is well written and easy to follow. Cons: The paper proposes a simple and effective model. However, a concern is its technical contribution. As discussed above, the proposed Temporal PointConv model is a direct extension of the PointConv model. The technical contribution is too little to justify a publication in a top-tier conference. Additional comments: The Abstract claims that the proposed model achieved improved efficiency compared to state-of-the-art baselines, but there is no experimental result reported on model efficiency. Grammar: "do not have recorded properties we can use to determine property loss." => "do not have recorded properties which we can use to determine property loss." Typo: "tables 1 and 2" => "Tables 1 and 2"
ICLR
Title Deep Convolution for Irregularly Sampled Temporal Point Clouds Abstract We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model’s flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines. 1 INTRODUCTION Many real-world problems feature observations that are sparse and irregularly sampled in both space and time. Weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points – forming a spatiotemporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to the other points in their spatio-temporal neighborhood. Modelling spatio-temporal point clouds is difficult with standard deep networks which assume observations are dense and regular – at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now face a multiplicative increase in the number of cells. Rather than forcing irregular data into dense representations, an emerging line of research treats spatial point-clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first that allows efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time. Our new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. 
Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence – one that considers the spatial neighborhood in a narrow temporal window and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can predict observations at arbitrary space-time, regardless of the distribution of observations. We demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after being trained exclusively on the associated prediction problem. The results show that TemporalPointConv outperforms both state of the art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions. 2 RELATED WORK Xingjian et al. (2015) gives an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which is different from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow for modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points. However, there is an assumption of uniform temporal sampling, which is violated in our setting. A significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) for modeling spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. They require sampling at continuous temporal intervals and did not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduces an attention model to deal with dynamic spatial relationships, however this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes the GCN and shares the same issues with the above GCN approaches. PointNet (Qi et al., 2017a) sparked significant interest in networks for 3D Point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018) with the highest performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a) with neither explicitly considering neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. 
(2020) proposed a set network approach for non-uniform time-series prediction, which encodes time into the feature vector of points. Our experiments show that this approach is outperformed by our convolutional method. Sparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are only computed at locations with occupied points. Minkowski networks (Choy et al., 2019) is a sparse convolutional network that models spatio- temporal correlations by concatentating the spatial location and time for each point sample into a 4D tesseract. It is thus sensitive to an appropriate resolution for the discretization since excess sparsity can result in empty neighborhoods and trivial convolutions, and too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulties accounting for the case where points should be treated as moving entities themselves. On the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposes an equivalent form that avoids explicit discretization and significantly improved scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv, by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection. On the temporal side, a significant amount of recent state-of-the-art were based on point processes which studies time series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling, but generally do not consider the spatial correlation among points. 3 PROBLEM SETUP We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let P be a spatio-temporal point cloud with each individual point pj ∈ P defined as pj = (lj , tj , oj) where pj exists at location lj at time tj and has associated features oj (e.g. temperature and humidity values for a weather station). Further, let Q be a set of query locations at which the model is to make predictions given P . For example, a forecasting model might be given queries qk = (lk, tk) for locations in the future and be tasked with predicting the corresponding features ok representing the desired properties to be predicted. We place no restrictions on the regularity of either P or Q such that this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in P – corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer queries as represented via training set of point-cloud / query-set pairs D = {(Pi, Qi)}Ni=1. 4 TEMPORAL POINTCONV ARCHITECTURE Given a spatio-temporal point-cloud containing points pj = (lj , tj , oj), a Temporal PointConv layer is an operator that produces an updated point representation p′j = (lj , tj , o ′ j) for each point. 
The updated feature representation o′j incorporates information from a spatio-temporal neighborhood around pj . This is accomplished by applying two point-based convolutional operators in sequence for each point – first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model. 4.1 PRELIMINARIES: POINTCONV PointConv is based on the idea of discretizing continuous convolution on irregularly sampled points: Conv(P,p0;w, d(·, ·)) = ∑ pi∈Nd(p0) 〈w(pi − p0),oi〉 (1) where P is a point cloud with features at each point, w(·) is a vector-valued weight function of the positional difference between a point pi in the neighborhood Nd of a centroid p0, defined by a metric d, and oi is the input features at pi. w(·) can be learned with a neural network (Simonovsky & Komodakis, 2017). PointConv (Wu et al., 2019) introduces an equivalent form so that w does not need to be computed explicitly, saving computation and memory. This approach is flexible since w(·) as a function can apply to any point in the space of P , hence convolution can be computed over any irregularly sampled neighborhoodNd. We note that this even holds when we did not have any feature at p0, since a neighborhood can still be found even in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where p0 has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to p0 that are featureless query points. For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point-cloud P into a new point-cloud P ′ consisting of points at target locations Q with eq. (1): P ′ = PointConv(P,Q; d(·, ·)), where we will omit Q if Q = P . 4.2 TEMPORAL POINTCONV Given a spatio-temporal point-cloud Pin = {(lj , tj , o(in)j )|j} and set of queries Q, the Temporal PointConv operations considers the relative position from each query to the elements of Pin and their representative features to produce a set of predictions X corresponding to the query set Q. Spatial Convolution. First, each point’s feature is updated based on the spatial neighborhood of temporally co-occurring points. However, as the points may be irregularly spaced in time, there may be no points that precisely co-occur. We instead consider those in a fixed window of time. Thanks to the flexibility of PointConv operations, we describe this by defining the piece-wise distance function: dspatial(pi, pj) = { || li − lj ||2 if |ti − tj | ≤ t ∞ otherwise . (2) We then apply a PointConv operator to update features: Pspatial = PointConv(Pin; dspatial), where each point in Pspatial has updated feature (li, ti, o (s) i ). Temporal Convolution. We then perform an analogous operation through time. We would like to consider the past and future of each point; however, this requires determining correspondence between points through time. If the underlying point-cloud represents static points such as weather stations, this can simply be based on a small spatial window. If the points correspond to known entities that are moving, we instead assume tracking and can use those entity labels to determine temporal neighborhoods each consisting exclusively of a single entity’s samples throughout time. 
For clarity, we present the distance function for the first case below: dtemporal(pi, pj) = { || ti − tj ||2 if || li − lj ||2 ≤ s ∞ otherwise . (3) Before applying the temporal PointConv, we first apply a residual connection for each point, concatenating the input and spatial features. We denote this as Pres = {(lj , tj , [o(in)j , o (s) j ]) | j} where [·, ·] denotes concatenation. As before, we apply a PointConv operator with kernels defined only over differences in time as: Ptemporal = PointConv(Pres; dtemporal(·, ·)), where Ptemporal = {(lj , tj , o(tmp)j ])|j}. Combined Representation. To compute the final output point-cloud, we concatenate the original, spatial, and temporal representations and transform them through an MLP f such that Pout = {(lj , tj , f([o(in)j , o (s) j , o (tmp) j ]) | j}. (4) We denote multiple stacked layers via P (d+1) = TemporalPointConv(P (d)). 4.3 EXTRAPOLATING TO NEW POINTS After applying one or more layers of Temporal PointConv as described above, we apply one final query PointConv to the latent spatio-temporal point cloud Pout resulting from this encoding process. For this, we define a new problem-dependent query distance function dquery(·, ·), which could be dspatial, dtemporal, or a combination of both. This enables us to calculate a corresponding latent feature y for the each query point. Y = PointConv(Pout, Q; dquery(·, ·)) (5) Finally, we apply an MLP g to transform each latent query representation into a final predictions X = {g(oy)|y ∈ Y } corresponding to the set of queries Q. 5 EXPERIMENTS We consider two problem domains for our experiments which we describe below. Starcraft II. To evaluate TemporalPointConv on entity-based dynamics, we designed a custom Starcraft II scenario in which two opposing armies consisting of random numbers of three distinct unit types are created and then fight on a featureless battlefield. Each episode is allowed to run without any external influence until one team has been eliminated or the time limit expires. This allows us to learn the dynamics of a battle between a large group of units without any confounding factors such as player inputs. We use the PySC2 library (Vinyals et al., 2017) to record regular observations of the game state as each episode plays out. We use these regularly sampled episode histories to generate individual training examples. Specifically, we select a ‘reference timestep’ t within the episode, sample a set of ‘history offsets’ H from a provided history distribution, and a set of ‘query offsets’R from a provided query distribution. We collect unit properties corresponding to these sampled relative time steps to serve as point features. We determine the prediction targets with the same procedure using the sampled query offsets. This procedure is used to sample an arbitrary number of training examples from the set of episode histories by varying the reference timestep t and re-sampling the history and query offsets as desired. Following this procedure on our dataset of 92,802 episodes yields 2.5 million training examples. We define the ‘property loss’ for a unit state prediction as the sum of the mean squared error of each of the unit’s predicted numeric properties (i.e. health, shields, position) and the cross entropy loss of the unit’s predicted categorical properties (i.e. orientation). Similarly, the ‘alive loss’ is the cross entropy loss between the network’s alive/dead prediction values and a flag indicating if the unit was present and alive in the given timestep. 
We then define the total loss for a set of unit state predictions as the sum of the alive loss for all units and with property loss for every unit that is actually alive at the given timesteps. This additional condition is necessary due to the fact that dead units do not have recorded properties we can use to determine property loss. As PySC2 assigns a unique, consistent ID to each unit which provides perfect tracking across all timesteps, we use an entity-based temporal distance function when instantiating the query PointConv layer for this problem as described in section 4.2 above. Weather Nowcasting. To evaluate the ability of the TemporalPointConv architecture to reason about spatio-temporal dynamics, we derive weather nowcasting problems from a dataset of weather conditions as recorded by weather stations throughout Oklahoma. The original dataset consists of weather sensor readings from each weather station every five minutes throughout the entirety of the year 2008, associated quality metrics for each sensor in each reading, and metadata about each weather station such as its position and local soil properties. 10% of the weather stations are randomly selected to be held out as test stations and excluded from the training process, while the remaining 90% are used to generate problems for training. We derive training problems from the larger dataset by selecting a time point t and randomly selecting 10% of the remaining training stations to be targets. All non-target training station readings and their associated station metadata within the hour preceding t are collected as input weather data. Any sample within the collected data with an associated quality metric indicating a malfunctioning or missing sensor is discarded. Furthermore, we randomly discard an additional 20% of the remaining samples to decrease the level of time synchronization in the input. Following this procedure on our dataset of weather sensor readings results in over 14,000 training examples. The model is then tasked with predicting weather properties at time t for each of the target stations using the provided input data from the preceding hour. Specifically, the networks are evaluated on their ability to predict the relative humidity, air temperature, air pressure, and wind speed at each specified target location. We define the prediction loss as the sum of the mean square error between the network’s prediction for each of these properties and the actual recorded values. Due to the large difference in magnitudes between these readings, normalize each prediction and target measurement value such that the 10th percentile value to 90th percentile value of that measurement within the entire dataset is mapped to the range [0, 1]. This prevents the training process from naturally favoring measurements with a much higher average magnitude than the others. As our queries for this problem are purely spatial, we use the spatial distance function eq.(2) as the query distance function when instantiating the query PointConv layer for this problem. 5.1 BASELINE IMPLEMENTATIONS Set Functions for Time Series & DeepSets. Our Temporal PointConv architecture leverages PointConv as a convolution-equivalent set function. We can evaluate this choice by replacing each PointConv module with a different set function, such as DeepSets (Zaheer et al., 2017a) or Set Functions for Time Series (SeFT) (Horn et al., 2020). 
Whereas PointConv takes as input a set of point locations and a set of point features, SeFT and DeepSets only consume a single set of features. However, the neighborhood and distance function mechanisms introduced for Temporal PointConv can still be applied. Therefore, we evaluate the other set functions by simply replacing each instance of PointConv(P) with SeFT({[l_i, t_i, o_i] | i}) or DeepSets({[l_i, t_i, o_i] | i}). Minkowski Networks. We evaluate Minkowski networks (Choy et al., 2019) by replacing each spatial-temporal PointConv step with a Minkowski convolution layer that operates on the combined spatio-temporal vector space inhabited by the raw input samples. This necessarily requires discretizing said vector space into a sparse voxel grid. We choose a voxel resolution of 6 km for the weather domain, and 0.05 in game units for the Starcraft domain. We use NVIDIA’s MinkowskiEngine codebase to provide the Minkowski convolution implementation. We trained Temporal PointConv (TPC), Set Functions for Time Series (SeFT), DeepSets, and Minkowski networks instantiated with the hyperparameter settings described in appendix B on both the Starcraft II and weather nowcasting domains. For the Starcraft II domain, models were trained for one epoch (owing to the massive size of the generated Starcraft II dataset), whereas for weather nowcasting they were trained for 24 epochs. All networks were trained with a cosine learning rate decay with warm restarts, configured such that the learning rate cycles from its maximum value to its minimum three times throughout each training run. 5.2 RESULTS Dynamics Prediction Accuracy. To evaluate prediction accuracy, three of each model were trained on both domains. Unless otherwise specified, the Starcraft history distribution was set to be a uniform distribution over [−10,−1] and the query distribution was set to fixed time offsets {1, 2, 4, 7}. Figure 2 shows the validation loss for each model throughout training, and tables 1 and 2 detail the average error, across each individual query, of the final trained networks’ predictions on the test datasets. Our results show that TPC is significantly more accurate than the baseline algorithms, especially on the Starcraft II unit state prediction problem. In all cases, the Minkowski network was unable to outperform either of the set function-based models, and in the weather nowcasting domain it consistently failed to find a good solution, as indicated by a loss orders of magnitude higher than that of the set function approaches. We believe this failure is due to the difficulty of selecting a suitably sized kernel and voxelization resolution for a spatio-temporal problem at the scale of an entire state. We were unable to increase the size of the kernel without driving the network’s parameter count prohibitively high, and we were unable to decrease the resolution of voxelization without starting to ‘lose’ a significant number of weather stations which would be occupying the same cell. This result suggests that applying ‘true’ point cloud convolution that directly exploits sample positions is preferable for these domains, as opposed to discretizing or voxelizing the samples’ locations so that a traditional fixed-size filter convolution such as that of Minkowski networks can be applied. Impact of Train and Test Distributions. We investigate the robustness of TPC to a change in the distribution of input samples or query points.
Since the TPC architecture is completely decoupled from the distribution of the input samples, we can accomplish this comparison by simply defining several distribution types, training a model with each type of input distribution on the Starcraft II domain, and comparing the results after evaluating each trained model across each of the input distribution types selected for evaluation. We selected four input distributions for evaluation: two ‘fixed’ distributions that always return the same set of time offsets, the uniform distribution over the range [−10, 0], and half of a normal distribution over the range [−10, 0]. Figure 6 visualizes the difference between these distributions, and presents a bar chart plotting the average loss when each model is evaluated on each distribution type. In all cases, the query distribution was kept constant and fixed. The results show that TPC and SeFT trained on fixed distributions perform poorly when evaluated on any distribution they were not trained on, while the Minkowski network suffers much less of a penalty despite worse absolute performance. In contrast, the networks trained on the uniform and normal distributions suffer much less degradation when switching to different input distributions. The only case with a noticeable performance drop is for networks trained on the normal distribution and evaluated on the uniform distribution, which is unsurprising since the normal distribution is biased toward t = 0. We perform a similar experiment to evaluate the behavior of TPC when trained on different query distributions. Figure 4 visualizes the query distributions selected for training alongside a plot of the average loss for each query by its offset from the reference time (e.g. t = 0). As before, the models trained on fixed distributions only consistently perform well on the exact query points they were trained on, with the prediction error of the model trained on the Fixed1 distribution rising sharply as the distance from its small cluster of expected query points increases. In contrast, the models trained on the variable distributions saw a relatively small increase in prediction error, even for query points that are outside the range of query points they were trained on. This suggests that the ability to train the TemporalPointConv architecture on randomized input and query distributions is key to enabling it to generalize well across timesteps and behave reasonably in off-distribution scenarios. Application to Anomaly Detection. We now consider the utility of our TPC model for anomaly detection, where the goal is to detect which samples in a temporal point cloud are anomalous. We focus on the weather dataset, where anomalies correspond to broken sensors. We introduce anomalies to the set of test station samples by randomly selecting 33% of the stations. For these, we randomly increase or decrease the value of one station property by 25%. The models are then tasked with predicting each of the test samples’ properties given the preceding hour of weather data. Their prediction error on each individual sample is then used as an anomaly score for detection purposes. As expected based on prior prediction results, TPC significantly outperforms SeFT owing to its superior nowcasting accuracy, with an area under the receiver operating characteristic curve (AUROC) of 0.927 compared to SeFT’s 0.836. The Minkowski network struggles to perform above chance level. See appendix A for the complete ROC curves.
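The anomaly-scoring protocol amounts to using the per-sample prediction error directly; a minimal sketch, with hypothetical variable names and scikit-learn assumed only for the AUROC computation, is:

import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(predicted, observed):
    # Readings that the trained forecaster cannot explain receive a high score.
    return np.sum((predicted - observed) ** 2, axis=-1)

# scores = anomaly_scores(model_predictions, test_readings)
# auroc = roc_auc_score(is_perturbed, scores)  # is_perturbed marks the corrupted stations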
6 CONCLUSION In this work, we proposed a novel extension to the set function PointConv that enables it to be composed with standard deep learning layers to reason about irregularly sampled spatio-temporal processes and calculate predictions for arbitrary domain-specific queries. We show that TemporalPointConv’s ability to directly consume each sample’s positional and feature data without downsampling or discretization enables it to significantly outperform state-of-the-art sparse convolution algorithms across two complex, meaningfully different domains. Similarly, TemporalPointConv’s equivalence to standard convolution enables it to reason more efficiently about relative spatial and temporal relationships than other set functions which are not endowed with these useful properties. These promising results and TemporalPointConv’s flexible parameterization suggest that it can be effectively applied to a wide range of problems with an irregular structure that prevents most other deep learning approaches from functioning efficiently. A ANOMALY DETECTION ROC CURVES B HYPERPARAMETER SETTINGS C JOINT SPACE-TIME NEIGHBORHOODS Though TemporalPointConv decomposes spatio-temporal processes into separate ‘space’ and ‘time’ neighborhoods, this is not strictly necessary. Space and time could be combined into one single vector space, allowing a single PointConv layer to jointly consider samples’ spatial and temporal distances when determining their local neighborhood. We investigate this possibility by training TemporalPointConv networks to do exactly that. This requires specifying a space-time distance function, which we define as D_st = √(D_s^2 + x·D_t^2), where D_s and D_t are spatial and temporal distance functions, respectively. x then represents the tradeoff factor that dictates whether distant spatial samples should be favored over temporally distant samples when constructing a neighborhood. Specifically, we test three values for x for these ‘combined’ PointConv models: 0.2, 1, and 5. The results in figure C show that all of the networks with combined spatial-temporal neighborhood functions were outperformed by our approach, which considers spatial and temporal relationships separately but sequentially. Additionally, this combined distance function depends on a hyperparameter x which is likely domain-specific and nontrivial to find a good value for. These results validate our decision to treat spatial and temporal distances separately.
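For reference, the three distance functions discussed in the paper (the windowed spatial and temporal distances of eqs. (2)-(3) and the joint space-time distance of this appendix) can be sketched as follows; this is our illustration, and the argument names are assumptions:

import numpy as np

def d_spatial(l_i, t_i, l_j, t_j, t_window):
    # eq. (2): spatial distance, valid only within a narrow temporal window t.
    return np.linalg.norm(l_i - l_j) if abs(t_i - t_j) <= t_window else np.inf

def d_temporal(l_i, t_i, l_j, t_j, s_window):
    # eq. (3): temporal distance, valid only within a narrow spatial window s.
    return abs(t_i - t_j) if np.linalg.norm(l_i - l_j) <= s_window else np.inf

def d_spacetime(l_i, t_i, l_j, t_j, x):
    # Appendix C: D_st = sqrt(D_s^2 + x * D_t^2); the tradeoff factor x dictates
    # whether spatially or temporally distant samples are favored in a neighborhood.
    return np.sqrt(np.linalg.norm(l_i - l_j) ** 2 + x * (t_i - t_j) ** 2)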
1. What is the focus of the paper, and what are the proposed approaches? 2. What are the strengths of the presented method, particularly in its effectiveness? 3. Do you have any concerns or questions regarding the method's components and their motivation? 4. How does the reviewer assess the significance of the paper's contributions? 5. Are there any limitations or areas for improvement in the proposed approach?
Review
Review In this paper spatio-temporal point convolutions are proposed, which can be used for sequences of sparse and unordered data. For this purpose, a spatial convolution and a temporal convolution are applied separately, which are then combined in a further step. The presented method was evaluated with two different data sets, based on Starcraft II and on data from weather stations. The method is conclusive and the effectiveness is shown by interesting evaluations. Nevertheless, in my eyes there are still some open questions: Why not simply use 4D convolution like for example in the paper Meteornet: Deep learning on dynamic 3d point cloud sequences? In general, the motivation of the method and its components is relatively brief. For example, an ablation study, where the importance of the time component is shown, would be advantageous. The data sets consist of sequences with corresponding point data. What about scan data, without any correspondence? It would be interesting to see a test with semantic segmentation (with lidar data) or action estimation (from depth images) to see more general applications of the TPC layer. It is not clear to me why graph convolution networks are not taken into account; in my eyes graph networks would be a good choice for the given data. Recent publications show the abilities of GCNs in similar application areas (e.g. Learning to Simulate Complex Physics with Graph Networks). The paper itself is written relatively clearly and understandably; a few little things I noticed: Section 4.3: ... to transform each latent query representation into a final predictions ... 5.2 Paragraph 3: I think the reference to Figure 6 was confused with the reference to Figure 3. Figure 3 and Figure 4: I would leave the scale of the Y-axis of the distribution visualization the same, otherwise it could be misleading. In summary, I find the method interesting, but in my opinion it still needs some improvements and more elaboration regarding evaluations. Therefore I tend toward reject.
ICLR
Title Deep Convolution for Irregularly Sampled Temporal Point Clouds Abstract We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and predict over this irregularly sampled data, without voxelization, by leveraging a recent convolutional architecture for static point clouds. The model also easily incorporates the notion of multiple entities in the process. In particular, the model can flexibly answer prediction queries about arbitrary space-time points for different entities regardless of the distribution of the training or test-time data. We present experiments on real-world weather station data and battles between large armies in StarCraft II. The results demonstrate the model’s flexibility in answering a variety of query types and demonstrate improved performance and efficiency compared to state-of-the-art baselines. 1 INTRODUCTION Many real-world problems feature observations that are sparse and irregularly sampled in both space and time. Weather stations scattered across the landscape reporting at variable rates without synchronization; citizen-science applications producing observations at the whim of individuals; or even opportunistic reports of unit positions in search-and-rescue or military operations. These sparse and irregular observations naturally map to a set of discrete space-time points – forming a spatiotemporal point cloud representing the underlying process. Critically, the dynamics of these points are often highly related to the other points in their spatio-temporal neighborhood. Modelling spatio-temporal point clouds is difficult with standard deep networks which assume observations are dense and regular – at every grid location in CNNs, every time step in RNNs, or both for spatio-temporal models like Convolutional LSTMs (Xingjian et al., 2015). While there has been work examining irregularly sampled data through time (Rubanova et al., 2019; Shukla & Marlin, 2018) and in space (Wu et al., 2019), modeling both simultaneously has received little attention (Choy et al., 2019). This is due in part to the difficulty of scaling prior solutions across both space and time. For instance, voxelization followed by sparse convolution (Choy et al., 2019) or dense imputation (Shukla & Marlin, 2018) now face a multiplicative increase in the number of cells. Rather than forcing irregular data into dense representations, an emerging line of research treats spatial point-clouds as first-class citizens (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018). Several works directly extend 2D convolutions to point clouds (Simonovsky & Komodakis, 2017; Wang et al., 2019; Hermosilla et al., 2018), with (Wu et al., 2019) being the first that allows efficient exact computation of convolution with dozens of layers. In this work, we build on this line of research to model spatio-temporal point clouds. Specifically, we extend the work of Wu et al. (2019) with an additional module to reason about point representations through time. Our new model, TemporalPointConv (TPC), is a simple but powerful extension that can learn from an arbitrary number of space-time points. 
Each layer in TemporalPointConv updates the representation of each point by applying two operators in sequence – one that considers the spatial neighborhood in a narrow temporal window and another that models how this spatial representation changes over time. By factorizing the representation update into separate spatial and temporal operators, we gain significant modeling flexibility. Further, by operating directly on point clouds, we can predict observations at arbitrary space-time, regardless of the distribution of observations. We demonstrate TemporalPointConv on two distinct problems: 1) predicting future states of a custom Starcraft II environment involving battles between variable-sized groups, and 2) predicting the weather at stations distributed throughout the state of Oklahoma. Further, we show the utility of these networks in identifying damaged or anomalous weather sensors after being trained exclusively on the associated prediction problem. The results show that TemporalPointConv outperforms both state of the art set functions and a discrete sparse convolution algorithm in terms of raw performance, ability to detect anomalies, and generalization to previously unseen input and query distributions. 2 RELATED WORK Xingjian et al. (2015) gives an early approach to spatio-temporal modeling via convolution by incorporating a standard convolutional structure into the latent memory of an LSTM. This approach is appropriate for situations where the data is regularly sampled in both space and time, which is different from our setting. Interaction networks (Battaglia et al., 2016) and related approaches allow for modeling sets of interacting objects or points over time, with an original motivation to model physics processes. These models are more flexible in their modeling of spatial relationships among points. However, there is an assumption of uniform temporal sampling, which is violated in our setting. A significant amount of work on spatio-temporal modeling for non-uniform spatial sampling uses Graph Convolutional Networks (GCNs) for modeling spatial interactions. For example, Li et al. (2018b) used a GCN followed by an RNN and Yu et al. (2018) used GCNs for spatial correlation and temporal convolution for temporal correlations. They require sampling at continuous temporal intervals and did not deal with generalization outside the fixed given graph. Rather, our approach generalizes to any spatio-temporal point outside of the training data. Yao et al. (2019) introduces an attention model to deal with dynamic spatial relationships, however this is only possible for the dense CNN version in their paper, whereas their version with irregular spatial sampling utilizes the GCN and shares the same issues with the above GCN approaches. PointNet (Qi et al., 2017a) sparked significant interest in networks for 3D Point cloud processing. A number of networks have been proposed (Qi et al., 2017a;b; Su et al., 2018; Xu et al., 2018) with the highest performing using either sparse convolutional networks (Graham & van der Maaten, 2018; Choy et al., 2019) or point convolutional networks (Wu et al., 2019; Thomas et al., 2019). Set networks, such as DeepSets (Zaheer et al., 2017b), are similar to PointNet (Qi et al., 2017a) with neither explicitly considering neighborhood information of elements/points, making them less powerful than convolutional methods. Recently, Horn et al. 
(2020) proposed a set network approach for non-uniform time-series prediction, which encodes time into the feature vector of points. Our experiments show that this approach is outperformed by our convolutional method. Sparse convolutional networks are similar to dense volumetric convolutional networks that use a regular grid to discretize space-time, but they are only computed at locations with occupied points. The Minkowski network (Choy et al., 2019) is a sparse convolutional network that models spatio-temporal correlations by concatenating the spatial location and time for each point sample into a 4D tesseract. It is thus sensitive to an appropriate resolution for the discretization, since excess sparsity can result in empty neighborhoods and trivial convolutions, and too coarse a resolution may result in an inaccurate representation of the data. Furthermore, the approach has difficulties accounting for the case where points should be treated as moving entities themselves. On the other hand, point convolutions discretize 3D volumetric convolutions on each point directly and hence easily generalize to the entire space under irregular sampling density. Early versions (Simonovsky & Komodakis, 2017; Hermosilla et al., 2018; Wang et al., 2019) require explicit discretization and hence cannot scale to large networks. Recently, PointConv (Wu et al., 2019) proposed an equivalent form that avoids explicit discretization and significantly improves scalability. However, so far it has been applied only to static point clouds. Our work builds on PointConv by extending it in the temporal direction and demonstrating that space-time convolutions can be effectively learned and used for modeling and anomaly detection. On the temporal side, a significant amount of recent state-of-the-art work is based on point processes, which study time-series models from a statistical perspective (Du et al., 2016; Li et al., 2018a; Zuo et al., 2020; Zhang et al., 2019). These support irregular temporal sampling, but generally do not consider the spatial correlation among points. 3 PROBLEM SETUP We consider extrapolative tasks in which the value at new locations must be inferred from existing observations. Let P be a spatio-temporal point cloud with each individual point p_j ∈ P defined as p_j = (l_j, t_j, o_j), where p_j exists at location l_j at time t_j and has associated features o_j (e.g. temperature and humidity values for a weather station). Further, let Q be a set of query locations at which the model is to make predictions given P. For example, a forecasting model might be given queries q_k = (l_k, t_k) for locations in the future and be tasked with predicting the corresponding features o_k representing the desired properties to be predicted. We place no restrictions on the regularity of either P or Q, such that this corresponds to a setting where both input and output may be sparse and irregularly sampled through space and time. Further, query points may be in the future, the past, or concurrent with those in P – corresponding to weather forecasting, backcasting, or nowcasting respectively. We aim to train models that can accurately answer queries, as represented via a training set of point-cloud / query-set pairs D = {(P_i, Q_i)}_{i=1}^N. 4 TEMPORAL POINTCONV ARCHITECTURE Given a spatio-temporal point-cloud containing points p_j = (l_j, t_j, o_j), a Temporal PointConv layer is an operator that produces an updated point representation p′_j = (l_j, t_j, o′_j) for each point.
The updated feature representation o′_j incorporates information from a spatio-temporal neighborhood around p_j. This is accomplished by applying two point-based convolutional operators in sequence for each point – first a spatial PointConv over points within a narrow temporal band, and then a temporal PointConv over points within a narrow spatial band. These Temporal PointConv layers can be stacked to arbitrary depth. Below we give background on PointConv and describe our model. 4.1 PRELIMINARIES: POINTCONV PointConv is based on the idea of discretizing continuous convolution on irregularly sampled points: Conv(P, p_0; w, d(·, ·)) = Σ_{p_i ∈ N_d(p_0)} ⟨w(p_i − p_0), o_i⟩, (1) where P is a point cloud with features at each point, w(·) is a vector-valued weight function of the positional difference between a point p_i in the neighborhood N_d of a centroid p_0, defined by a metric d, and o_i are the input features at p_i. w(·) can be learned with a neural network (Simonovsky & Komodakis, 2017). PointConv (Wu et al., 2019) introduces an equivalent form so that w does not need to be computed explicitly, saving computation and memory. This approach is flexible since w(·) as a function can apply to any point in the space of P, hence convolution can be computed over any irregularly sampled neighborhood N_d. We note that this holds even when we do not have any features at p_0, since a neighborhood can still be found in this case and eq. (1) can still be used. Previously, PointConv has only been used in spatial domains in cases where p_0 has features associated with it. In this paper we generalize it to spatio-temporal neighborhoods and to points p_0 that are featureless query points. For expositional clarity, we denote PointConv as an operator that transforms a feature-augmented point-cloud P into a new point-cloud P′ consisting of points at target locations Q with eq. (1): P′ = PointConv(P, Q; d(·, ·)), where we will omit Q if Q = P. 4.2 TEMPORAL POINTCONV Given a spatio-temporal point-cloud P_in = {(l_j, t_j, o^(in)_j) | j} and a set of queries Q, the Temporal PointConv operation considers the relative position from each query to the elements of P_in and their representative features to produce a set of predictions X corresponding to the query set Q. Spatial Convolution. First, each point’s feature is updated based on the spatial neighborhood of temporally co-occurring points. However, as the points may be irregularly spaced in time, there may be no points that precisely co-occur. We instead consider those in a fixed window of time. Thanks to the flexibility of PointConv operations, we describe this by defining the piece-wise distance function: d_spatial(p_i, p_j) = ||l_i − l_j||_2 if |t_i − t_j| ≤ t, and ∞ otherwise. (2) We then apply a PointConv operator to update features: P_spatial = PointConv(P_in; d_spatial), where each point in P_spatial has updated feature (l_i, t_i, o^(s)_i). Temporal Convolution. We then perform an analogous operation through time. We would like to consider the past and future of each point; however, this requires determining correspondence between points through time. If the underlying point-cloud represents static points such as weather stations, this can simply be based on a small spatial window. If the points correspond to known entities that are moving, we instead assume tracking and can use those entity labels to determine temporal neighborhoods, each consisting exclusively of a single entity’s samples throughout time.
For clarity, we present the distance function for the first case below: d_temporal(p_i, p_j) = ||t_i − t_j||_2 if ||l_i − l_j||_2 ≤ s, and ∞ otherwise. (3) Before applying the temporal PointConv, we first apply a residual connection for each point, concatenating the input and spatial features. We denote this as P_res = {(l_j, t_j, [o^(in)_j, o^(s)_j]) | j}, where [·, ·] denotes concatenation. As before, we apply a PointConv operator with kernels defined only over differences in time: P_temporal = PointConv(P_res; d_temporal(·, ·)), where P_temporal = {(l_j, t_j, o^(tmp)_j) | j}. Combined Representation. To compute the final output point-cloud, we concatenate the original, spatial, and temporal representations and transform them through an MLP f such that P_out = {(l_j, t_j, f([o^(in)_j, o^(s)_j, o^(tmp)_j])) | j}. (4) We denote multiple stacked layers via P^(d+1) = TemporalPointConv(P^(d)). 4.3 EXTRAPOLATING TO NEW POINTS After applying one or more layers of Temporal PointConv as described above, we apply one final query PointConv to the latent spatio-temporal point cloud P_out resulting from this encoding process. For this, we define a new problem-dependent query distance function d_query(·, ·), which could be d_spatial, d_temporal, or a combination of both. This enables us to calculate a corresponding latent feature y for each query point: Y = PointConv(P_out, Q; d_query(·, ·)). (5) Finally, we apply an MLP g to transform each latent query representation into a final prediction, X = {g(o_y) | y ∈ Y}, corresponding to the set of queries Q. 5 EXPERIMENTS We consider two problem domains for our experiments, which we describe below. Starcraft II. To evaluate TemporalPointConv on entity-based dynamics, we designed a custom Starcraft II scenario in which two opposing armies consisting of random numbers of three distinct unit types are created and then fight on a featureless battlefield. Each episode is allowed to run without any external influence until one team has been eliminated or the time limit expires. This allows us to learn the dynamics of a battle between a large group of units without any confounding factors such as player inputs. We use the PySC2 library (Vinyals et al., 2017) to record regular observations of the game state as each episode plays out. We use these regularly sampled episode histories to generate individual training examples. Specifically, we select a ‘reference timestep’ t within the episode, sample a set of ‘history offsets’ H from a provided history distribution, and a set of ‘query offsets’ R from a provided query distribution. We collect unit properties corresponding to these sampled relative time steps to serve as point features. We determine the prediction targets with the same procedure using the sampled query offsets. This procedure is used to sample an arbitrary number of training examples from the set of episode histories by varying the reference timestep t and re-sampling the history and query offsets as desired. Following this procedure on our dataset of 92,802 episodes yields 2.5 million training examples. We define the ‘property loss’ for a unit state prediction as the sum of the mean squared error of each of the unit’s predicted numeric properties (i.e. health, shields, position) and the cross entropy loss of the unit’s predicted categorical properties (i.e. orientation). Similarly, the ‘alive loss’ is the cross entropy loss between the network’s alive/dead prediction values and a flag indicating if the unit was present and alive in the given timestep.
We then define the total loss for a set of unit state predictions as the sum of the alive loss for all units and with property loss for every unit that is actually alive at the given timesteps. This additional condition is necessary due to the fact that dead units do not have recorded properties we can use to determine property loss. As PySC2 assigns a unique, consistent ID to each unit which provides perfect tracking across all timesteps, we use an entity-based temporal distance function when instantiating the query PointConv layer for this problem as described in section 4.2 above. Weather Nowcasting. To evaluate the ability of the TemporalPointConv architecture to reason about spatio-temporal dynamics, we derive weather nowcasting problems from a dataset of weather conditions as recorded by weather stations throughout Oklahoma. The original dataset consists of weather sensor readings from each weather station every five minutes throughout the entirety of the year 2008, associated quality metrics for each sensor in each reading, and metadata about each weather station such as its position and local soil properties. 10% of the weather stations are randomly selected to be held out as test stations and excluded from the training process, while the remaining 90% are used to generate problems for training. We derive training problems from the larger dataset by selecting a time point t and randomly selecting 10% of the remaining training stations to be targets. All non-target training station readings and their associated station metadata within the hour preceding t are collected as input weather data. Any sample within the collected data with an associated quality metric indicating a malfunctioning or missing sensor is discarded. Furthermore, we randomly discard an additional 20% of the remaining samples to decrease the level of time synchronization in the input. Following this procedure on our dataset of weather sensor readings results in over 14,000 training examples. The model is then tasked with predicting weather properties at time t for each of the target stations using the provided input data from the preceding hour. Specifically, the networks are evaluated on their ability to predict the relative humidity, air temperature, air pressure, and wind speed at each specified target location. We define the prediction loss as the sum of the mean square error between the network’s prediction for each of these properties and the actual recorded values. Due to the large difference in magnitudes between these readings, normalize each prediction and target measurement value such that the 10th percentile value to 90th percentile value of that measurement within the entire dataset is mapped to the range [0, 1]. This prevents the training process from naturally favoring measurements with a much higher average magnitude than the others. As our queries for this problem are purely spatial, we use the spatial distance function eq.(2) as the query distance function when instantiating the query PointConv layer for this problem. 5.1 BASELINE IMPLEMENTATIONS Set Functions for Time Series & DeepSets. Our Temporal PointConv architecture leverages PointConv as a convolution-equivalent set function. We can evaluate this choice by replacing each PointConv module with a different set function, such as DeepSets (Zaheer et al., 2017a) or Set Functions for Time Series (SeFT) (Horn et al., 2020). 
Whereas PointConv takes as input a set of point locations and a set of point features, SeFT and DeepSets only consume a single set of features. However, the neighborhood and distance function mechanisms introduced for Temporal PointConv can still be applied. Therefore, we evaluate the other set functions by simply replacing each instance of PointConv(P ) with SeFT ({[li, ti, oi]|i}) or DeepSets({[li, ti, oi]|i}). Minkowski Networks. We evaluate Minkowski networks (Choy et al., 2019) by replacing each spatial-temporal PointConv step with a Minkowski convolution layer that operates on the combined spatio-temporal vector space inhabited by the raw input samples. This necessarily requires discretizing said vector space into a sparse voxel grid. We choose a voxel resolution of 6km for the weather domain, and 0.05 in game units for the starcraft domain. We use nVidia’s MinkowskiEngine codebase to provide the Minkowski convolution implementation. We trained Temporal PointConv (TPC), Set Function for Time Series (SeFT), DeepSets, and Minkowski networks instantiated with the hyperparameter settings described in appendix B on both the Starcraft II and weather nowcasting domains. For the Starcraft II domain, models were trained for one epoch (owing to the massive size of the generated Starcraft II dataset), whereas for weather nowcasting they were trained for 24 epochs. All networks were trained with a cosine learning rate decay with warm restarts configured such that the learning rate cycles from its maximum value to its minimum three times throughout each training run. 5.2 RESULTS Dynamics Prediction Accuracy. To evaluate prediction accuracy, three of each model were trained on both domains. Unless otherwise specified, the Starcraft history distribution was set to be a uniform distribution over [−10,−1] and the query distribution was set to fixed time offsets {1, 2, 4, 7}. Figure 2 shows the validation loss for each model throughout training, and tables 1 and 2 show in detail the average error across each individual query the final trained networks predict for the test datasets. Our results show that TPC is significantly more accurate than the baseline algorithms, es- pecially on the Starcraft II unit state prediction problem. In all cases, the Minkowski network was unable to outperform either of the set function-based models, and in the weather nowcasting domain it consistently failed to find a good solution, as indicated by the loss orders of magnitude higher than the set function approaches. We believe this failure is due to the difficulty of selecting a suitably sized kernel and voxelization resolution for a spatio-temporal problem at the scale of an entire state. We were unable to increase the size of the kernel without driving the network’s parameter count prohibitively high, and we were unable to decrease the resolution of voxelization without starting to ‘lose’ a significant number of weather stations which would be occupying the same cell. This result suggests that applying ‘true’ point cloud convolution that directly exploits sample positions is preferable for these domains, as opposed to discretizing or voxelizing the samples’ locations so that a traditional fixed-size filter convolution such as Minkowski networks can be applied. Impact of Train and Test Distributions. We investigate the robustness of TPC to a change in the distribution of input samples or query points. 
Since the TPC architecture is completely decoupled from the distribution of the input samples, we can accomplish this comparison by simply defining several distribution types, training a model with each type of input distribution on the Starcraft II domain, and comparing the results after evaluating each trained model across each of the input distribution types selected for evaluation. We selected four input distributions for evaluation: Two ‘fixed’ distributions that always return the same set of time offsets, the uniform distribution over the range [−10, 0], and half of a normal distribution over the range [−10, 0]. Figure 6 visualizes the difference between these distributions, and presents a bar chart plotting the average loss when each model is evaluated on each distribution type. In all cases, the query distribution was kept constant and fixed. The results show that TPC and SeFT trained on fixed distributions perform poorly when evaluated on any distribution it was not trained on, while the Minkowski network suffers much less of a penalty despite worse absolute performance. Alternatively, the networks trained on the uniform and normal distributions suffer much less degradation when switching to different input distributions. The only case with a noticeable performance drop is for networks trained on the normal distribution and evaluated on the uniform distribution, which is unsurprising since the normal distribution is biased toward t = 0. We perform a similar experiment to evaluate the behavior of TPC when trained on different query distributions. Figure 4 visualizes the query distributions selected for training alongside a plot of the average loss for each query by their offset from the reference time (e.g. t = 0). As before, the models trained on fixed distributions only consistently perform well on the exact query points they were trained on, with the model trained on Fixed1 distribution’s prediction error rising sharply as the distance from its small cluster of expected query points increases. In contrast, the model trained on the variable distributions saw a relatively small increase in prediction error, even for query points that are outside of the range of query points it was trained on. This suggests that the ability to train the TemporalPointConv architecture on randomized input and query distributions is key to enabling it to generalize well across timesteps and behave reasonably in off-distribution scenarios. Application to Anomaly Detection. We now consider the utility of our TPC model for anomaly detection, where the goal is to detect which samples in a temporal point cloud are anomalous. We focus on the weather dataset, where anomalies correspond to broken sensors. We introduce anomalies to the set of test station samples by randomly selecting 33% of the stations. For these, we randomly increase or decrease the value of one station property by a factor of 25%. The models are then tasked with predicting each of the test samples’ properties given the preceding hour of weather data. Their prediction error on each individual sample is then used as an anomaly score for detection purposes. As expected based on prior prediction results, TPC significantly outperforms SeFT owing to its superior nowcasting accuracy with an area under receiver-operator curve (AUROC) of 0.927 compared to SeFT’s 0.836. The Minkowski network struggles to perform above chance level. See appendix A for the complete ROC curves. 
6 CONCLUSION In this work, we proposed a novel extension to the set function PointConv that enables it to be composed with standard deep learning layers to reason about irregularly sampled spatio-temporal processes and calculate predictions for arbitrary domain-specific queries. We show that TemporalPointConv’s ability to directly consume each sample’s positional and feature data without downsampling or discretization enables it to significantly outperform state-of-the-art sparse convolution algorithms across two complex, meaningfully different domains. Similarly, TemporalPointConv’s equivalence to standard convolution enables it to reason more efficiently about relative spatial and temporal relationships than other set functions which are not endowed with these useful properties. These promising results and TemporalPointConv’s flexible parameterization suggest that it can be effectively applied to a wide range of problems with an irregular structure that prevents most other deep learning approaches from functioning efficiently. A ANOMALY DETECTION ROC CURVES B HYPERPARAMETER SETTINGS C JOINT SPACE-TIME NEIGHBORHOODS Though TemporalPointConv decomposes spatio-temporal processes into separate ‘space’ and ‘time’ neighborhoods, this is not strictly necessary. Space and time could be combined into one single vector space, allowing a single PointConv layer to jointly consider samples’ spatial and temporal distances when determining their local neighborhood. We investigate this possibility by training TemporalPointConv networks to do exactly that. This requires specifying a space-time distance function, which we define as D_st = √(D_s^2 + x·D_t^2), where D_s and D_t are spatial and temporal distance functions, respectively. x then represents the tradeoff factor that dictates whether distant spatial samples should be favored over temporally distant samples when constructing a neighborhood. Specifically, we test three values for x for these ‘combined’ PointConv models: 0.2, 1, and 5. The results in figure C show that all of the networks with combined spatial-temporal neighborhood functions were outperformed by our approach, which considers spatial and temporal relationships separately but sequentially. Additionally, this combined distance function depends on a hyperparameter x which is likely domain-specific and nontrivial to find a good value for. These results validate our decision to treat spatial and temporal distances separately.
1. What is the focus of the paper, and how does it extend previous work in PointConv? 2. What are the limitations of the proposed approach, particularly in its application and evaluation? 3. How does the reviewer assess the novelty and significance of the paper's contribution? 4. Are there any concerns regarding the choice of operation used in the proposed method? 5. What are some potential applications of the proposed method that the reviewer finds more appealing?
Review
Review The paper proposes an extension of PointConv for spatial-temporal point cloud modeling. The model can be used for prediction or forecasting and is evaluated on Starcraft II and weather nowcasting. The TemporalPointConv follows PointConv and the current work extends this by appending time. I think the paper is heavily based on PointConv, which makes the overall novelty limited. The temporal convolution either does not track points or relies on entity labels for tracking. As point labels are not available in most real point-cloud scenarios, the temporal convolution restricts the applications of the proposed method. Starcraft II and weather nowcasting are not common applications of point clouds, and I thus feel the evaluation is not convincing enough. It would be more appealing to apply the method to other applications such as autonomous driving. Weather nowcasting via point clouds has been studied [a]. Discussion and comparison with related works are missing. [a] CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting I think the idea of appending the time dimension can also be applied to PointNet++, KPConv, or other operations. Why do we choose PointConv? Is there any problem if the proposed method is based on other operations? =========Post Rebuttal========== Because no response is provided, I maintain my original rating.
ICLR
Title Connecting the Dots Between MLE and RL for Sequence Prediction Abstract Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters. The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design. The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach. 1 INTRODUCTION Sequence prediction problem is ubiquitous in many applications, such as generating a sequence of words for machine translation (Wu et al., 2016; Sutskever et al., 2014), text summarization (Hovy & Lin, 1998; Rush et al., 2015), and image captioning (Vinyals et al., 2015; Karpathy & Fei-Fei, 2015), or taking a sequence of actions to complete a task. In these problems (e.g., Mnih et al., 2015; Ho & Ermon, 2016), we are often given a set of sequence examples, from which we want to learn a model that sequentially makes the next prediction (e.g., generating the next token) given the current state (e.g., the previous tokens). A standard training algorithm is based on supervised learning which seeks to maximize the loglikelihood of example sequences (i.e., maximum likelihood estimation, MLE). Despite the computational simplicity and efficiency, MLE training can suffer from compounding error (Ranzato et al., 2016; Ross & Bagnell, 2010) in that mistakes at test time accumulate along the way and lead to states far from the training data. Another line of approaches overcome the training/test discrepancy issue by resorting to the reinforcement learning (RL) techniques (Ranzato et al., 2016; Bahdanau et al., 2017; Rennie et al., 2017). For example, Ranzato et al. (2016) used policy gradient (Sutton et al., 2000) to train a text generation model with the task metric (e.g., BLEU) as reward. However, RL-based approaches can face challenges of prohibitively poor sample efficiency and high variance. To this end, a diverse set of methods has been developed that is in a middle ground between the two paradigms of MLE and RL. For example, RAML (Norouzi et al., 2016) adds reward-aware perturbation to the MLE data examples; SPG (Ding & Soricut, 2017) leverages reward distribution for effective sampling of policy gradient. Other approaches such as data noising (Xie et al., 2017) also show improved results. In this paper, we establish a unifying perspective of the above distinct learning algorithms. 
Specifically, we present a generalized entropy regularized policy optimization framework, and show that the diverse algorithms, such as MLE, RAML, data noising, and SPG, can all be re-formulated as special cases of the framework, with the only difference being the choice of reward and the values of two weight hyperparameters (Figure 1). In particular, we show that MLE is equivalent to using a Delta-function reward which returns 1 for model samples that match training examples exactly, and −∞ for any other samples. Such an extremely restricted reward effectively disables any exploration of the model beyond the training data, yielding brittle prediction behaviors. Other algorithms essentially use various locally-relaxed rewards, jointly with the model distribution, for broader (and more costly) exploration during training. Besides the new views of the existing algorithms, the unifying perspective also leads to new algorithms for improved learning. We develop an interpolation algorithm which, as training proceeds, gradually expands the exploration space by annealing both the reward function and the weight hyperparameters. The annealing in effect dynamically interpolates among the existing algorithms from left to right in Figure 1. We conduct experiments on the tasks of text generation, including machine translation and text summarization, and game imitation learning. The interpolation algorithm shows superior performance over various previous methods. 2 RELATED WORK Given a set of data examples, sequence prediction models are usually trained to maximize the log-likelihood of the next label (token, action) conditioning on the current state observed in the data. Reinforcement learning (RL) addresses the discrepancy between training and test by also using models’ own predictions at training time. Various RL approaches have been applied for sequence generation, such as policy gradient (Ranzato et al., 2016) and actor-critic (Bahdanau et al., 2017). Reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) is an algorithm in between MLE and policy gradient. Mathematically, RAML shows that MLE and maximum-entropy policy gradient are respectively minimizing KL divergences in opposite directions. Koyamada et al. (2018) thus propose to use the more general α-divergence as a combination of the two paradigms. Our framework is developed from a different perspective, reformulates a different and more comprehensive set of algorithms, and leads to new insights in terms of the exploration and learning efficiency of the various algorithms. Besides the algorithms discussed in the paper, there are other learning methods for sequence models. For example, Hal Daumé et al. (2009); Leblond et al. (2018); Wiseman & Rush (2016) use a learning-to-search paradigm for sequence generation or structured prediction. Scheduled Sampling (Bengio et al., 2015) and variants (Zhang et al., 2019) adapt MLE by randomly replacing ground-truth tokens with model predictions as the input for decoding the next-step token. Policy optimization for reinforcement learning is studied extensively in robotic and game environments. For example, Peters et al. (2010) introduce a relative entropy regularization to reduce information loss during learning. Schulman et al. (2015) develop a trust-region approach for monotonic improvement. Dayan & Hinton (1997); Levine (2018); Abdolmaleki et al. (2018) study policy optimization algorithms from a probabilistic inference perspective. Zhu et al.
(2018) combine imitation learning with RL; their approach is orthogonal to ours and can be plugged into our framework to incorporate an imitation reward. The entropy-regularized policy optimization formulation presented here can be seen as a generalization of many of the previous policy optimization methods. Besides, we formulate the framework primarily in the sequence generation context. 3 CONNECTING THE DOTS We first present a generalized formalism of entropy regularized policy optimization. The formulation contains a reward function and two weight hyperparameters that define the learning procedure. Therefore, varying the values of the reward and weights results in a large space of algorithms. We show that several existing popular algorithms, which were originally proposed from distinct perspectives, can all be seen as members in the space. In particular, we reformulate the MLE algorithm in the same policy optimization form, which enables side-by-side comparison between the broad spectrum of algorithms. The resulting unifying view provides new insights into exploration and computation efficiency, and creates improved learning approaches for sequence prediction. For clarity, we present the framework in the sequence generation context. The formulations can straightforwardly be extended to other settings such as imitation learning in robotic and game environments, as discussed briefly at the end of this section and also shown in the experiments. We first establish the basic notation. Let y = (y_1, . . . , y_T) be a sequence of T tokens. Let y* be a training example drawn from the empirical data distribution. From the sequence examples, we aim to learn a sequence generation model p_θ(y) = ∏_t p_θ(y_t | y_1:t−1) with parameters θ. Note that generation of y can condition on other factors. For example, in machine translation, y is the sentence in the target language and depends on an input sentence in the source language. For simplicity of notation, we omit the conditioning factors. 3.1 ENTROPY REGULARIZED POLICY OPTIMIZATION (ERPO) Policy optimization is a family of reinforcement learning (RL) algorithms. Assume a reward function R(y|y*) ∈ ℝ that evaluates the quality of a generation y against the true y*. For example, the BLEU score (Papineni et al., 2002) can be a reward in machine translation. The general goal of policy optimization is to learn the model p_θ(y) (a.k.a. the policy)¹ to maximize the expected reward. Previous work develops entropy regularized approaches, which augment the objective with information theoretic regularizers for stabilized training. We present a generalized variational formulation of ERPO which, as we show shortly, has the power of subsuming an array of other popular algorithms. Specifically, we introduce a non-parametric variational distribution q(y) w.r.t. the model p_θ(y). The objective to maximize is as follows: L(q, θ) = E_q[R(y|y*)] − α KL(q(y) ‖ p_θ(y)) + β H(q), (1) where KL(·‖·) is the Kullback–Leibler divergence forcing q to stay close to p_θ; H(·) is the Shannon entropy imposing a maximum entropy assumption on q; and α and β are balancing weights of the respective terms. Intuitively, the objective is to maximize the expected reward under the variational distribution q while minimizing the distance between q and the model p_θ, with maximum entropy regularization on q.
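To make the roles of α and β tangible, eq. (1) can be evaluated exactly on a toy, enumerable sequence space; a minimal numpy sketch (ours, not the authors') is:

import numpy as np

def erpo_objective(q, p_theta, reward, alpha, beta, eps=1e-12):
    # L(q, theta) = E_q[R] - alpha * KL(q || p_theta) + beta * H(q)   (eq. 1)
    # q, p_theta, reward: arrays indexed by the sequences of a small toy space.
    expected_reward = np.sum(q * reward)
    kl = np.sum(q * (np.log(q + eps) - np.log(p_theta + eps)))
    entropy = -np.sum(q * np.log(q + eps))
    return expected_reward - alpha * kl + beta * entropy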
The above formulation is relevant to, and can be seen as a variant of, previous policy optimization approaches in the RL literature, such as relative entropy policy search (Peters et al., 2010), maximum entropy policy gradient (Ziebart, 2010; Haarnoja et al., 2017), and other work where the variational distribution q is formulated either as a non-parametric distribution as in ours (Abdolmaleki et al., 2018; Peters et al., 2010) or as a parametric one (Schulman et al., 2015; 2017a; Teh et al., 2017). The objective can be maximized with a standard EM procedure (Neal & Hinton, 1998) that iterates two coordinate ascent steps optimizing q and θ, respectively. At iteration n: E-step: q^(n+1)(y) ∝ exp{ (α log p_θ^n(y) + R(y|y*)) / (α + β) }; M-step: θ^(n+1) = argmax_θ E_{q^(n+1)}[ log p_θ(y) ]. (2) In the E-step, q has a closed-form solution, which is an energy-based distribution. We can give an intuitive interpretation of its form. First, it is clear that if α → ∞, we have q^(n+1) = p_θ^n. This is also reflected in the objective Eq. (1), where a larger weight α encourages q to be close to p_θ. Second, the weight β serves as the temperature of the q softmax distribution. In particular, a large temperature β → ∞ makes q a uniform distribution, which is consistent with the outcome of an infinitely large maximum entropy regularization in Eq. (1). In the M-step, the update rule can be interpreted as maximizing the log-likelihood of samples from the distribution q. ¹ In the following, we use the terms “model” and “policy” interchangeably. Token-level Formulation. In the context of sequence generation, it is sometimes more convenient to express the equations at the token level (instead of the sequence level), as shown when we devise a new algorithm in the next section. To this end, we decompose R(y|y*) along the time steps: R(y|y*) = Σ_t [ R(y_1:t|y*) − R(y_1:t−1|y*) ] := Σ_t ΔR(y_t | y_1:t−1, y*), (3) where ΔR(y_t | y_1:t−1, y*) measures the reward contributed by token y_t. The solution of q in Eq. (2) can then be re-written as: q^(n+1)(y) ∝ ∏_t exp{ (α log p_θ^n(y_t | y_1:t−1) + ΔR(y_t | y_1:t−1, y*)) / (α + β) }. (4) The Algorithm Space. The above ERPO formalism includes three key components, namely the reward R and the weight hyperparameters α and β > 0. Variation in these components results in different procedures for updating the model. In other words, different algorithms in the ERPO family correspond to a point (or a region) in the space spanned by the three components. The following sections visit a set of existing approaches and connect them to the unifying picture by reformulating their seemingly distinct objectives. Figure 1 illustrates the particular algorithms in the space, clustered by their exploration behavior in learning, which we discuss further below. Softmax Policy Gradient (SPG). We first briefly discuss the previous RL algorithms for sequence prediction that fit in the ERPO formalism. SPG (Ding & Soricut, 2017) was originally developed from the perspective of combining the reward R and the policy p_θ to improve sampling quality. The algorithm is equivalent to setting β = 0 and treating α > 0 as the temperature of the energy-based distribution q(y). That is, q(y) in the E-step of Eq. (2) is now of the form q(y) ∝ p_θ(y) exp{R(y|y*)/α}. The reward R is set to any normal task-specific reward. Note that sampling from q(y) (e.g., in the M-step) is typically difficult due to its energy-based form and the fact that the task reward R often does not have particular structures amenable for sampling.
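On such an enumerable toy space, the E-step of eq. (2) has the closed form sketched below; for realistic sequence spaces one would instead have to sample from q. This is an illustrative sketch only:

import numpy as np

def erpo_e_step(log_p_theta, reward, alpha, beta):
    # E-step of eq. (2): q(y) ∝ exp{(alpha * log p_theta(y) + R(y|y*)) / (alpha + beta)}
    logits = (alpha * log_p_theta + reward) / (alpha + beta)
    logits = logits - logits.max()          # subtract max for numerical stability
    q = np.exp(logits)
    return q / q.sum()

# M-step: maximize E_q[log p_theta(y)], i.e. weighted maximum likelihood on samples drawn from q.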
We also note the previous work of Sequence Tutor (Jaques et al., 2017), which was motivated by the idea of using an MLE-trained policy as a prior to guide the learning of the target policy in an RL framework. The formalism closely resembles SPG, namely (α > 0, β = 0), with the exception that the variational distribution q(y) in Sequence Tutor is a parameterized model instead of a non-parametric one as in SPG and our more general ERPO formulation.

3.2 MLE AS A SPECIAL CASE OF ERPO

In this section, we connect the maximum likelihood estimation (MLE) algorithm to the unifying ERPO formalism. Based on the connections, we are able to analyze the learning behavior of MLE from the reinforcement learning perspective in terms of exploration efficiency. We also discuss some well-known variants of the vanilla MLE algorithm, such as RAML and data augmentation.

Due to its simplicity and efficiency, MLE is among the most widely used approaches for learning sequence generation. It finds the parameter value that maximizes the data log-likelihood:

θ* = argmax_θ L_MLE(θ) = argmax_θ log pθ(y*).    (5)

We show that the MLE objective can be recovered from Eq.(2) with a specialized reward and particular weight values. More concretely, consider a δ-function reward defined as²:

R_δ(y|y*) = { 1 if y = y*;  −∞ otherwise.    (6)

That is, a sample y receives a valid unit reward only when it matches exactly with the true data, and receives a negative infinite reward in all other cases. We show that the MLE algorithm is a member of the ERPO family. In particular, the conventional MLE objective is equivalent to setting the ERPO components to (R = R_δ, α → 0, β = 1). This can be straightforwardly seen by noting that, with this configuration, the q(y) in the E-step (Eq.2) reduces to q(y) = 1 if y = y* and 0 otherwise. The M-step is thus in effect maximizing the log-likelihood of the real data examples. (Note that the very small α is still > 0, making the M-step for maximizing the objective Eq.(1) valid and necessary.) With the δ-reward R_δ, any sample y that fails to match the given data y* exactly gets a negative infinite reward and thus never contributes to model learning.

² At the token level, define R_δ(y1:t|y*) = t/T* if y1:t = y*_{1:t} and −∞ otherwise, where T* is the length of y*. Note that the R_δ value of y = y* can also be set to any constant larger than −∞.

Exploration Efficiency. Reformulating MLE in the unifying ERPO form enables us to directly compare the approach with other RL algorithms. Specifically, the δ-reward permits only samples that match training examples, and makes invalid any exploration beyond the small set of training data (Figure 2(a)). The extremely restricted exploration at training time results in a brittle model that can easily encounter unseen states and make mistakes in prediction. On the other hand, a major advantage of the δ-reward is that it defines a distribution over the sequence space such that sampling from the distribution reduces to simply picking an instance from the training set. The resulting samples are ensured to have high quality. This makes the MLE implementation very simple and the computation efficient in practice.
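The collapse of q to a point mass on y* under the δ-reward can be checked numerically. Below is a toy sketch (illustrative values; −∞ is encoded as a very large negative number) showing that with (α → 0, β = 1) the E-step distribution concentrates on the training example, so the M-step reduces to ordinary maximum likelihood.

```python
# Hypothetical sketch: the E-step q(y) ∝ exp{(α log p_θ(y) + R_δ(y|y*)) / (α + β)}
# with the δ-reward of Eq.(6) and (α → 0, β = 1) becomes a point mass on y*.
import numpy as np

def e_step_q(log_p_theta, rewards, alpha, beta):
    energy = (alpha * log_p_theta + rewards) / (alpha + beta)
    energy -= energy.max()
    q = np.exp(energy)
    return q / q.sum()

log_p = np.log(np.array([0.2, 0.5, 0.3]))   # toy model over three sequences
r_delta = np.array([-1e9, 1.0, -1e9])       # R_δ: index 1 plays the role of y*
print(e_step_q(log_p, r_delta, alpha=1e-6, beta=1.0))   # ≈ [0, 1, 0]
```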
In contrast, task-specific rewards (such as BLEU) used in standard policy optimization are more diffuse than the δ-reward, and thus allow exploration in a broader space with valid reward signals. However, the diffuse rewards often do not lead to a distribution that is amenable to sampling as above. The model distribution is thus used instead to propose samples, which in turn can yield low-quality (i.e., low-reward) samples, especially given the huge sequence space. This makes the exploration inefficient or even impractical. Given the opposite behaviors of the algorithms in terms of exploration and computation efficiency, it is natural to seek a middle ground between the two extremes in order to combine the advantages of both. Previous work has proposed variants of the vanilla MLE from different perspectives. We revisit some of the popular approaches, and show that they too can be canonicalized in the ERPO framework and enrich our understanding of the learning behaviors.

Data Noising. Adding noise to training data is a widely adopted model regularization technique. Previous work (e.g., Xie et al., 2017) has proposed several data noising strategies in the sequence generation context, such as replacing subsets of tokens with other random words. The resulting noisy data is then used in MLE training. Though previous literature has commonly treated such techniques as a data pre-processing step, we show that the approach can be expressed in the generalized ERPO formulation. Specifically, data noising can be seen as using a locally relaxed variant of the δ-reward:

R_δ^noise(y|y*) = { 1 if y = g(y*);  −∞ otherwise,    (7)

where g denotes any transformation operation that returns a new sample as a noisy version of the input raw data y*. With the relaxed reward, data noising locally expands the exploration surrounding the observed training examples (Figure 2(b)). The added exploration at training time can yield a model that is more robust to error at test time.

Reward-Augmented Maximum Likelihood (RAML). RAML (Norouzi et al., 2016) was originally proposed to incorporate task-specific metrics into MLE training. Formally, it introduces an exponentiated reward distribution e(y|y*) ∝ exp{R(y|y*)/τ}, where R is a task reward and τ > 0 is the temperature. The conventional RAML objective is written as:

L_RAML(θ) = E_{y∼e(y|y*)}[ log pθ(y) ].    (8)

That is, unlike MLE, which directly maximizes the data log-likelihood, RAML first perturbs the data proportionally to the reward distribution, and maximizes the log-likelihood of the resulting samples. Similar to how we map MLE to the ERPO formalism, we can align RAML with the unifying form by setting α → 0, β to the temperature τ, and R to the task reward. Compared to vanilla MLE, the key feature of RAML is the use of the task reward instead of the δ-reward, which permits a larger exploration space surrounding the training examples. On the other hand, as in SPG (Section 3.1), sampling from the energy-based distribution with a diffuse reward tends to be difficult, and often requires specialized approximations for computational efficiency (e.g., Ma et al., 2017).

Other Algorithms & Discussions. The classic policy gradient algorithm (Sutton et al., 2000) has also been used for sequence prediction (e.g., Ranzato et al., 2016). We show in the appendix that this approach can also be connected to the unifying ERPO with moderate approximations. Ranzato et al. (2016) also proposed a mixing training strategy that anneals from MLE training to policy optimization. We show in the next section that this particular annealing scheme is a special case of the new, more general interpolation algorithm presented below.
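As a compact, non-authoritative recap, the snippet below simply restates where the algorithms discussed in this section (and policy gradient, treated in the appendix) sit in the (R, α, β) space; the labels are shorthand, not an API from the paper.

```python
# Illustrative summary of the ERPO configurations derived in Sections 3.1-3.2
# and Appendix A.1. "delta" denotes the δ-reward of Eq.(6)/(7); "task" a task
# metric such as BLEU.
ERPO_CONFIGS = {
    "MLE":                       {"reward": "delta",         "alpha": "→0", "beta": "1"},
    "Data noising":              {"reward": "relaxed delta", "alpha": "→0", "beta": "1"},
    "RAML":                      {"reward": "task",          "alpha": "→0", "beta": "τ"},
    "SPG":                       {"reward": "task",          "alpha": ">0", "beta": "0"},
    "Policy gradient (approx.)": {"reward": "log task",      "alpha": "1",  "beta": "0"},
}
for name, cfg in ERPO_CONFIGS.items():
    print(f"{name:28s} R={cfg['reward']:14s} α={cfg['alpha']:3s} β={cfg['beta']}")
```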
We have presented the framework in the context of sequence generation. The formulation can also be extended to other settings. For example, in game environments, y is a sequence of actions and states. The popular imitation learning method GAIL (Ho & Ermon, 2016) uses an adversarially induced reward R from data, and applies standard RL updates to train the policy. The policy update part can be formulated with our framework as standard policy optimization (with α > 0, β = 0). The new interpolation algorithm described in the next section can also be applied to improve vanilla GAIL, as shown in the experiments.

Previous work has also studied connections among related algorithms. For example, Norouzi et al. (2016) and Koyamada et al. (2018) formulate MLE and policy gradient as minimizing the opposite KL divergences between the model and the data/reward distributions. Misra et al. (2018) studied an update equation generalizing maximum marginal likelihood and policy gradient. Our framework differs in that we reformulate a different and more comprehensive set of algorithms for sequence prediction, and provide new insights in terms of exploration and its efficiency, which could not be derived from the previous work. Section 2 discusses more related work on learning for sequence prediction.

4 INTERPOLATION ALGORITHM

The unifying perspective also leads to new algorithms for improved learning. Here, we present an example algorithm that is naturally inspired by the framework. As in Figure 1, each of the learning algorithms can be seen as a point in the (R, α, β) space. Generally, from left to right, the reward gets more diffuse and α gets larger, which exposes a larger portion of the sequence space to model training (Figure 2). More exploration in turn also makes training less efficient due to lower sample quality. We propose an interpolation algorithm based on the natural idea of starting learning from the most restricted yet efficient algorithm configuration, and gradually expanding the exploration to decrease the training/test discrepancy. This easy-to-hard learning paradigm resembles curriculum learning (Bengio et al., 2009). As we have mapped the algorithms to points in the hyperparameter space, the interpolation becomes straightforward: it reduces to simple annealing of the hyperparameter values.

Specifically, during training, we would like to anneal from using the restricted δ-reward R_δ to using the task reward, and to anneal from sampling (exploring) by only the reward R to sampling by both R and pθ. Since R_δ is a δ-function, which would make a direct linear combination of the functions problematic, we implement the interpolation strategy in the update rule (Eq.2) and use log-sum-exp for mixing. Formally, let R_task denote a task reward. The negative energy of q(y) in Eq.(2) (i.e., the exponent inside exp{·}) is now replaced with the interpolated term:

log( λ1 pθ + λ2 exp{R_task} + λ3 exp{R_δ} ).

Note that we have re-organized the weight hyperparameters and use the distribution (λ1, λ2, λ3) to play the calibration role of (α, β). In particular, as training proceeds, we gradually increase λ1 and λ2 and decrease λ3.
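The mixing term and the annealing schedule can be sketched as follows. This is a toy illustration under assumed values (the initial weights and increments echo those reported in the appendix); it is not the authors' implementation, and −∞ in the δ-reward is encoded as a very large negative number.

```python
# Hypothetical sketch of the interpolated negative energy,
# log(λ1·p_θ + λ2·exp{R_task} + λ3·exp{R_δ}), plus a simple annealing step.
import numpy as np

def interpolated_neg_energy(p_theta, r_task, r_delta, lambdas, eps=1e-12):
    lam1, lam2, lam3 = lambdas
    mix = lam1 * p_theta + lam2 * np.exp(r_task) + lam3 * np.exp(r_delta)
    return np.log(mix + eps)

def anneal(lambdas, d1=0.12, d2=0.16):
    """Shift weight from the δ-reward component toward the model and task reward."""
    lam1, lam2, lam3 = lambdas
    lam1, lam2 = lam1 + d1, lam2 + d2
    return (lam1, lam2, max(0.0, 1.0 - lam1 - lam2))

p_theta = np.array([0.6, 0.3, 0.1])      # toy model probabilities over candidates
r_task = np.array([0.5, 0.0, -0.5])      # toy task reward
r_delta = np.array([-1e9, 1.0, -1e9])    # δ-reward; index 1 is the data example
lams = (0.12, 0.16, 0.72)                # initial weights, as in the appendix
for epoch in range(0, 12, 4):
    print(epoch, lams, interpolated_neg_energy(p_theta, r_task, r_delta, lams))
    lams = anneal(lams)
```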
The formulation of interpolation in effect converts the energy-based model q(y) into a mixture of experts, which makes sampling from q(y) easier, and resembles the bang-bang rewarded SPG method as described in (Ding & Soricut, 2017). Besides, similar to (Ding & Soricut, 2017), we adopt the token-level formulation (Eq.4), so that tokens in a sequence can be sampled from different components (i.e., pθ, R_task, and R_δ) in a mixed way. We provide the pseudo-code of the interpolation algorithm in the appendix.

As discussed above, we can also apply the interpolation algorithm in game imitation learning, by plugging it into the GAIL (Ho & Ermon, 2016) framework to replace the standard RL routine for the policy update. The annealing schedule in this setting is constrained due to the agent's interaction with the environment. Specifically, to generate a trajectory (a sequence of actions and states), we sample the beginning part from data (demonstrations), followed by sampling from either the model or the reward. Note that data sampling can happen only before model/reward sampling, because the latter will interact with the environment and result in states that do not necessarily match the data. Similar to sequence generation, we gradually anneal from data sampling to model/reward sampling, and hence increase the exploration until converging to standard RL. Our experiments validate that the easy-to-hard training is superior to the vanilla GAIL which directly applies the hard RL update from the beginning.

It is notable that Ranzato et al. (2016) also developed an annealing strategy that mixes MLE and policy gradient training. The strategy is essentially the same as the one we apply in the GAIL learning setting. That is, the annealing approach of Ranzato et al. (2016) is a specialized case of the above more general annealing, using restricted values of (λ1, λ2, λ3) and discrete changes. We provide more discussion in the appendix. The experiment results in Section 5 show that our generalized annealing performs better than the restricted approach (Ranzato et al., 2016).

Table 1: Machine translation results (5-run average ± std dev). See the text for more details.

Model                 BLEU
MLE                   31.99 ± 0.17
RAML                  32.51 ± 0.37
MIXER                 32.69 ± 0.09
MIXER-alike Anneal    32.65 ± 0.11
Self-critic           32.23 ± 0.15
SS                    32.13 ± 0.14
Ours                  33.35 ± 0.08

Table 2: Text summarization results (5-run average ± std dev).

Method        ROUGE-1        ROUGE-2        ROUGE-L
MLE           36.11 ± 0.21   16.39 ± 0.16   32.32 ± 0.19
RAML          36.30 ± 0.04   16.69 ± 0.20   32.49 ± 0.17
Self-critic   36.48 ± 0.24   16.84 ± 0.26   32.79 ± 0.26
SS            36.59 ± 0.12   16.79 ± 0.22   32.77 ± 0.17
Ours          36.72 ± 0.29   16.99 ± 0.17   32.95 ± 0.33

5 EXPERIMENTS

We evaluate the interpolation algorithm in the context of both text generation and game imitation learning. Experiments are run with 4 GTX 2080Ti GPUs and 32GB RAM. The link to the code is provided in the submission. We will release the code upon acceptance.

5.1 MACHINE TRANSLATION

We use the state-of-the-art neural architecture Transformer (Vaswani et al., 2017) as the base model. The model has 6 blocks, trained with an Adam optimizer with an initial learning rate of 0.001 and the same schedule as in (Vaswani et al., 2017). The batch size is 1,792 tokens. At test time, we use beam search decoding with a beam width of 5 and length penalty 0.6. We use the popular IWSLT2014 (Cettolo et al., 2014) German-English dataset. After proper pre-processing as described in the appendix, we obtain the final dataset with train/dev/test sizes of around 146K/7K/7K, respectively.
The shared de-en vocabulary is of size 73,197 without BPE encoding. Table 1 shows the test-set BLEU scores of the various methods. Besides MLE, RAML, and MIXER (Ranzato et al., 2016) as discussed above, we also compare with other existing approaches such as Scheduled Sampling (SS) (Bengio et al., 2015) and Self-critic (Rennie et al., 2017). (We did not compare with SPG (Ding & Soricut, 2017) as no public code is available.) From the table, we can see that the various approaches provide improved performance over vanilla MLE, as more exploration is performed at training time. Our interpolation algorithm performs best, with a significant improvement over MLE training of 1.36 BLEU points. The results validate that our approach of interpolating among the existing algorithms offers beneficial scheduled training.

To further study the effect of our generalized annealing versus the MIXER strategy, we compare with “MIXER-alike Anneal,” which uses the same configuration as our interpolation algorithm, except that the annealing is restricted as in MIXER. That is, the first portion of tokens in a sequence are all sampled from the data, while the subsequent tokens are sampled from only the model or the task reward. We see that the proposed, more generalized annealing is superior to the restricted version. We note that there is other work exploring various network architectures for machine translation (Shankar & Sarawagi, 2019; He et al., 2018), which is orthogonal and complementary to the learning algorithms. It would be interesting to explore the effect of combining the approaches.

5.2 TEXT SUMMARIZATION

We use an attentional sequence-to-sequence model (Luong et al., 2015) where both the encoder and decoder are single-layer LSTM RNNs. The dimensions of the word embedding, RNN hidden state, and attention are all set to 256. We use Adam optimization with an initial learning rate of 0.001 and a batch size of 64. At test time we use beam search decoding with a beam width of 5. Please see the appendix for more configuration details. We use the popular English Gigaword corpus (Graff et al., 2003) for text summarization, and pre-processed the data following (Rush et al., 2015). The resulting dataset consists of 200K/8K/2K source-target pairs in the train/dev/test sets, respectively. Following previous work (Ding & Soricut, 2017), we use the sum of the three ROUGE(-1, -2, -L) metrics as the reward in learning.

Table 2 shows the results on the test set. The proposed interpolation algorithm achieves the best performance on all three metrics. The RAML algorithm, which performed well in machine translation, falls behind other algorithms in text summarization. In contrast, our method consistently provides the best results.

5.3 GAME IMITATION LEARNING

We apply the interpolation algorithm in GAIL (Ho & Ermon, 2016) as described in Section 4. Following (Ho & Ermon, 2016), we simulate three environments with MuJoCo (Todorov et al., 2012). Expert demonstrations are generated by running PPO (Schulman et al., 2017b) under the given true reward functions. We then run different imitation learning algorithms with varying numbers of demonstrations. Both the policy and the discriminator are two-layer networks with 128 units each and tanh activations in between. Figure 3 shows the average returns of the agents. We can see that agents trained with the interpolation algorithm generally improve over vanilla GAIL, especially in the presence of a small number (e.g., 1 or 4) of demonstrations.
This shows that our approach of annealing from the MLE mode to the RL mode can make better use of data examples, and steadily achieves better performance in the end. We present the learning curves of the algorithms in the appendix.

6 CONCLUSIONS

We have presented a unifying perspective on a variety of learning algorithms for sequence prediction problems. The framework is based on a generalized entropy regularized policy optimization formulation, and we show that the distinct algorithms are equivalent to particular choices of the reward and weight hyperparameters. The new consistent treatment provides systematic understanding and comparison across the algorithms, and inspires further improved learning. The proposed interpolation algorithm shows consistent improvement in machine translation, text summarization, and game imitation learning.

A APPENDIX

A.1 POLICY GRADIENT & MIXER

Ranzato et al. (2016) made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm (Sutton et al., 2000). Policy gradient aims to maximize the expected reward:

L_PG(θ) = E_{pθ}[ R_PG(y|y*) ],    (9)

where R_PG is usually a common reward function (e.g., BLEU). Taking the gradient w.r.t. θ gives:

∇θ L_PG(θ) = E_{pθ}[ R_PG(y|y*) ∇θ log pθ(y) ].    (10)

We now reveal the relation between the ERPO framework we present and the policy gradient algorithm. Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (Section 3.1), we use pθ^n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notational simplicity):

E_q[ ∇θ log pθ(y) ] = E_{pθ}[ (q(y)/pθ(y)) ∇θ log pθ(y) ] = (1/Zθ) · E_{pθ}[ exp{R(y|y*)} · ∇θ log pθ(y) ],    (11)

where Zθ = ∫_y exp{log pθ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent. We can see that Eq.(11) recovers Eq.(10) if we further set R = log R_PG and omit the scaling factor Zθ. In other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log R_PG, α = 1, β = 0) and with Zθ omitted.

The MIXER algorithm (Ranzato et al., 2016) incorporates an annealing strategy that mixes MLE and policy gradient training. Specifically, given a ground-truth example y*, the first m tokens y*_{1:m} are used for evaluating the MLE loss, and starting from step m + 1, the policy gradient objective is used. The m value decreases as training proceeds. With the relation between policy gradient and ERPO established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (Section 4) that follows a restricted annealing strategy for the token-level hyperparameters (λ1, λ2, λ3). That is, for t < m in Eq.(4) (i.e., the first m steps), (λ1, λ2, λ3) is set to (0, 0, 1) and c = 1, namely MLE training; while for t > m, (λ1, λ2, λ3) is set to (0.5, 0.5, 0) and c = 2.

A.2 INTERPOLATION ALGORITHM

Algorithm 1 summarizes the interpolation algorithm described in Section 4.

A.3 EXPERIMENTAL SETTINGS

A.3.1 DATA PRE-PROCESSING

For the machine translation dataset, we follow (Ma et al., 2017) for data pre-processing. In text summarization, we sampled 200K out of the 3.8M pre-processed training examples provided by (Rush et al., 2015) for the sake of training efficiency. We used the refined validation and test sets provided by (Zhou et al., 2017). In the game imitation learning task, we randomly sample 50 state-action pairs in each trajectory as demonstrations.
In every training iteration, we collect at least 2,048 state-action pairs, and we train for 1,000 iterations for every model in every environment.

Algorithm 1 Interpolation Algorithm
1: Initialize model parameters θ and weights λ = (λ1, λ2, λ3)
2: repeat
3:   Get training example y*
4:   for t = 0, 1, . . . , T do
5:     Sample z ∈ {1, 2, 3} ∼ (λ1, λ2, λ3)
6:     if z = 1 then
7:       Sample token yt ∼ exp{c · log pθ(yt | y1:t−1)}
8:     else if z = 2 then
9:       Sample token yt ∼ exp{c · ∆R(yt | y1:t−1, y*)}
10:    else
11:      Sample token yt ∼ exp{c · ∆Rδ}, i.e., set yt = y*_t
12:    end if
13:  end for
14:  Update θ by maximizing the log-likelihood log pθ(y)
15:  Anneal λ by increasing λ1 and λ2 and decreasing λ3
16: until convergence

A.3.2 ALGORITHM SETUP

For RAML (Norouzi et al., 2016), we use the sampling approach (n-gram replacement) of (Ma et al., 2017) to sample from the exponentiated reward distribution. For each training example we draw 6 and 10 samples in the machine translation and text summarization tasks, respectively. For Scheduled Sampling (SS) (Bengio et al., 2015), we tested various annealing schedules and report the best-performing one, namely inverse-sigmoid decay. The probability of sampling from the model at step i is k/(k + exp(i/k)), where k is a hyperparameter controlling the speed of convergence, set to 4000 and 600 in the machine translation and text summarization tasks, respectively. We would like to note that SS does not fit into our formulation because, in SS, model-generated tokens are only used as model inputs instead of as the targets whose likelihood is maximized. For example, at time step t, even though the token ŷt generated by the model is used as an input to the next step, the loss associated with step t is still log pθ(y*_t | prev tokens), where y*_t is the true token. This differs from our formulation, which maximizes the likelihood of ŷt.

For the proposed interpolation algorithm, after MLE pre-training, we initialize the weights as (λ1, λ2, λ3) = (0.12, 0.16, 0.72). Every 4 epochs, we increase λ1 by 0.12 and λ2 by 0.16 while decreasing λ3 by 0.28. We did MLE pre-training for all comparison methods for the same number of steps. We found pre-training is necessary for Self-critic, and is helpful for RAML and SS.

A.3.3 LEARNING CURVES OF GAIL EXPERIMENTS

Figure 4 presents the learning curves of the different algorithms in the GAIL experiments.
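For concreteness, the token-level sampling loop of Algorithm 1 can be sketched in code as follows. This is a toy illustration only: the model, reward, and vocabulary are random stand-ins (a real implementation would use the sequence model's conditional distribution and a task metric such as BLEU for ∆R), and none of the names below come from the authors' released code.

```python
# Hypothetical sketch of one training step of Algorithm 1 (mixed token sampling).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 5

def sample_from_energy(energy):
    """Sample an index proportionally to exp(energy)."""
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

def build_training_sequence(y_star, log_p_model, delta_r_task, lambdas, c=1.0):
    """Sample each token from the model, the task reward, or the data (δ-reward)."""
    y = []
    for t in range(len(y_star)):
        z = rng.choice(3, p=lambdas)
        if z == 0:                      # sample from the model p_θ
            y.append(sample_from_energy(c * log_p_model[t]))
        elif z == 1:                    # sample from the task reward ΔR
            y.append(sample_from_energy(c * delta_r_task[t]))
        else:                           # δ-reward: copy the ground-truth token
            y.append(y_star[t])
    return y  # θ would then be updated by maximizing log p_θ(y)

y_star = [1, 3, 0, 2]
log_p_model = rng.normal(size=(len(y_star), VOCAB))    # toy log p_θ(·|y_{1:t-1})
delta_r_task = rng.normal(size=(len(y_star), VOCAB))   # toy ΔR(·|y_{1:t-1}, y*)
print(build_training_sequence(y_star, log_p_model, delta_r_task,
                              lambdas=(0.12, 0.16, 0.72)))
```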
1. What is the main contribution of the paper on policy optimization?
2. What are the strengths and weaknesses of the proposed optimization framework?
3. How does the reviewer assess the novelty and effectiveness of the interpolation algorithm?
4. Are there any concerns regarding the annealing mechanism and its effectiveness?
5. Does the paper successfully connect the dots between MLE and RL?
Review
This paper claims to propose a general entropy regularized policy optimization paradigm, with MLE and RL as special cases of this training paradigm. The paper is well written, and the experimental results are convincing enough. However, there are still some minor problems in the paper.

For the optimization framework ERPO (shown in Equation 1), it consists of three parts: a cross-entropy term (Shannon entropy), a $p, q$ KL divergence term, and a reinforcement learning reward term. From the framework point of view, it does not look like the general optimization framework the authors claim, one that subsumes various optimization algorithms; instead, it is just a combined loss obtained through weight control and the selection of corresponding functions. It may not truly unify various types of optimization algorithms in the general case, let alone justify the claim that this is a general optimization algorithm framework.

For the interpolation algorithm (which I regard as the true technical contribution of this paper), the authors use an annealing mechanism to apply different weights and functions at different stages of training. The essence is that, after MLE pre-training, different optimization algorithms are used at different stages, and this should be the focus of the article. The annealing settings used are only briefly introduced in the appendix. Without more comparison experiments, we cannot clearly determine the conditions under which the annealing algorithm is effective or ineffective.

Regarding the title of connecting the dots between MLE and RL, the paper does not quite do so: MLE and RL are only used collaboratively, and this has also been mentioned in previous work.

Typo, page 6, paragraph “Other Algorithms & Discussions”: “We We show in the appendix…” -> “We show in the appendix…”
ICLR
Title Connecting the Dots Between MLE and RL for Sequence Prediction Abstract Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters. The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design. The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach. 1 INTRODUCTION Sequence prediction problem is ubiquitous in many applications, such as generating a sequence of words for machine translation (Wu et al., 2016; Sutskever et al., 2014), text summarization (Hovy & Lin, 1998; Rush et al., 2015), and image captioning (Vinyals et al., 2015; Karpathy & Fei-Fei, 2015), or taking a sequence of actions to complete a task. In these problems (e.g., Mnih et al., 2015; Ho & Ermon, 2016), we are often given a set of sequence examples, from which we want to learn a model that sequentially makes the next prediction (e.g., generating the next token) given the current state (e.g., the previous tokens). A standard training algorithm is based on supervised learning which seeks to maximize the loglikelihood of example sequences (i.e., maximum likelihood estimation, MLE). Despite the computational simplicity and efficiency, MLE training can suffer from compounding error (Ranzato et al., 2016; Ross & Bagnell, 2010) in that mistakes at test time accumulate along the way and lead to states far from the training data. Another line of approaches overcome the training/test discrepancy issue by resorting to the reinforcement learning (RL) techniques (Ranzato et al., 2016; Bahdanau et al., 2017; Rennie et al., 2017). For example, Ranzato et al. (2016) used policy gradient (Sutton et al., 2000) to train a text generation model with the task metric (e.g., BLEU) as reward. However, RL-based approaches can face challenges of prohibitively poor sample efficiency and high variance. To this end, a diverse set of methods has been developed that is in a middle ground between the two paradigms of MLE and RL. For example, RAML (Norouzi et al., 2016) adds reward-aware perturbation to the MLE data examples; SPG (Ding & Soricut, 2017) leverages reward distribution for effective sampling of policy gradient. Other approaches such as data noising (Xie et al., 2017) also show improved results. In this paper, we establish a unifying perspective of the above distinct learning algorithms. 
Specifically, we present a generalized entropy regularized policy optimization framework, and show that the diverse algorithms, such as MLE, RAML, data noising, and SPG, can all be re-formulated as special cases of the framework, with the only difference being the choice of reward and the values of two weight hyperparameters (Figure 1). In particular, we show MLE is equivalent to using a RAML Delta-function reward which returns 1 to model samples that match training examples exactly, and −∞ to any other samples. Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding brittle prediction behaviors. Other algorithms essentially use various locally-relaxed rewards, joint with the model distribution, for broader (and more costly) exploration during training. Besides the new views of the existing algorithms, the unifying perspective also leads to new algorithms for improved learning. We develop interpolation algorithm, which, as training proceeds, gradually expands the exploration space by annealing both the reward function and the weight hyperparameters. The annealing in effect dynamically interpolates among the existing algorithms from left to right in Figure 1. We conduct experiments on the tasks of text generation including machine translation and text summarization, and game imitation learning. The interpolation algorithm shows superior performance over various previous methods. 2 RELATED WORK Given a set of data examples, sequence prediction models are usually trained to maximize the loglikelihood of the next label (token, action) conditioning on the current state observed in the data. Reinforcement learning (RL) addresses the discrepancy between training and test by also using models’ own predictions at training time. Various RL approaches have been applied for sequence generation, such as policy gradient (Ranzato et al., 2016) and actor-critic (Bahdanau et al., 2017). Reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) is an algorithm in between MLE and policy gradient. Mathematically, RAML shows that MLE and maximum-entropy policy gradient are respectively minimizing KL divergences in opposite directions. Koyamada et al. (2018) thus propose to use the more general α-divergence as a combination of the two paradigms. Our framework is developed in a different perspective, reformulates a different and more comprehensive set of algorithms, and leads to new insights in terms of exploration and learning efficiency of the various algorithms. Besides the algorithms discussed in the paper, there are other learning methods for sequence models. For example, Hal Daumé et al. (2009); Leblond et al. (2018); Wiseman & Rush (2016) use a learning-to-search paradigm for sequence generation or structured prediction. Scheduled Sampling (Bengio et al., 2015) and variants (Zhang et al., 2019) adapt MLE by randomly replacing ground-truth tokens with model predictions as the input for decoding the next-step token. Policy optimization for reinforcement learning is studied extensively in robotic and game environment. For example, Peters et al. (2010) introduce a relative entropy regularization to reduce information loss during learning. Schulman et al. (2015) develop a trust-region approach for monotonic improvement. Dayan & Hinton (1997); Levine (2018); Abdolmaleki et al. (2018) study the policy optimization algorithms in a probabilistic inference perspective. Zhu et al. 
(2018) combine imitation learning with RL, whose approach is orthogonal to ours and can be plugged into our framework to incorporate imitation reward. The entropy-regularized policy optimization formulation presented here can be seen as a generalization of many of the previous policy optimization methods. Besides, we formulate the framework primarily in the sequence generation context. 3 CONNECTING THE DOTS We first present a generalized formalism of entropy regularized policy optimization. The formulation contains a reward function and two weight hyperparameters that define the learning procedure. Therefore, varying the values of the reward and weights result in a large space of algorithms. We show that several existing popular algorithms, which were originally proposed in distinct perspectives, can all be seen as members in the space. In particular, we reformulate the MLE algorithm in the same policy optimization form, which enables side-by-side comparison between the broad spectrum of algorithms. The resulting unifying view provides new insights into the exploration and computation efficiency, and creates improved learning approaches for sequence prediction. For clarity, we present the framework in the sequence generation context. The formulations can straightforwardly be extended to other settings such as imitation learning in robotic and game environments, as discussed briefly at the end of this section and also shown in the experiment. We first establish the basic notations. Let y = (y1, . . . , yT ) be the sequence of T tokens. Let y∗ be a training example drawn from the empirical data distribution. From the sequence examples, we aim to learn a sequence generation model pθ(y) = ∏ t pθ(yt|y1:t−1) with parameters θ. Note that generation of y can condition on other factors. For example, in machine translation, y is the sentence in target language and depends on an input sentence in source language. For simplicity of notations, we omit the conditioning factors. 3.1 ENTROPY REGULARIZED POLICY OPTIMIZATION (ERPO) Policy optimization is a family of reinforcement learning (RL) algorithms. Assume a reward functionR(y|y∗) ∈ R that evaluates the quality of generation y against the true y∗. For example, BLEU score (Papineni et al., 2002) can be a reward in machine translation. The general goal of policy optimization is to learn the model pθ(y) (a.k.a policy)1 to maximize the expected reward. Previous work develops entropy regularized approaches, which augment the objective with information theoretic regularizers for stabilized training. We present a generalized variational formulation of ERPO, which, as we show shortly, has the power of subsuming an array of other popular algorithms. Specifically, we introduce a non-parametric variational distribution q(y) w.r.t the model pθ(y). The objective to maximize is as follows: L(q,θ) = Eq [R(y|y∗)]− αKL ( q(y)‖pθ(y) ) + βH(q), (1) where KL(·‖·) is the Kullback–Leibler divergence forcing q to stay close to pθ; H(·) is the Shannon entropy imposing maximum entropy assumption on q; and α and β are balancing weights of the respective terms. Intuitively, the objective is to maximize the expected reward under the variational distribution q while minimizing the distance between q and the model pθ, with maximum entropy regularization on q. 
The above formulation is relevant to and can be seen as a variant of previous policy optimization approaches in RL literature, such as relative entropy policy search (Peters et al., 2010), maximum entropy policy gradient (Ziebart, 2010; Haarnoja et al., 2017), and other work where the variational distribution q is formulated either as a non-parametric distribution as ours (Abdolmaleki et al., 2018; Peters et al., 2010) or parametric one (Schulman et al., 2015; 2017a; Teh et al., 2017). The objective can be maximized with a standard EM procedure (Neal & Hinton, 1998) that iterates two coordinate ascent steps optimizing q and θ, respectively. At iteration n: E-step: qn+1(y) ∝ exp { α log pθn(y) +R(y|y∗) α+ β } , M-step: θn+1 = arg maxθ Eqn+1 [ log pθ(y) ] . (2) In the E-step, q has a closed-form solution, which is an energy-based distribution. We can have an intuitive interpretation of its form. First, it is clear to see that if α → ∞, we have qn+1 = pnθ . This is also reflected in the objective Eq.(1) where a larger weight α encourages q to be close to pθ. Second, the weight β serves as the temperature of the q softmax distribution. In particular, a large temperature β → ∞ makes q a uniform distribution, which is consistent with the outcome of an infinitely large maximum entropy regularization in Eq.(1). In the M-step, the update rule can be interpreted as maximizing the log-likelihood of samples from the distribution q. 1In the following, we will use the term “model” and “policy” exchangeably. Token-level Formulation In the context of sequence generation, it is sometimes more convenient to express the equations at token level (instead of the sequence level), as shown when we devise a new algorithm in the next section. To this end, we decompose R(y|y∗) along the time steps: R(y|y∗) = ∑ t R(y1:t|y∗)−R(y1:t−1|y∗) := ∑ t ∆R(yt|y1:t−1,y∗), (3) where ∆R(yt|y∗,y1:t−1) measures the reward contributed by token yt. The solution of q in Eq.(2) can then be re-written as: qn+1(y) ∝ ∏ t exp { α log pθn(yt|y1:t−1) + ∆R(yt|y1:t−1,y∗) α+ β } . (4) The Algorithm Space The above ERPO formalism includes three key components, namely, the reward R and the weight hyperparameters α and β > 0. Variation in these components can result in different procedures of updating the model. In other words, different algorithms in the ERPO family correspond to a point (or a region) in the space spanned by the three components. The following sections visit a set of existing approaches, and connect them to the unifying picture by reformulating their seemingly distinct objectives. Figure 1 illustrates the particular algorithms in the space, clustered by the exploration behavior in learning, of which we will discuss more. Softmax Policy Gradient (SPG) We first briefly discuss the previous RL algorithms for sequence prediction that fit in the ERPO formalism. SPG (Ding & Soricut, 2017) was originally developed in the perspective of combining the rewardR and policy pθ to improve sampling quality. The algorithm is equivalent to setting β = 0 and treating α > 0 as the temperature of the energy-based distribution q(y). That is, q(y) in the E-step of Eq.(2) is now in the form q(y) ∝ pθ(y) exp{R(y|y∗)/α}. The reward R is set to any normal task-specific reward. Note that sampling from q(y) (e.g., in the M-step) is typically difficult due to its energy-based form and the fact that the task reward R often does not have particular structures amenable for sampling. 
We will see in the next section that the MLE algorithm in contrast uses a special reward to avoid the computational difficulty in sampling, at the cost of restricted exploration during training. We also note the previous work of Sequence Tutor (Jaques et al., 2017), which was motivated by the idea of using an MLE-trained policy as a prior to guide the learning of the target policy in an RL framework. The formalism closely resembles SPG, namely (α > 0, β = 0), with the exception that the variational distribution q(y) in Sequence Tutor is a parameterized model instead of a nonparametric one as in SPG and our more general ERPO formulation. 3.2 MLE AS A SPECIAL CASE OF ERPO In this section, we connect the maximum likelihood estimation (MLE) algorithm to the unifying ERPO formalism. Based on the connections, we are able to analyze the learning behavior of MLE from the reinforcement learning perspective in terms of exploration efficiency. We also discuss some well-known variants of the vanilla MLE algorithm, such as RAML and data augmentation. Due to its simplicity and efficiency, MLE is among the most widely-used approaches in learning sequence generation. It finds the optimal parameter value that maximizes the data log-likelihood: θ∗ = arg maxθ LMLE(θ) = arg maxθ log pθ(y ∗). (5) We show that the MLE objective can be recovered from Eq.(2) with a specialized reward and weight values. More concretely, consider a δ-function reward defined as2: Rδ(y|y∗) = { 1 if y = y∗ −∞ otherwise. (6) That is, a sample y receives a valid unit reward only when it matches exactly with the true data, and receives a negative infinite reward in all other cases. We show that the MLE algorithm is a member of the ERPO family. In particular, the conventional MLE objective is equivalent to setting the ERPO components to (R = Rδ, α→ 0, β = 1). This can 2For token-level, define Rδ(y1:t|y∗) = t/T ∗ if y1:t = y∗1:t and −∞ otherwise, where T ∗ is the length of y∗. Note that the Rδ value of y = y∗ can also be set to any constant larger than −∞. be straightforwardly seen by noting that, with the configuration, the q(y) in E-step (Eq.2) reduces to q(y) = 1 if y = y∗ and 0 otherwise. The M-step is thus in effect maximizing the log-likelihood of the real data examples (Note that the very small α is still > 0, making the M-step for maximizing the objective Eq.(1) valid and necessary). With the δ-rewardRδ , any sample y that fails to match the given data y∗ exactly will get a negative infinite reward and thus never contribute to model learning. Exploration Efficiency Reformulating MLE in the unifying ERPO form enables us to directly compare the approach with other RL algorithms. Specifically, the δ-reward has permitted only samples that match training examples, and made invalid any exploration beyond the small set of training data (Figure 2(a)). The extremely restricted exploration at training time results in a brittle model that can easily encounter unseen states and make mistakes in prediction. On the other hand, however, a major advantage of the δ-reward is that it defines a distribution over the sequence space such that sampling from the distribution is reduced to simply picking an instance from the training set. The resulting samples are ensured to have high quality. This makes the MLE implementation very simple and the computation efficient in practice. 
On the contrary, task-specific rewards (such as BLEU) used in standard policy optimization are more diffused than the δ-reward, and thus allow exploration in a broader space with valid reward signals. However, the diffused rewards often do not lead to a distribution that is amenable for sampling as above. The model distribution is thus instead used to propose samples, which in turn can yield low-quality (i.e., low-reward) samples especially due to the huge sequence space. This makes the exploration inefficient or even impractical. Given the opposite behaviors of the algorithms in terms of exploration and computation efficiency, it is a natural idea to seek a middle ground between the two extremes in order to combine the advantages of both. Previous work has proposed variants of the vanilla MLE from different perspectives. We re-visit some of the popular approaches, and show that they can also be canonicalized in the ERPO framework and enrich our understanding of the learning behaviors. Data Noising Adding noise to training data is a widely adopted model regularizing technique. Previous work (e.g., Xie et al., 2017) has proposed several data noising strategies in the sequence generation context, such as replacing subsets of tokens with other random words. The resulting noisy data is then used in MLE training. Though previous literature has commonly seen such techniques as a data pre-processing step, we show that the approach can be expressed in the generalized ERPO formulation. Specifically, data noising can be seen as using a locally relaxed variant of the δ-reward: Rnoiseδ (y|y∗) = { 1 if y = g(y∗), −∞ otherwise, (7) where g denotes any transformation operation that returns a new sample as a noisy version of the input raw data y∗. With the relaxed reward, data noising locally expands the exploration surrounding the observed training examples (Figure 2(b)). The added exploration at training time can yield a model that is more robust to error at test time. Reward-Augmented Maximum Likelihood (RAML) RAML (Norouzi et al., 2016) was originally proposed to incorporate task-specific metrics into the MLE training. Formally, it introduces an exponentiated reward distribution e(y|y∗) ∝ exp{R(y|y∗)/τ}, where R is a task reward and τ > 0 is the temperature. The conventional RAML objective is written as: LRAML(θ) = Ey∼e(y|y∗) [ log pθ(y) ] . (8) That is, unlike MLE that directly maximizes the data log-likelihood, RAML first perturbs the data proportionally to the reward distribution, and maximizes the log-likelihood of the resulting samples. Similar to how we map MLE to the ERPO formalism, we can align RAML with the unifying form by setting α → 0, β to the temperature τ , and R to the task reward. Compared to the vanilla MLE, the key feature of RAML is the use of task reward instead of the δ-reward, which permits a larger exploration space surrounding the training examples. On the other hand, same as in SPG (section 3.1), sampling from the energy-based distribution with a diffused reward tends to be difficult, and often requires specialized approximations for computational efficiency (e.g., Ma et al., 2017). Other Algorithms & Discussions The classic policy gradient algorithm (Sutton et al., 2000) has also been used for sequence prediction (e.g., Ranzato et al., 2016). We We show in the appendix that the approach can also be connected to the unifying ERPO with moderate approximations. Ranzato et al. (2016) also proposed a mixing training strategy that anneals from MLE training to policy optimization. 
We show in the next section that the particular annealing scheme is a special case of the new, more general interpolation algorithm below. We have presented the framework in the context of sequence generation. The formulation can also be extended to other settings. For example, in game environments, y is a sequence of actions and states. The popular imitation learning method GAIL (Ho & Ermon, 2016) uses an adversarially induced reward R from data, and applies standard RL updates to train the policy. The policy update part can be formulated with our framework as standard policy optimization (with α > 0, β = 0). The new interpolation algorithm described in the next section can also be applied to improve the vanilla GAIL, as shown in the experiments. Previous work has also studied connections of relevant algorithms. For example, Norouzi et al. (2016); Koyamada et al. (2018) formulate MLE and policy gradient as minimizing the opposite KL divergences between the model and data/reward distributions. Misra et al. (2018) studied an update equation generalizing maximum marginal likelihood and policy gradient. Our framework differs in that we reformulate a different and more comprehensive set of algorithms for sequence prediction, and provide new insights in terms of exploration and its efficiency, which could not be derived from the previous work. Section 2 discusses more related work on sequence prediction learning. 4 INTERPOLATION ALGORITHM The unifying perspective also leads to new algorithms for improved learning. Here, we present an example algorithm that is naturally inspired by the framework. As in Figure 1, each of the learning algorithms can be seen as a point in the (R,α, β) space. Generally, from left to right, the reward gets more diffused and α gets larger, which results in larger sequence space exposed to model training (Figure 2). More exploration in turn also makes the training less efficient due to lower sample quality. We propose an interpolation algorithm with the natural idea of starting learning from the most restricted yet efficient algorithm configuration, and gradually expanding the exploration to decrease the training/test discrepancy. The easy-to-hard learning paradigm resembles the curriculum learning (Bengio et al., 2009). As we have mapped the algorithms to the points in the hyperparameter space, the interpolation becomes straightforward, which reduces to simple annealing of the hyperparameter values. Specifically, during training, we would like to anneal from using the restricted δ-reward Rδ to using task reward, and anneal from sampling (exploring) by only the reward R to sampling by both R and pθ. Since Rδ is a δ-function which would make direct function linear combination problematic, we implement the interpolation strategy in the update rule (Eq.2) and use log-sum-exp for mixing. Formally, let Rtask denote a task reward. The negative energy of q(y) in Eq.(2) (i.e., the exponent inside exp{·}) is now replaced with the interpolated term: log(λ1pθ+λ2 exp{Rtask}+λ3 exp{Rδ}). Note that we have re-organized the weight hyperparameters and used the distribution (λ1, λ2, λ3) to carry out the calibration role of (α, β). In particular, as training proceeds, we gradually increase λ1 and λ2 and decrease λ3. 
The formulation of interpolation in effect converts the energy-based model q(y) to a mixture of experts, which makes the sampling from q(y) easier, and resembles the Model BLEU MLE 31.99± 0.17 RAML 32.51± 0.37 MIXER 32.69± 0.09 MIXER-alike Anneal 32.65± 0.11 Self-critic 32.23± 0.15 SS 32.13± 0.14 Ours 33.35± 0.08 Table 1: Machine translation results (5-run average ± std dev). See the text for more details. Method ROUGE-1 ROUGE-2 ROUGE-L MLE 36.11± 0.21 16.39± 0.16 32.32± 0.19 RAML 36.30± 0.04 16.69± 0.20 32.49± 0.17 Self-critic 36.48± 0.24 16.84± 0.26 32.79± 0.26 SS 36.59± 0.12 16.79± 0.22 32.77± 0.17 Ours 36.72± 0.29 16.99± 0.17 32.95± 0.33 Table 2: Text summarization results (5-run average± std dev). bang-bang rewarded SPG method as described in (Ding & Soricut, 2017). Besides, similar to (Ding & Soricut, 2017), we adopt the token-level formulation (Eq.4), so that tokens in a sequence can be sampled from different components (i.e., pθ, Rtask, and Rδ) in a mixed way. We provide the pseudo-code of the interpolation algorithm in the appendix. As discussed above, we can also apply the interpolation algorithm in game imitation learning, by plugging it into the GAIL (Ho & Ermon, 2016) framework to replace the standard RL routine for policy update. The annealing schedule in this setting is constrained due to the agent interaction with the environment. Specifically, to generate a trajectory (a sequence of actions and states), we sample the beginning part from data (demonstrations), followed by sampling from either the model or reward. Note that data sampling can happen only before model/reward sampling, because the latter will interact with the environment and result in states that do not necessarily match the data. Similar to sequence generation, we gradually anneal from data sampling to model/reward sampling, and hence increase the exploration until converging to standard RL. Our experiments validate that the easy-to-hard training is superior to the vanilla GAIL which directly applies the hard RL update from the beginning. It is notable that (Ranzato et al., 2016) also developed an annealing strategy that mixes MLE and policy gradient training. The strategy is essentially the same as the one we apply in the GAIL learning setting. That is, the annealing approach of (Ranzato et al., 2016) is a specialized case of the above more general annealing, using restricted values of (λ1, λ2, λ3) and discrete changes. We provide more discussions in the appendix. The experiment results in section 5 show that our generalized annealing performs better than the restricted approach (Ranzato et al., 2016). 5 EXPERIMENTS We evaluate the interpolation algorithm in the context of both text generation and game imitation learning. Experiments are run with 4 GTX 2080Ti GPUs and 32GB RAM. The link to the code is provided in the submission. We will release the code upon acceptance. 5.1 MACHINE TRANSLATION We use the state-of-the-art neural architecture Transformer (Vaswani et al., 2017) as the base model. The model has 6 blocks, trained with an Adam optimizer with an initial learning rate of 0.001 and the same schedule as in (Vaswani et al., 2017). Batch size is 1,792 tokens. At test time, we use beam search decoding with a beam width of 5 and length penalty 0.6. We use the popular IWSLT2014 (Cettolo et al., 2014) German-English dataset. After proper pre-processing as described in the appendix, we obtain the final dataset with train/dev/test size of around 146K/7K/7K, respectively. 
The shared de-en vocabulary is of size 73,197 without BPE encoding. Table 1 shows the test-set BLEU scores of various methods. Besides MLE, RAML, and MIXER (Ranzato et al., 2016) as discussed above, we also compare with other existing approaches such as Scheduled Sampling (SS) (Bengio et al., 2015) and Self-critic (Rennie et al., 2017). (We did not compare with SPG (Ding & Soricut, 2017) as no public code is available.) From the table, we can see the various approaches provide improved performance over the vanilla MLE, as more sufficient exploration is made at training time. Our interpolation algorithm performs best, with significant improvement over the MLE training by 1.36 BLEU points. The results validate our approach that interpolates among the existing algorithms offers beneficial scheduled training. To further study the effect of our generalized annealing versus the MIXER strategy, we compare with “MIXER-alike Aneal” which uses the same configuration with our interpolation algorithm, except that the annealing is restricted like MIXER. That is, the first portion of tokens in a sequence are all sampled from the data, while the subsequent tokens are sampled from only the model or the task reward. We see that the proposed more generalized annealing is superior to the restricted version. We note that there is other work exploring various network architectures for machine translation (Shankar & Sarawagi, 2019; He et al., 2018), which is orthogonal and complementary to the learning algorithms. It would be interesting to explore the effect of combining the approaches. 5.2 TEXT SUMMARIZATION We use an attentional sequence-to-sequence model (Luong et al., 2015) where both the encoder and decoder are single-layer LSTM RNN. The dimensions of word embedding, RNN hidden state, and attention are all set to 256. We use Adam optimization with an initial learning rate of 0.001 and a batch size of 64. Test time uses beam search decoding with a beam width of 5. Please see the appendix for more configuration details. We use the popular English Gigaword corpus (Graff et al., 2003) for text summarization, and pre-processed the data following (Rush et al., 2015). The resulting dataset consists of 200K/8K/2K source-target pairs in train/dev/test sets, respectively. Following previous work (Ding & Soricut, 2017), we use the summation of the three ROUGE(-1, -2, -L) metrics as the reward in learning. Table 2 show the results on the test set. The proposed interpolation algorithm achieves the best performance on all three metrics. The RAML algorithm, which performed well in machine translation, falls behind other algorithms in text summarization. In contrast, our method consistently provides the best results. 5.3 GAME IMITATION LEARNING We apply the interpolation algorithm in GAIL (Ho & Ermon, 2016) as described in section 4. Following (Ho & Ermon, 2016), we simulate three environments with MuJoCo (Todorov et al., 2012). Expert demonstrations are generated by running PPO (Schulman et al., 2017b) under the given true reward functions. We then run different imitation learning algorithms with varying numbers of demonstrations. Both the policy and the discriminator are two-layer networks with 128 units each and tanh activations in between. Figure 3 shows the average returns by the agents. We can see that agents trained with the interpolation algorithm can generally improve over the vanilla GAIL, especially in the presence of small number (e.g., 1 or 4) of demonstrations. 
This shows that our approach that anneals from the MLE mode to RL mode can make better use of data examples, and steadily achieve better performance in the end. We present the learning curves of the algorithms in the appendix. 6 CONCLUSIONS We have presented a unifying perspective of a variety of learning algorithms for sequence prediction problems. The framework is based on a generalized entropy regularized policy optimization formulation, and we show the distinct algorithms are equivalent to specifying the reward and weight hyperparameters. The new consistent treatment provides systematic understanding and comparison across the algorithms, and inspires further improved learning. The proposed interpolation algorithm shows consistent improvement in machine translation, text summarization, and game imitation learning. A APPENDIX A.1 POLICY GRADIENT & MIXER Ranzato et al. (2016) made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm (Sutton et al., 2000). Policy gradient aims to maximizes the expected reward: LPG(θ) = Epθ [RPG(y|y ∗)] , (9) where RPG is usually a common reward function (e.g., BLEU). Taking gradient w.r.t θ gives: ∇θLPG(θ) = Epθ [RPG(y|y ∗)∇θ log pθ(y)] . (10) We now reveal the relation between the ERPO framework we present and the policy gradient algorithm. Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (section ??), we use pθn as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity): Eq [∇θ log pθ(y)] = Epθ [ q(y) pθ(y) ∇θ log pθ(y) ] = 1/Zθ · Epθ [ exp{R(y|y∗)} · ∇θ log pθ(y) ] , (11) where Zθ = ∫ y exp{log pθ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent. We can see that Eq.(11) recovers Eq.(10) if we further set R = logRPG, and omit the scaling factor Zθ . In other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = logRPG, α = 1, β = 0) and with Zθ omitted. The MIXER algorithm (Ranzato et al., 2016) incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically, given a ground-truth example y∗, the first m tokens y∗1:m are used for evaluating MLE loss, and starting from stepm+ 1, policy gradient objective is used. Them value decreases as training proceeds. With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for token-level hyperparameters (λ1, λ2, λ3). That is, for t < m in Eq.4 (i.e.,the first m steps), (λ1, λ2, λ3) is set to (0, 0, 1) and c = 1, namely the MLE training; while for t > m, (λ1, λ2, λ3) is set to (0.5, 0.5, 0) and c = 2. A.2 INTERPOLATION ALGORITHM Algorithm 1 summarizes the interpolation algorithm described in section 4. A.3 EXPERIMENTAL SETTINGS A.3.1 DATA PRE-PROCESSING For the machine translation dataset, we follow (Ma et al., 2017) for data pre-processing. In text summarization, we sampled 200K out of the 3.8M pre-processed training examples provided by (Rush et al., 2015) for the sake of training efficiency. We used the refined validation and test sets provided by (Zhou et al., 2017). In the game imitation learning task, we randomly sample 50 state-action pairs in each trajectory as demonstrations. 
Every training iteration, we collect at least 2,048 state-action pairs, and we train 1,000 iterations for every model in every environment. Algorithm 1 Interpolation Algorithm 1: Initialize model parameter θ and weights λ = (λ1, λ2, λ3) 2: repeat 3: Get training example y∗ 4: for t = 0, 1, . . . , T do 5: Sample z ∈ {1, 2, 3} ∼ (λ1, λ2, λ3) 6: if z = 1 then 7: Sample token yt ∼ exp{c · log pθ(yt|y1:t−1)} 8: else if z = 2 then 9: Sample token yt ∼ exp{c ·∆R(yt|y1:t−1,y∗)} 10: else 11: Sample token yt ∼ exp{c ·∆Rδ}, i.e., set yt = y∗t 12: end if 13: end for 14: Update θ by maximizing the log-likelihood log pθ(y) 15: Anneal λ by increasing λ1 and λ2 and decreasing λ3 16: until convergence A.3.2 ALGORITHM SETUP For RAML (Norouzi et al., 2016), we use the sampling approach (n-gram replacement) by (Ma et al., 2017) to sample from the exponentiated reward distribution. For each training example we draw 6 and 10 samples in machine translation and text summarization tasks, respectively. For Scheduled Sampling (SS) (Bengio et al., 2015), we tested various annealing schedules and report the best-performing one, namely inverse-sigmoid decay. The probability of sampling from model i = k/(k + exp (i/k)), where k is a hyperparameter controlling the speed of convergence, which is set to 4000 and 600 in the machine translation and text summarization tasks, respectively. We would like to note that SS does not fit into our formulation, because, in SS, model-generated tokens are only used as model inputs instead of the targets of which the likelihood is maximized. For example, at time step t, even though token ŷt generated by the model is used as an input to the next step, the loss associated with step t is still log pθ(y∗t |prev tokens) where y∗t is the true token. This differs from our formulation which maximizes the likelihood of ŷt. For the proposed interpolation algorithm, after MLE pre-training, we initialize the weights as (λ1, λ2, λ3) = (0.12, 0.16, 0.72). Every 4 epochs, we increase λ1 by 0.12 and λ2 by 0.16 while decreasing λ3 by 0.28. We did MLE pretraining for all comparison methods for the same number of steps. We found pretraining is necessary for Self-critic, and is helpful for RAML and SS. A.3.3 LEARNING CURVES OF GAIL EXPERIMENTS Figure 4 presents the learning curves of different algorithms in the GAIL experiments.
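As a rough illustration of Algorithm 1 above, the following Python sketch implements the token-level sampling loop under simplifying assumptions. The callables log_p_model and delta_task_reward and the plain-list vocabulary are hypothetical stand-ins for the trained model pθ(.|y_1:t−1) and the incremental task reward ΔR(.|y_1:t−1, y*); this is not the paper's released code.

```python
# A rough sketch of the token-level sampling loop of Algorithm 1. `log_p_model` and
# `delta_task_reward` are hypothetical callables standing in for the trained model and
# the incremental task reward; λ = (λ1, λ2, λ3) is annealed toward the model/reward
# experts over training, as described in A.3.2.
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores):
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def sample_target(y_star, vocab, log_p_model, delta_task_reward, lam, c=1.0):
    """Sample a training target y token by token by mixing the three experts."""
    y = []
    for t in range(len(y_star)):
        z = rng.choice(3, p=lam)                 # pick expert: 0=model, 1=task reward, 2=data
        if z == 0:                               # yt ~ exp{c * log pθ(yt | y_1:t-1)}
            scores = c * np.array([log_p_model(w, y) for w in vocab])
            y.append(vocab[rng.choice(len(vocab), p=softmax(scores))])
        elif z == 1:                             # yt ~ exp{c * ΔR(yt | y_1:t-1, y*)}
            scores = c * np.array([delta_task_reward(w, y, y_star) for w in vocab])
            y.append(vocab[rng.choice(len(vocab), p=softmax(scores))])
        else:                                    # δ-reward expert: copy the gold token y*_t
            y.append(y_star[t])
    return y   # θ is then updated by maximizing log pθ(y) on this sampled target
```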
1. What is the main contribution of the paper in the field of sequence modeling? 2. What are the strengths and weaknesses of the proposed unified view on training algorithms? 3. How does the reviewer assess the presentation and clarity of the paper's content? 4. What are the limitations of the experimental validation provided in the paper? 5. How does the reviewer evaluate the significance and impact of the proposed ERPO objective function?
Review
Review This submission belongs to the field of sequence modelling. In particular, it presents a unified view on a range of training algorithms including maximum likelihood (ML) and reinforcement learning (RL). I believe the unified view presented is interesting and could be of interest to a large community. Unfortunately, this submission has two issues: 1) presentation and 2) experimental validation. I find it peculiar that an objective function that features ML and variants of RL as special cases, called ERPO, is simply proposed by statement. I find it more likely that it came about by analysing ML, the variants of RL, and other commonly used objective functions, noticing similarities between them, and then formulating a function that would render all of the above as special cases. Had the presentation followed that order, this submission would have been much more analytical and interesting to read. I find the experimental results a bit limited and not entirely conclusive, as it seems that MT provides the only strong experimental evidence. I find it quite hard to interpret the significance of the difference, for instance, between 36.72 and 36.59 in ROUGE-1.
ICLR
Title Connecting the Dots Between MLE and RL for Sequence Prediction Abstract Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters. The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design. The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach. 1 INTRODUCTION Sequence prediction problem is ubiquitous in many applications, such as generating a sequence of words for machine translation (Wu et al., 2016; Sutskever et al., 2014), text summarization (Hovy & Lin, 1998; Rush et al., 2015), and image captioning (Vinyals et al., 2015; Karpathy & Fei-Fei, 2015), or taking a sequence of actions to complete a task. In these problems (e.g., Mnih et al., 2015; Ho & Ermon, 2016), we are often given a set of sequence examples, from which we want to learn a model that sequentially makes the next prediction (e.g., generating the next token) given the current state (e.g., the previous tokens). A standard training algorithm is based on supervised learning which seeks to maximize the loglikelihood of example sequences (i.e., maximum likelihood estimation, MLE). Despite the computational simplicity and efficiency, MLE training can suffer from compounding error (Ranzato et al., 2016; Ross & Bagnell, 2010) in that mistakes at test time accumulate along the way and lead to states far from the training data. Another line of approaches overcome the training/test discrepancy issue by resorting to the reinforcement learning (RL) techniques (Ranzato et al., 2016; Bahdanau et al., 2017; Rennie et al., 2017). For example, Ranzato et al. (2016) used policy gradient (Sutton et al., 2000) to train a text generation model with the task metric (e.g., BLEU) as reward. However, RL-based approaches can face challenges of prohibitively poor sample efficiency and high variance. To this end, a diverse set of methods has been developed that is in a middle ground between the two paradigms of MLE and RL. For example, RAML (Norouzi et al., 2016) adds reward-aware perturbation to the MLE data examples; SPG (Ding & Soricut, 2017) leverages reward distribution for effective sampling of policy gradient. Other approaches such as data noising (Xie et al., 2017) also show improved results. In this paper, we establish a unifying perspective of the above distinct learning algorithms. 
Specifically, we present a generalized entropy regularized policy optimization framework, and show that the diverse algorithms, such as MLE, RAML, data noising, and SPG, can all be re-formulated as special cases of the framework, with the only difference being the choice of reward and the values of two weight hyperparameters (Figure 1). In particular, we show MLE is equivalent to using a RAML Delta-function reward which returns 1 to model samples that match training examples exactly, and −∞ to any other samples. Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding brittle prediction behaviors. Other algorithms essentially use various locally-relaxed rewards, joint with the model distribution, for broader (and more costly) exploration during training. Besides the new views of the existing algorithms, the unifying perspective also leads to new algorithms for improved learning. We develop interpolation algorithm, which, as training proceeds, gradually expands the exploration space by annealing both the reward function and the weight hyperparameters. The annealing in effect dynamically interpolates among the existing algorithms from left to right in Figure 1. We conduct experiments on the tasks of text generation including machine translation and text summarization, and game imitation learning. The interpolation algorithm shows superior performance over various previous methods. 2 RELATED WORK Given a set of data examples, sequence prediction models are usually trained to maximize the loglikelihood of the next label (token, action) conditioning on the current state observed in the data. Reinforcement learning (RL) addresses the discrepancy between training and test by also using models’ own predictions at training time. Various RL approaches have been applied for sequence generation, such as policy gradient (Ranzato et al., 2016) and actor-critic (Bahdanau et al., 2017). Reward augmented maximum likelihood (RAML) (Norouzi et al., 2016) is an algorithm in between MLE and policy gradient. Mathematically, RAML shows that MLE and maximum-entropy policy gradient are respectively minimizing KL divergences in opposite directions. Koyamada et al. (2018) thus propose to use the more general α-divergence as a combination of the two paradigms. Our framework is developed in a different perspective, reformulates a different and more comprehensive set of algorithms, and leads to new insights in terms of exploration and learning efficiency of the various algorithms. Besides the algorithms discussed in the paper, there are other learning methods for sequence models. For example, Hal Daumé et al. (2009); Leblond et al. (2018); Wiseman & Rush (2016) use a learning-to-search paradigm for sequence generation or structured prediction. Scheduled Sampling (Bengio et al., 2015) and variants (Zhang et al., 2019) adapt MLE by randomly replacing ground-truth tokens with model predictions as the input for decoding the next-step token. Policy optimization for reinforcement learning is studied extensively in robotic and game environment. For example, Peters et al. (2010) introduce a relative entropy regularization to reduce information loss during learning. Schulman et al. (2015) develop a trust-region approach for monotonic improvement. Dayan & Hinton (1997); Levine (2018); Abdolmaleki et al. (2018) study the policy optimization algorithms in a probabilistic inference perspective. Zhu et al. 
(2018) combine imitation learning with RL, whose approach is orthogonal to ours and can be plugged into our framework to incorporate imitation reward. The entropy-regularized policy optimization formulation presented here can be seen as a generalization of many of the previous policy optimization methods. Besides, we formulate the framework primarily in the sequence generation context. 3 CONNECTING THE DOTS We first present a generalized formalism of entropy regularized policy optimization. The formulation contains a reward function and two weight hyperparameters that define the learning procedure. Therefore, varying the values of the reward and weights result in a large space of algorithms. We show that several existing popular algorithms, which were originally proposed in distinct perspectives, can all be seen as members in the space. In particular, we reformulate the MLE algorithm in the same policy optimization form, which enables side-by-side comparison between the broad spectrum of algorithms. The resulting unifying view provides new insights into the exploration and computation efficiency, and creates improved learning approaches for sequence prediction. For clarity, we present the framework in the sequence generation context. The formulations can straightforwardly be extended to other settings such as imitation learning in robotic and game environments, as discussed briefly at the end of this section and also shown in the experiment. We first establish the basic notations. Let y = (y1, . . . , yT ) be the sequence of T tokens. Let y∗ be a training example drawn from the empirical data distribution. From the sequence examples, we aim to learn a sequence generation model pθ(y) = ∏ t pθ(yt|y1:t−1) with parameters θ. Note that generation of y can condition on other factors. For example, in machine translation, y is the sentence in target language and depends on an input sentence in source language. For simplicity of notations, we omit the conditioning factors. 3.1 ENTROPY REGULARIZED POLICY OPTIMIZATION (ERPO) Policy optimization is a family of reinforcement learning (RL) algorithms. Assume a reward functionR(y|y∗) ∈ R that evaluates the quality of generation y against the true y∗. For example, BLEU score (Papineni et al., 2002) can be a reward in machine translation. The general goal of policy optimization is to learn the model pθ(y) (a.k.a policy)1 to maximize the expected reward. Previous work develops entropy regularized approaches, which augment the objective with information theoretic regularizers for stabilized training. We present a generalized variational formulation of ERPO, which, as we show shortly, has the power of subsuming an array of other popular algorithms. Specifically, we introduce a non-parametric variational distribution q(y) w.r.t the model pθ(y). The objective to maximize is as follows: L(q,θ) = Eq [R(y|y∗)]− αKL ( q(y)‖pθ(y) ) + βH(q), (1) where KL(·‖·) is the Kullback–Leibler divergence forcing q to stay close to pθ; H(·) is the Shannon entropy imposing maximum entropy assumption on q; and α and β are balancing weights of the respective terms. Intuitively, the objective is to maximize the expected reward under the variational distribution q while minimizing the distance between q and the model pθ, with maximum entropy regularization on q. 
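To make the objective in Eq.(1) concrete, here is a tiny, fully discrete numerical example; the candidate space, reward values, and distributions are arbitrary illustration numbers, not anything taken from the paper.

```python
# A tiny, fully discrete illustration of the ERPO objective in Eq.(1). The candidate
# space has three "sequences"; the numbers are arbitrary and chosen only so that the
# expectation, KL term, and entropy can be computed exactly.
import numpy as np

R       = np.array([1.0, 0.2, -0.5])      # R(y | y*) for each candidate y
p_theta = np.array([0.5, 0.3, 0.2])       # model distribution pθ(y)
q       = np.array([0.6, 0.3, 0.1])       # variational distribution q(y)
alpha, beta = 0.8, 0.2

expected_reward = np.sum(q * R)                         # E_q[R(y|y*)]
kl_q_p          = np.sum(q * np.log(q / p_theta))       # KL(q || pθ)
entropy_q       = -np.sum(q * np.log(q))                # H(q)
L_q_theta = expected_reward - alpha * kl_q_p + beta * entropy_q   # Eq.(1)
print(L_q_theta)
```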
The above formulation is relevant to and can be seen as a variant of previous policy optimization approaches in RL literature, such as relative entropy policy search (Peters et al., 2010), maximum entropy policy gradient (Ziebart, 2010; Haarnoja et al., 2017), and other work where the variational distribution q is formulated either as a non-parametric distribution as ours (Abdolmaleki et al., 2018; Peters et al., 2010) or parametric one (Schulman et al., 2015; 2017a; Teh et al., 2017). The objective can be maximized with a standard EM procedure (Neal & Hinton, 1998) that iterates two coordinate ascent steps optimizing q and θ, respectively. At iteration n: E-step: qn+1(y) ∝ exp { α log pθn(y) +R(y|y∗) α+ β } , M-step: θn+1 = arg maxθ Eqn+1 [ log pθ(y) ] . (2) In the E-step, q has a closed-form solution, which is an energy-based distribution. We can have an intuitive interpretation of its form. First, it is clear to see that if α → ∞, we have qn+1 = pnθ . This is also reflected in the objective Eq.(1) where a larger weight α encourages q to be close to pθ. Second, the weight β serves as the temperature of the q softmax distribution. In particular, a large temperature β → ∞ makes q a uniform distribution, which is consistent with the outcome of an infinitely large maximum entropy regularization in Eq.(1). In the M-step, the update rule can be interpreted as maximizing the log-likelihood of samples from the distribution q. 1In the following, we will use the term “model” and “policy” exchangeably. Token-level Formulation In the context of sequence generation, it is sometimes more convenient to express the equations at token level (instead of the sequence level), as shown when we devise a new algorithm in the next section. To this end, we decompose R(y|y∗) along the time steps: R(y|y∗) = ∑ t R(y1:t|y∗)−R(y1:t−1|y∗) := ∑ t ∆R(yt|y1:t−1,y∗), (3) where ∆R(yt|y∗,y1:t−1) measures the reward contributed by token yt. The solution of q in Eq.(2) can then be re-written as: qn+1(y) ∝ ∏ t exp { α log pθn(yt|y1:t−1) + ∆R(yt|y1:t−1,y∗) α+ β } . (4) The Algorithm Space The above ERPO formalism includes three key components, namely, the reward R and the weight hyperparameters α and β > 0. Variation in these components can result in different procedures of updating the model. In other words, different algorithms in the ERPO family correspond to a point (or a region) in the space spanned by the three components. The following sections visit a set of existing approaches, and connect them to the unifying picture by reformulating their seemingly distinct objectives. Figure 1 illustrates the particular algorithms in the space, clustered by the exploration behavior in learning, of which we will discuss more. Softmax Policy Gradient (SPG) We first briefly discuss the previous RL algorithms for sequence prediction that fit in the ERPO formalism. SPG (Ding & Soricut, 2017) was originally developed in the perspective of combining the rewardR and policy pθ to improve sampling quality. The algorithm is equivalent to setting β = 0 and treating α > 0 as the temperature of the energy-based distribution q(y). That is, q(y) in the E-step of Eq.(2) is now in the form q(y) ∝ pθ(y) exp{R(y|y∗)/α}. The reward R is set to any normal task-specific reward. Note that sampling from q(y) (e.g., in the M-step) is typically difficult due to its energy-based form and the fact that the task reward R often does not have particular structures amenable for sampling. 
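The token-level E-step distribution of Eq.(4) can be sketched directly as a reward-shifted softmax over the vocabulary. In the snippet below, log_p_next and delta_reward are hypothetical callables standing in for log pθ(yt|y1:t−1) and ΔR(yt|y1:t−1, y*); this is an illustrative sketch, not the authors' implementation.

```python
# A sketch of the token-level E-step distribution in Eq.(4): at position t, q's
# next-token distribution is a softmax of a reward-shifted model score.
import numpy as np

def q_next_token(prefix, y_star, vocab, log_p_next, delta_reward, alpha, beta):
    scores = np.array([
        (alpha * log_p_next(w, prefix) + delta_reward(w, prefix, y_star)) / (alpha + beta)
        for w in vocab
    ])
    scores -= scores.max()                  # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()              # q^{n+1}(yt | y_1:t-1) over the vocabulary

# Two special cases discussed in the text:
#   SPG: beta = 0, so q ∝ pθ(y) exp{R(y|y*)/alpha}, with alpha acting as a temperature.
#   MLE: with the δ-reward (−inf off the gold continuation) and alpha → 0, beta = 1,
#        all probability mass collapses onto the gold next token y*_t.
```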
We will see in the next section that the MLE algorithm in contrast uses a special reward to avoid the computational difficulty in sampling, at the cost of restricted exploration during training. We also note the previous work of Sequence Tutor (Jaques et al., 2017), which was motivated by the idea of using an MLE-trained policy as a prior to guide the learning of the target policy in an RL framework. The formalism closely resembles SPG, namely (α > 0, β = 0), with the exception that the variational distribution q(y) in Sequence Tutor is a parameterized model instead of a nonparametric one as in SPG and our more general ERPO formulation. 3.2 MLE AS A SPECIAL CASE OF ERPO In this section, we connect the maximum likelihood estimation (MLE) algorithm to the unifying ERPO formalism. Based on the connections, we are able to analyze the learning behavior of MLE from the reinforcement learning perspective in terms of exploration efficiency. We also discuss some well-known variants of the vanilla MLE algorithm, such as RAML and data augmentation. Due to its simplicity and efficiency, MLE is among the most widely-used approaches in learning sequence generation. It finds the optimal parameter value that maximizes the data log-likelihood: θ∗ = arg maxθ LMLE(θ) = arg maxθ log pθ(y ∗). (5) We show that the MLE objective can be recovered from Eq.(2) with a specialized reward and weight values. More concretely, consider a δ-function reward defined as2: Rδ(y|y∗) = { 1 if y = y∗ −∞ otherwise. (6) That is, a sample y receives a valid unit reward only when it matches exactly with the true data, and receives a negative infinite reward in all other cases. We show that the MLE algorithm is a member of the ERPO family. In particular, the conventional MLE objective is equivalent to setting the ERPO components to (R = Rδ, α→ 0, β = 1). This can 2For token-level, define Rδ(y1:t|y∗) = t/T ∗ if y1:t = y∗1:t and −∞ otherwise, where T ∗ is the length of y∗. Note that the Rδ value of y = y∗ can also be set to any constant larger than −∞. be straightforwardly seen by noting that, with the configuration, the q(y) in E-step (Eq.2) reduces to q(y) = 1 if y = y∗ and 0 otherwise. The M-step is thus in effect maximizing the log-likelihood of the real data examples (Note that the very small α is still > 0, making the M-step for maximizing the objective Eq.(1) valid and necessary). With the δ-rewardRδ , any sample y that fails to match the given data y∗ exactly will get a negative infinite reward and thus never contribute to model learning. Exploration Efficiency Reformulating MLE in the unifying ERPO form enables us to directly compare the approach with other RL algorithms. Specifically, the δ-reward has permitted only samples that match training examples, and made invalid any exploration beyond the small set of training data (Figure 2(a)). The extremely restricted exploration at training time results in a brittle model that can easily encounter unseen states and make mistakes in prediction. On the other hand, however, a major advantage of the δ-reward is that it defines a distribution over the sequence space such that sampling from the distribution is reduced to simply picking an instance from the training set. The resulting samples are ensured to have high quality. This makes the MLE implementation very simple and the computation efficient in practice. 
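For concreteness, the δ-reward of Eq.(6) and its token-level variant from footnote 2 can be written as follows; this is a minimal sketch directly following the stated definitions.

```python
# A minimal sketch of the δ-reward of Eq.(6) and the token-level variant in footnote 2.
import numpy as np

def R_delta(y, y_star):
    """Sequence-level δ-reward: 1 for an exact match with the data, −inf otherwise."""
    return 1.0 if list(y) == list(y_star) else -np.inf

def R_delta_prefix(y_prefix, y_star):
    """Footnote-2 variant: t/T* if y_1:t matches y*_1:t, −inf otherwise."""
    t, T_star = len(y_prefix), len(y_star)
    return t / T_star if list(y_prefix) == list(y_star[:t]) else -np.inf

# With this reward, the E-step q(y) of Eq.(2) places all mass on y*, so sampling from q
# reduces to picking the training example and the M-step maximizes log pθ(y*).
```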
On the contrary, task-specific rewards (such as BLEU) used in standard policy optimization are more diffused than the δ-reward, and thus allow exploration in a broader space with valid reward signals. However, the diffused rewards often do not lead to a distribution that is amenable for sampling as above. The model distribution is thus instead used to propose samples, which in turn can yield low-quality (i.e., low-reward) samples especially due to the huge sequence space. This makes the exploration inefficient or even impractical. Given the opposite behaviors of the algorithms in terms of exploration and computation efficiency, it is a natural idea to seek a middle ground between the two extremes in order to combine the advantages of both. Previous work has proposed variants of the vanilla MLE from different perspectives. We re-visit some of the popular approaches, and show that they can also be canonicalized in the ERPO framework and enrich our understanding of the learning behaviors. Data Noising Adding noise to training data is a widely adopted model regularizing technique. Previous work (e.g., Xie et al., 2017) has proposed several data noising strategies in the sequence generation context, such as replacing subsets of tokens with other random words. The resulting noisy data is then used in MLE training. Though previous literature has commonly seen such techniques as a data pre-processing step, we show that the approach can be expressed in the generalized ERPO formulation. Specifically, data noising can be seen as using a locally relaxed variant of the δ-reward: Rnoiseδ (y|y∗) = { 1 if y = g(y∗), −∞ otherwise, (7) where g denotes any transformation operation that returns a new sample as a noisy version of the input raw data y∗. With the relaxed reward, data noising locally expands the exploration surrounding the observed training examples (Figure 2(b)). The added exploration at training time can yield a model that is more robust to error at test time. Reward-Augmented Maximum Likelihood (RAML) RAML (Norouzi et al., 2016) was originally proposed to incorporate task-specific metrics into the MLE training. Formally, it introduces an exponentiated reward distribution e(y|y∗) ∝ exp{R(y|y∗)/τ}, where R is a task reward and τ > 0 is the temperature. The conventional RAML objective is written as: LRAML(θ) = Ey∼e(y|y∗) [ log pθ(y) ] . (8) That is, unlike MLE that directly maximizes the data log-likelihood, RAML first perturbs the data proportionally to the reward distribution, and maximizes the log-likelihood of the resulting samples. Similar to how we map MLE to the ERPO formalism, we can align RAML with the unifying form by setting α → 0, β to the temperature τ , and R to the task reward. Compared to the vanilla MLE, the key feature of RAML is the use of task reward instead of the δ-reward, which permits a larger exploration space surrounding the training examples. On the other hand, same as in SPG (section 3.1), sampling from the energy-based distribution with a diffused reward tends to be difficult, and often requires specialized approximations for computational efficiency (e.g., Ma et al., 2017). Other Algorithms & Discussions The classic policy gradient algorithm (Sutton et al., 2000) has also been used for sequence prediction (e.g., Ranzato et al., 2016). We We show in the appendix that the approach can also be connected to the unifying ERPO with moderate approximations. Ranzato et al. (2016) also proposed a mixing training strategy that anneals from MLE training to policy optimization. 
We show in the next section that the particular annealing scheme is a special case of the new, more general interpolation algorithm below. We have presented the framework in the context of sequence generation. The formulation can also be extended to other settings. For example, in game environments, y is a sequence of actions and states. The popular imitation learning method GAIL (Ho & Ermon, 2016) uses an adversarially induced reward R from data, and applies standard RL updates to train the policy. The policy update part can be formulated with our framework as standard policy optimization (with α > 0, β = 0). The new interpolation algorithm described in the next section can also be applied to improve the vanilla GAIL, as shown in the experiments. Previous work has also studied connections of relevant algorithms. For example, Norouzi et al. (2016); Koyamada et al. (2018) formulate MLE and policy gradient as minimizing the opposite KL divergences between the model and data/reward distributions. Misra et al. (2018) studied an update equation generalizing maximum marginal likelihood and policy gradient. Our framework differs in that we reformulate a different and more comprehensive set of algorithms for sequence prediction, and provide new insights in terms of exploration and its efficiency, which could not be derived from the previous work. Section 2 discusses more related work on sequence prediction learning. 4 INTERPOLATION ALGORITHM The unifying perspective also leads to new algorithms for improved learning. Here, we present an example algorithm that is naturally inspired by the framework. As in Figure 1, each of the learning algorithms can be seen as a point in the (R,α, β) space. Generally, from left to right, the reward gets more diffused and α gets larger, which results in larger sequence space exposed to model training (Figure 2). More exploration in turn also makes the training less efficient due to lower sample quality. We propose an interpolation algorithm with the natural idea of starting learning from the most restricted yet efficient algorithm configuration, and gradually expanding the exploration to decrease the training/test discrepancy. The easy-to-hard learning paradigm resembles the curriculum learning (Bengio et al., 2009). As we have mapped the algorithms to the points in the hyperparameter space, the interpolation becomes straightforward, which reduces to simple annealing of the hyperparameter values. Specifically, during training, we would like to anneal from using the restricted δ-reward Rδ to using task reward, and anneal from sampling (exploring) by only the reward R to sampling by both R and pθ. Since Rδ is a δ-function which would make direct function linear combination problematic, we implement the interpolation strategy in the update rule (Eq.2) and use log-sum-exp for mixing. Formally, let Rtask denote a task reward. The negative energy of q(y) in Eq.(2) (i.e., the exponent inside exp{·}) is now replaced with the interpolated term: log(λ1pθ+λ2 exp{Rtask}+λ3 exp{Rδ}). Note that we have re-organized the weight hyperparameters and used the distribution (λ1, λ2, λ3) to carry out the calibration role of (α, β). In particular, as training proceeds, we gradually increase λ1 and λ2 and decrease λ3. 
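A small sketch of the interpolated negative energy described above, computed with log-sum-exp for numerical stability; the three input scores are assumed to be given, and the helper is our own illustrative code rather than the paper's implementation.

```python
# A sketch of the interpolated negative energy log(λ1·pθ(y) + λ2·exp{R_task} + λ3·exp{R_δ}).
import numpy as np

def safe_log(x):
    return np.log(x) if x > 0 else -np.inf

def interpolated_neg_energy(log_p_theta, R_task, R_delta, lam):
    lam1, lam2, lam3 = lam
    terms = np.array([
        safe_log(lam1) + log_p_theta,    # model expert
        safe_log(lam2) + R_task,         # task-reward expert
        safe_log(lam3) + R_delta,        # δ-reward expert (−inf away from the data)
    ])
    m = terms.max()                      # assumes at least one finite term
    return m + np.log(np.sum(np.exp(terms - m)))   # log-sum-exp mixing
```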
The formulation of interpolation in effect converts the energy-based model q(y) to a mixture of experts, which makes the sampling from q(y) easier, and resembles the bang-bang rewarded SPG method as described in (Ding & Soricut, 2017). Besides, similar to (Ding & Soricut, 2017), we adopt the token-level formulation (Eq.4), so that tokens in a sequence can be sampled from different components (i.e., pθ, Rtask, and Rδ) in a mixed way. We provide the pseudo-code of the interpolation algorithm in the appendix. As discussed above, we can also apply the interpolation algorithm in game imitation learning, by plugging it into the GAIL (Ho & Ermon, 2016) framework to replace the standard RL routine for policy update. The annealing schedule in this setting is constrained due to the agent interaction with the environment. Specifically, to generate a trajectory (a sequence of actions and states), we sample the beginning part from data (demonstrations), followed by sampling from either the model or reward. Note that data sampling can happen only before model/reward sampling, because the latter will interact with the environment and result in states that do not necessarily match the data. Similar to sequence generation, we gradually anneal from data sampling to model/reward sampling, and hence increase the exploration until converging to standard RL. Our experiments validate that the easy-to-hard training is superior to the vanilla GAIL which directly applies the hard RL update from the beginning. It is notable that (Ranzato et al., 2016) also developed an annealing strategy that mixes MLE and policy gradient training. The strategy is essentially the same as the one we apply in the GAIL learning setting. That is, the annealing approach of (Ranzato et al., 2016) is a specialized case of the above more general annealing, using restricted values of (λ1, λ2, λ3) and discrete changes. We provide more discussions in the appendix. The experiment results in section 5 show that our generalized annealing performs better than the restricted approach (Ranzato et al., 2016). 5 EXPERIMENTS We evaluate the interpolation algorithm in the context of both text generation and game imitation learning. Experiments are run with 4 GTX 2080Ti GPUs and 32GB RAM. The link to the code is provided in the submission. We will release the code upon acceptance.

Table 1: Machine translation results (5-run average ± std dev). See the text for more details.
Model                BLEU
MLE                  31.99 ± 0.17
RAML                 32.51 ± 0.37
MIXER                32.69 ± 0.09
MIXER-alike Anneal   32.65 ± 0.11
Self-critic          32.23 ± 0.15
SS                   32.13 ± 0.14
Ours                 33.35 ± 0.08

Table 2: Text summarization results (5-run average ± std dev).
Method        ROUGE-1        ROUGE-2        ROUGE-L
MLE           36.11 ± 0.21   16.39 ± 0.16   32.32 ± 0.19
RAML          36.30 ± 0.04   16.69 ± 0.20   32.49 ± 0.17
Self-critic   36.48 ± 0.24   16.84 ± 0.26   32.79 ± 0.26
SS            36.59 ± 0.12   16.79 ± 0.22   32.77 ± 0.17
Ours          36.72 ± 0.29   16.99 ± 0.17   32.95 ± 0.33

5.1 MACHINE TRANSLATION We use the state-of-the-art neural architecture Transformer (Vaswani et al., 2017) as the base model. The model has 6 blocks, trained with an Adam optimizer with an initial learning rate of 0.001 and the same schedule as in (Vaswani et al., 2017). Batch size is 1,792 tokens. At test time, we use beam search decoding with a beam width of 5 and length penalty 0.6. We use the popular IWSLT2014 (Cettolo et al., 2014) German-English dataset. After proper pre-processing as described in the appendix, we obtain the final dataset with train/dev/test size of around 146K/7K/7K, respectively.
The shared de-en vocabulary is of size 73,197 without BPE encoding. Table 1 shows the test-set BLEU scores of various methods. Besides MLE, RAML, and MIXER (Ranzato et al., 2016) as discussed above, we also compare with other existing approaches such as Scheduled Sampling (SS) (Bengio et al., 2015) and Self-critic (Rennie et al., 2017). (We did not compare with SPG (Ding & Soricut, 2017) as no public code is available.) From the table, we can see that the various approaches provide improved performance over the vanilla MLE, as more thorough exploration is performed at training time. Our interpolation algorithm performs best, with a significant improvement over MLE training of 1.36 BLEU points. The results validate that our approach of interpolating among the existing algorithms offers beneficial scheduled training. To further study the effect of our generalized annealing versus the MIXER strategy, we compare with “MIXER-alike Anneal”, which uses the same configuration as our interpolation algorithm, except that the annealing is restricted like MIXER. That is, the first portion of tokens in a sequence are all sampled from the data, while the subsequent tokens are sampled from only the model or the task reward. We see that the proposed more generalized annealing is superior to the restricted version. We note that there is other work exploring various network architectures for machine translation (Shankar & Sarawagi, 2019; He et al., 2018), which is orthogonal and complementary to the learning algorithms. It would be interesting to explore the effect of combining the approaches. 5.2 TEXT SUMMARIZATION We use an attentional sequence-to-sequence model (Luong et al., 2015) where both the encoder and decoder are single-layer LSTM RNNs. The dimensions of the word embedding, RNN hidden state, and attention are all set to 256. We use Adam optimization with an initial learning rate of 0.001 and a batch size of 64. At test time, we use beam search decoding with a beam width of 5. Please see the appendix for more configuration details. We use the popular English Gigaword corpus (Graff et al., 2003) for text summarization, and pre-processed the data following (Rush et al., 2015). The resulting dataset consists of 200K/8K/2K source-target pairs in the train/dev/test sets, respectively. Following previous work (Ding & Soricut, 2017), we use the summation of the three ROUGE(-1, -2, -L) metrics as the reward in learning. Table 2 shows the results on the test set. The proposed interpolation algorithm achieves the best performance on all three metrics. The RAML algorithm, which performed well in machine translation, falls behind other algorithms in text summarization. In contrast, our method consistently provides the best results. 5.3 GAME IMITATION LEARNING We apply the interpolation algorithm in GAIL (Ho & Ermon, 2016) as described in section 4. Following (Ho & Ermon, 2016), we simulate three environments with MuJoCo (Todorov et al., 2012). Expert demonstrations are generated by running PPO (Schulman et al., 2017b) under the given true reward functions. We then run different imitation learning algorithms with varying numbers of demonstrations. Both the policy and the discriminator are two-layer networks with 128 units each and tanh activations in between. Figure 3 shows the average returns by the agents. We can see that agents trained with the interpolation algorithm generally improve over the vanilla GAIL, especially in the presence of a small number (e.g., 1 or 4) of demonstrations.
This shows that our approach that anneals from the MLE mode to RL mode can make better use of data examples, and steadily achieve better performance in the end. We present the learning curves of the algorithms in the appendix. 6 CONCLUSIONS We have presented a unifying perspective of a variety of learning algorithms for sequence prediction problems. The framework is based on a generalized entropy regularized policy optimization formulation, and we show the distinct algorithms are equivalent to specifying the reward and weight hyperparameters. The new consistent treatment provides systematic understanding and comparison across the algorithms, and inspires further improved learning. The proposed interpolation algorithm shows consistent improvement in machine translation, text summarization, and game imitation learning. A APPENDIX A.1 POLICY GRADIENT & MIXER Ranzato et al. (2016) made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm (Sutton et al., 2000). Policy gradient aims to maximizes the expected reward: LPG(θ) = Epθ [RPG(y|y ∗)] , (9) where RPG is usually a common reward function (e.g., BLEU). Taking gradient w.r.t θ gives: ∇θLPG(θ) = Epθ [RPG(y|y ∗)∇θ log pθ(y)] . (10) We now reveal the relation between the ERPO framework we present and the policy gradient algorithm. Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (section ??), we use pθn as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity): Eq [∇θ log pθ(y)] = Epθ [ q(y) pθ(y) ∇θ log pθ(y) ] = 1/Zθ · Epθ [ exp{R(y|y∗)} · ∇θ log pθ(y) ] , (11) where Zθ = ∫ y exp{log pθ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent. We can see that Eq.(11) recovers Eq.(10) if we further set R = logRPG, and omit the scaling factor Zθ . In other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = logRPG, α = 1, β = 0) and with Zθ omitted. The MIXER algorithm (Ranzato et al., 2016) incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically, given a ground-truth example y∗, the first m tokens y∗1:m are used for evaluating MLE loss, and starting from stepm+ 1, policy gradient objective is used. Them value decreases as training proceeds. With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for token-level hyperparameters (λ1, λ2, λ3). That is, for t < m in Eq.4 (i.e.,the first m steps), (λ1, λ2, λ3) is set to (0, 0, 1) and c = 1, namely the MLE training; while for t > m, (λ1, λ2, λ3) is set to (0.5, 0.5, 0) and c = 2. A.2 INTERPOLATION ALGORITHM Algorithm 1 summarizes the interpolation algorithm described in section 4. A.3 EXPERIMENTAL SETTINGS A.3.1 DATA PRE-PROCESSING For the machine translation dataset, we follow (Ma et al., 2017) for data pre-processing. In text summarization, we sampled 200K out of the 3.8M pre-processed training examples provided by (Rush et al., 2015) for the sake of training efficiency. We used the refined validation and test sets provided by (Zhou et al., 2017). In the game imitation learning task, we randomly sample 50 state-action pairs in each trajectory as demonstrations. 
Every training iteration, we collect at least 2,048 state-action pairs, and we train 1,000 iterations for every model in every environment. Algorithm 1 Interpolation Algorithm 1: Initialize model parameter θ and weights λ = (λ1, λ2, λ3) 2: repeat 3: Get training example y∗ 4: for t = 0, 1, . . . , T do 5: Sample z ∈ {1, 2, 3} ∼ (λ1, λ2, λ3) 6: if z = 1 then 7: Sample token yt ∼ exp{c · log pθ(yt|y1:t−1)} 8: else if z = 2 then 9: Sample token yt ∼ exp{c ·∆R(yt|y1:t−1,y∗)} 10: else 11: Sample token yt ∼ exp{c ·∆Rδ}, i.e., set yt = y∗t 12: end if 13: end for 14: Update θ by maximizing the log-likelihood log pθ(y) 15: Anneal λ by increasing λ1 and λ2 and decreasing λ3 16: until convergence A.3.2 ALGORITHM SETUP For RAML (Norouzi et al., 2016), we use the sampling approach (n-gram replacement) by (Ma et al., 2017) to sample from the exponentiated reward distribution. For each training example we draw 6 and 10 samples in machine translation and text summarization tasks, respectively. For Scheduled Sampling (SS) (Bengio et al., 2015), we tested various annealing schedules and report the best-performing one, namely inverse-sigmoid decay. The probability of sampling from model i = k/(k + exp (i/k)), where k is a hyperparameter controlling the speed of convergence, which is set to 4000 and 600 in the machine translation and text summarization tasks, respectively. We would like to note that SS does not fit into our formulation, because, in SS, model-generated tokens are only used as model inputs instead of the targets of which the likelihood is maximized. For example, at time step t, even though token ŷt generated by the model is used as an input to the next step, the loss associated with step t is still log pθ(y∗t |prev tokens) where y∗t is the true token. This differs from our formulation which maximizes the likelihood of ŷt. For the proposed interpolation algorithm, after MLE pre-training, we initialize the weights as (λ1, λ2, λ3) = (0.12, 0.16, 0.72). Every 4 epochs, we increase λ1 by 0.12 and λ2 by 0.16 while decreasing λ3 by 0.28. We did MLE pretraining for all comparison methods for the same number of steps. We found pretraining is necessary for Self-critic, and is helpful for RAML and SS. A.3.3 LEARNING CURVES OF GAIL EXPERIMENTS Figure 4 presents the learning curves of different algorithms in the GAIL experiments.
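The two schedules described in A.3.2 above can be written compactly. The sketch below is a loose illustration with helper names of our own choosing; the constants follow the values stated above (k = 4000 for machine translation and k = 600 for summarization, λ initialized to (0.12, 0.16, 0.72) after MLE pre-training and annealed every 4 epochs). The inverse-sigmoid quantity k/(k + exp(i/k)) decays from roughly 1 toward 0 as the training step i grows.

```python
# A loose sketch of the schedules reported in A.3.2; function names are ours.
import numpy as np

def inverse_sigmoid_decay(i, k):
    """Scheduled Sampling decay term k / (k + exp(i / k)); close to 1 early, -> 0 later."""
    return k / (k + np.exp(i / k))

def interpolation_weights(epoch, init=(0.12, 0.16, 0.72), d1=0.12, d2=0.16, every=4):
    """(λ1, λ2, λ3) after `epoch` epochs of interpolation training (post MLE pre-training)."""
    s = epoch // every
    lam = np.array([init[0] + d1 * s, init[1] + d2 * s, init[2] - (d1 + d2) * s])
    lam = np.clip(lam, 0.0, None)
    return lam / lam.sum()
```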
1. What is the focus of the paper regarding policy optimization? 2. What are the strengths of the proposed formalism, particularly in its ability to encompass various policy gradient algorithms? 3. What are the concerns regarding the experimental results, specifically for text summarization? 4. How does the reviewer assess the significance of the proposed approach in comparison to prior works? 5. Are there any suggestions for improving the interpolation algorithm or expanding the exploration of space?
Review
Review This paper presents a formalism of entropy regularized policy optimization. The authors also show that various policy gradient algorithms can be reformulated as special instances of the presented formalism, the only difference between them being the reward function and two weight hyperparameters. Further, the paper proposes an interpolation algorithm which, as training proceeds, gradually expands the exploration space by annealing the reward function and the weight hyperparameters. Experiments on text generation tasks and game imitation learning show superior performance over previous methods. Overall, the paper is well written and the derivations and intuitions sound good. I appreciate the overall effort of the paper and the thorough experiments to validate the proposed interpolation algorithm, although the results do not seem significant for text summarization. Hence, I suggest a weak accept for this paper. Arguments: 1) From Table 1 and Table 2, the proposed approach has the lowest variance on machine translation and quite the opposite on text summarization (i.e., it has high variance). Any thoughts on this? This also suggests conducting experiments that ablate the variance in training for the various policy gradient approaches, including the proposed one. 2) The results do not seem significant on the summarization task. Any thoughts on choosing this particular task? Why not try image captioning, where most of these policy gradient approaches have been applied?
ICLR
Title Kernel Deformed Exponential Families for Sparse Continuous Attention Abstract Attention mechanisms take an expectation of a data representation with respect to probability weights. This creates summary statistics that focus on important features. Recently, Martins et al. (2020; 2021) proposed continuous attention mechanisms, focusing on unimodal attention densities from the exponential and deformed exponential families: the latter has sparse support. Farinhas et al. (2021) extended this to use Gaussian mixture attention densities, which are a flexible class with dense support. In this paper, we extend this to two general flexible classes: kernel exponential families (Canu & Smola, 2006) and our new sparse counterpart kernel deformed exponential families. Theoretically, we show new existence results for both kernel exponential and deformed exponential families, and that the deformed case has similar approximation capabilities to kernel exponential families. Experiments show that kernel deformed exponential families can attend to multiple compact regions of the data domain. 1 INTRODUCTION Attention mechanisms take weighted averages of data representations (Bahdanau et al., 2015), where the weights are a function of input objects. These are then used as inputs for prediction. Discrete attention 1) cannot easily handle data where observations are irregularly spaced 2) attention maps may be scattered, lacking focus. Martins et al. (2020; 2021) extended attention to continuous settings, showing that attention densities maximize the regularized expectation of a function of the data location (i.e. time). Special case solutions lead to exponential and deformed exponential families: the latter has sparse support. They form a continuous data representation and take expectations with respect to attention densities. Using measure theory to unify discrete and continuous approaches, they show transformer self-attention (Vaswani et al., 2017) is a special case of their formulation. Martins et al. (2020; 2021) explored unimodal attention densities: these only give high importance to one region of data. Farinhas et al. (2021) extended this to use multi-modal mixture of Gaussian attention densities. However 1) mixture of Gaussians do not lie in either the exponential or deformed exponential families, and are difficult to study in the context of Martins et al. (2020; 2021) 2) they have dense support. Sparse support can say that certain regions of data do not matter: a region of time has no effect on class probabilities, or a region of an image is not some object. We would like to use multimodal exponential and deformed exponential family attention densities, and understand how Farinhas et al. (2021) relates to the optimization problem of Martins et al. (2020; 2021). This paper makes three contributions: 1) we introduce kernel deformed exponential families, a multimodal class of densities with sparse support and apply it along with the multimodal kernel exponential families (Canu & Smola, 2006) to attention mechanisms. The latter have been used for density estimation, but not weighting data importance 2) we theoretically analyze normalization for both kernel exponential and deformed exponential families in terms of a base density and kernel, and show approximation properties for the latter 3) we apply them to real world datasets and show that kernel deformed exponential families learn flexible continuous attention densities with sparse support. 
Approximation properties for the kernel deformed are challenging: similar kernel exponential family results (Sriperumbudur et al., 2017) relied on standard exponential and logarithm properties to bound the difference of the log-partition functional at two functions: these do not hold for deformed analogues. We provide similar bounds via the functional mean value theorem along with bounding the Frechet derivative of the deformed log-partition functional. The paper is organized as follows: we review continuous attention (Martins et al., 2020; 2021). We then describe how mixture of Gaussian attention densities, used in Farinhas et al. (2021), solve a different optimization problem. We next describe kernel exponential families and give novel normalization condition relating the kernel growth to the base density’s tail decay. We then propose kernel deformed exponential families, which can have support over disjoint regions. We describe normalization and prove approximation capabilities. Next we describe use of these densities for continuous attention, including experiments where we show that the kernel deformed case learns multimodal attention densities with sparse support. We conclude with limitations and future work. 2 RELATED WORK Attention Mechanisms closely related are Martins et al. (2020; 2021); Farinhas et al. (2021); Tsai et al. (2019); Shukla & Marlin (2021). Martins et al. (2020; 2021) frame continuous attention as an expectation of a value function over the domain with respect to a density, where the density solves an optimization problem. They only used unimodal exponential and deformed exponential family densities: we extend this to the multimodal setting by leveraging kernel exponential families and proposing a deformed counterpart. Farinhas et al. (2021) proposed a multi-modal continuous attention mechanism via a mixture of Gaussians approach. We show that this solves a slightly different optimization problem from Martins et al. (2020; 2021), and extend to two further general density classes. Shukla & Marlin (2021) provide an attention mechanism for irregularly sampled time series by use of a continuous-time kernel regression framework, but do not actually take an expectation of a data representation over time with respect to a continuous pdf, evaluating the kernel regression model at a fixed set of time points to obtain a discrete representation. This describes importance of data at a set of points rather than over regions. Other papers connect attention and kernels, but focus on the discrete attention setting (Tsai et al., 2019; Choromanski et al., 2020). Also relevant are temporal transformer papers, including Xu et al. (2019); Li et al. (2019; 2020); Song et al. (2018). However none have continuous attention densities. Kernel Exponential Families Canu & Smola (2006) proposed kernel exponential families: Sriperumbudur et al. (2017) analyzed theory for density estimation. Wenliang et al. (2019) parametrized the kernel with a deep neural network. Other density estimation papers include Arbel & Gretton (2018); Dai et al. (2019); Sutherland et al. (2018). We apply kernel exponential families as attention densities to weight a value function which represents the data, rather than to estimate the data density, and extend similar ideas to kernel deformed exponential families with sparse support. Wenliang et al. (2019) showed a condition for an unnormalized kernel exponential family density to have a finite normalizer. However, they used exponential power base densities. 
We instead relate kernel growth rates to the base density tail decay, allowing non-symmetric base densities. To summarize our theoretical contributions: 1) introducing kernel deformed exponential families with approximation and normalization analysis 2) improved kernel exponential family normalization results. 3 CONTINUOUS ATTENTION MECHANISMS An attention mechanism involves: 1) the value function approximates a data representation. This may be the original data or a learned representation. 2) the attention density is chosen to be ’similar’ to another data representation, encoding it into a density 3) the context combines the two, taking an expectation of the value function with respect to the attention density. Formally, the context is c = ET∼p[V (T )]. (1) Here V (t) the value function approximates a data representation, T ∼ p(t) is the random variable or vector for locations (temporal, spatial, etc), and p(t) is the attention density. To choose the attention density p, one takes a data representation f and finds p ’similar’ to f and thus to a data representation, but regularizing p. Martins et al. (2020; 2021) did this, providing a rigorous formulation of attention mechanisms. Given a probability space (S,A, Q), letM1+(S) be the set of densities with respect to Q. Assume that Q is dominated by a measure ν (i.e. Lebesgue) and that it has density q0 = dQdν with respect to ν. Let S ⊆ R D, F be a function class, and Ω :M1+(S) → R be a lower semi-continuous, proper, strictly convex functional. Given f ∈ F , an attention density (Martins et al., 2020) p̂ : F → R≥0 solves p̂[f ] = arg max p∈M1+(S) 〈p, f〉L2(Q) − Ω(p). (2) This maximizes regularized L2 similarity between p and a data representation f . If Ω(p) is the negative differential entropy, the attention density is Boltzmann Gibbs p̂[f ](t) = exp(f(t)−A(f)), (3) where A(f) ensures ∫ S p̂[f ](t)dQ = 1. If f(t) = θTφ(t) for parameters and statistics θ ∈ RM , φ(t) ∈ RM respectively, Eqn. 3 becomes an exponential family density. For f in a reproducing kernel Hilbert space H, it becomes a kernel exponential family density (Canu & Smola, 2006), which we propose to use as an alternative attention density. One desirable class would be heavy or thin tailed exponential family-like densities. In exponential families, the support, or non-negative region of the density, is controlled by the measure Q. Letting Ω(p) be the α-Tsallis negative entropy Ωα(p) (Tsallis, 1988), Ωα(p) = { 1 α(α−1) (∫ S p(t)αdQ− 1 ) , α 6= 1;∫ S p(t) log p(t)dQ, α = 1, then p̂[f ] for f(t) = θTφ(t) lies in the deformed exponential family (Tsallis, 1988; Naudts, 2004) p̂Ωα [f ](t) = exp2−α(θ Tφ(t)−Aα(f)), (4) where Aα(f) again ensures normalization and the density uses the β-exponential expβ(t) = { [1 + (1− β)t]1/(1−β)+ , β 6= 1; exp(t), β = 1. (5) For β < 1, Eqn. 5 and thus deformed exponential family densities for 1 < α ≤ 2 can return 0 values. Values α > 1 (and thus β < 1) give thinner tails than the exponential family, while α < 1 gives fatter tails. Setting β = 0 is called sparsemax (Martins & Astudillo, 2016). In this paper, we assume 1 < α ≤ 2, which is the sparse case studied in Martins et al. (2020). We again propose to replace f(t) = θTφ(t) with f ∈ H, which leads to the novel kernel deformed exponential families. Computing Eqn. 1’s context vector requires parametrizing V (t). Martins et al. 
(2020) obtain a value function V : S → RD parametrized by B ∈ RD×N by applying regularized multivariate linear regression to estimate V (t; B) = BΨ(t), where Ψ = {ψn}Nn=1 is a set of basis functions. Let L be the number of observation locations (times in a temporal setting), O be the observation dimension, andN be the number of basis functions. This involves regressing the observation matrix H ∈ RO×L on a matrix F ∈ RN×L of basis functions {ψn}Nn=1 evaluated at observation locations {tl}Ll=1 B∗ = arg min B ‖BF−H‖2F + λ‖B‖2F . (6) 3.1 GAUSSIAN MIXTURE MODEL Farinhas et al. (2021) used mixture of Gaussian attention densities, but did not relate this to the optimization definition of attention densities in Martins et al. (2020; 2021). In fact their attention densities solve a related but different optimization problem. Martins et al. (2020; 2021) show that exponential family attention densities maximize a regularized linear predictor of the expected sufficient statistics of locations. In contrast, Farinhas et al. (2021) find a joint density over locations and latent states, and maximize a regularized linear predictor of the expected joint sufficient statistics. They then take the marginal location densities to be the attention densities. Let Ω(p) be Shannon entropy and consider two optimization problems: arg max p∈M1+(S) 〈θ,Ep[φ(T )]〉l2 − Ω(p) arg max p∈M1+(S) 〈θ,Ep[φ(T,Z)]〉l2 − Ω(p) The first is Eqn. 2 with f = θTφ(t) and rewritten to emphasize expected sufficient statistics. If one solves the second with variables Z, we recover an Exponential family joint density p̂Ωα [f ](t, z) = exp(θ Tφ(t, z)−A(θ)). This encourages the joint density of T,Z to be similar to a complete data representation θTφ(t, z) of both location variables T and latent variables Z, instead of encouraging the density of T to be similar to an observed data representation θTφ(t). The latter optimization is equivalent to arg max p∈M1+(S) Ω(p) s.t. Ep(T,Z)[φm(T,Z)] = cm,m = 1, · · · ,M. The constraint terms cm are determined by θ. Thus, this maximizes the joint entropy of Z and T , subject to constraints on the expected joint sufficient statistics. To recover EM learned Gaussian mixture densities, one must select φm so that the marginal distribution of T will be a mixture of Gaussians, and relate cm to the EM algorithm used to learn the mixture model parameters. For the first, assume that Z is a multinomial random variable taking |Z| possible values and let φ(t, z) = (z1, z2, · · · , z|Z|−1, I(z = 1)t, I(z = 1)t2, · · · , I(z = |Z|)t, I(z = |Z|)t2). These are multinomial sufficient statistics, followed by the sufficient statistics of |Z| Gaussians multiplied by indicators for each z. Then p(T |Z) will be Gaussian, p(Z) will be multinomial, and p(T ) will be a Gaussian mixture. For contraints, Farinhas et al. (2021) have Ep(T,Z)[φm(T,Z)] = L∑ l=1 wl |Z|∑ z=1 pold(z|tl)φm(tl, z),m = 1, · · · ,M (7) at each EM iteration. Here pold(z|xl) is the previous iteration’s latent state density conditional on the observation value, wl are discrete attention weights, and tl is a discrete attention location. That EM has this constraint was shown in Wang et al. (2012). Intuitively, this matches the expected joint sufficient statistics to those implied by discrete attention over locations, taking into account the dependence between z and tl given by old model parameters. An alternative is simply to let θ be the output of a neural network. While the constraints lack the intuition of Eqn. 7, it avoids the need to select an initialization. 
We focus on this case and use it for our baselines: both approaches are valid. 4 KERNEL EXPONENTIAL AND DEFORMED EXPONENTIAL FAMILIES We now use kernel exponential families and a new deformed counterpart to obtain flexible attention densities solving Eqn. 2 with the same regularizers. We first review kernel exponential families. We then give a novel theoretical result describing when an unnormalized kernel exponential family density can be normalized. Next we introduce kernel deformed exponential families, extending kernel exponential families to have either sparse support or fatter tails: we focus on the former. These can attend to multiple non-overlapping time intervals or spatial regions. We show similar normalization results based on the choice of kernel and base density. Following this we show approximation theory. We conclude by showing how to compute attention densities in practice. Kernel exponential families (Canu & Smola, 2006) extend exponential family distributions, replacing f(t) = θTφ(t) with f in a reproducing kernel Hilbert space H (Aronszajn, 1950) with kernel k : S × S → R. Densities can be written as p(t) = exp(f(t)−A(f)) = exp(〈f, k(·, t)〉H〉 −A(f)), where the second equality follows from the reproducing property. A challenge is to choose H, Q so that a normalizing constant exists, i.e., ∫ S exp(f(t))dQ < ∞. Kernel exponential family distributions can approximate any continuous density over a compact domain arbitrarily well in KL divergence, Hellinger, and Lp distance (Sriperumbudur et al., 2017). However relevant integrals including the normalizing constant generally require numerical integration. To avoid infinite dimensionality one generally assumes a representation of the form f = I∑ i=1 γik(·, ti), where for density estimation (Sriperumbudur et al., 2017) the ti are the observation locations. However, this requires using one parameter per observation value. This level of model complexity may not be necessary, and often one chooses a set of inducing points (Titsias, 2009) {ti}Ii=1 where I is less than the number of observation locations. For a given pair H, k, how can we choose Q to ensure that the normalization constant exists? We first give a simple example ofH, f and Q where the normalizing constant does not exist. Example 1. Let Q be the law of a N (0, 1) distribution and S = R. Let H = span{t3, t4} with k(x, y) = x3y3 + x4y4 and f(t) = t3 + t4 = k(t, 1). Then∫ S exp(f(t))dQ = ∫ R exp( t2 2 + t3 + t4)dt (8) where the integral diverges. 4.1 THEORY FOR KERNEL EXPONENTIAL FAMILIES We provide sufficient conditions for Q and H so that A(f) the log-partition function exists. We relateH’s kernel growth rate to the tail decay of the random variable or vector TQ with law Q. Proposition 1. Let p̃(t) = exp(f(t)) where f ∈ H an RKHS with kernel k. Assume k(t, t) ≤ Lk‖t‖ξ2 + Ck for constants Lk, Ck, ξ > 0. Let Q be the law of a random vector TQ, so that Q(A) = P (TQ ∈ A). Assume ∀u s.t. ‖u‖2 = 1, P (|uTTQ| ≥ z) ≤ Cq exp(−vzη) (9) for some constants η > ξ2 , CQ, v > 0. Then∫ S p̃(t)dQ <∞. Proof. See Appendix A.1 Based on k(t, t)’s growth, we can vary what tail decay rate for TQ ensures we can normalize p̃(t). Wenliang et al. (2019) also proved normalization conditions, but focused on random variables with exponential power density for a specific growth rate of k(t, t) rather than relating tail decay to growth rate. By focusing on tail decay, our result can be applied to non-symmetric base densities. 
The kernel growth exponent ξ in the bound determines which tail decay rates are allowed.

Corollary 1. For ξ = 4, T_Q can be any sub-Gaussian random vector. For ξ = 2 it can be any sub-exponential random vector. For ξ = 0 it can have any density.
Proof. See Appendix A.2.

4.2 KERNEL DEFORMED EXPONENTIAL FAMILIES

We now propose kernel deformed exponential families: flexible, sparse, non-parametric distributions. These take deformed exponential families and extend them to use kernels in the deformed exponential term, mirroring kernel exponential families. We write
\[
p(t) = \exp_{2-\alpha}(f(t) - A_{\alpha}(f)),
\]
where f ∈ H with kernel k. Fig. 1b shows that such densities can have support over disjoint intervals.

4.2.1 NORMALIZATION THEORY

We construct a valid kernel deformed exponential family density from Q and f ∈ H. We first discuss the deformed log-normalizer. In exponential family densities, the log-normalizer is the log of the normalizer. For deformed exponentials, the following holds.

Lemma 1. Let Z > 0 be a constant. Then for 1 < α ≤ 2,
\[
\frac{1}{Z}\exp_{2-\alpha}\!\big(Z^{\alpha-1} f(t)\big) = \exp_{2-\alpha}\!\big(f(t) - \log_{\alpha} Z\big),
\qquad
\log_{\beta} t =
\begin{cases}
\frac{t^{1-\beta} - 1}{1 - \beta} & t > 0,\ \beta \neq 1,\\
\log t & t > 0,\ \beta = 1,\\
\text{undefined} & t \leq 0.
\end{cases}
\]
Proof. See Appendix B.1.

We describe a normalization sufficient condition analogous to Proposition 1 for the sparse kernel deformed exponential family. With Lemma 1, we can take an unnormalized exp_{2−α}(f̃(t)) and derive a valid normalized kernel deformed exponential family density. We only require that an affine function of the term inside the deformed exponential is negative for large ‖t‖.

Proposition 2. For 1 < α ≤ 2 assume p̃(t) = exp_{2−α}(f̃(t)) with f̃ ∈ H, where H is an RKHS with kernel k. If there exists C_t > 0 such that (α − 1)f̃(t) + 1 ≤ 0 for ‖t‖_2 > C_t, and $k(t, t) \leq L_k\|t\|_2^{\xi} + C_k$ for some ξ > 0, then
\[
\int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ < \infty.
\]
Proof. See Appendix B.2.

We now use the finite integral to construct a valid kernel deformed exponential family density.

Corollary 2. Under the conditions of Proposition 2, assume exp_{2−α}(f̃(t)) > 0 on a set A ⊆ S such that Q(A) > 0. Then there exist constants Z > 0 and A_α(f) ∈ R such that for $f(t) = \frac{1}{Z^{\alpha-1}}\tilde{f}(t)$,
\[
\int_S \exp_{2-\alpha}(f(t) - A_{\alpha}(f))\,dQ = 1.
\]
Proof. See Appendix B.3.

We thus estimate f̃(t) = Z^{α−1} f(t) and normalize to obtain a density of the desired form.

4.2.2 APPROXIMATION THEORY

Under certain kernel conditions, kernel deformed exponential family densities can approximate densities of a similar form where the RKHS function is replaced with a C_0(S) function (a continuous function on the domain S vanishing at infinity).

Proposition 3. Define P_0 = {π_f(t) = exp_{2−α}(f(t) − A_α(f)), t ∈ S : f ∈ C_0(S)} where S ⊆ R^d. Suppose k(x, ·) ∈ C_0(S) for all x ∈ S and
\[
\int\!\!\int k(x, y)\,d\mu(x)\,d\mu(y) > 0, \quad \forall \mu \in M_b(S)\setminus\{0\}, \tag{10}
\]
where M_b(S) is the space of bounded measures over S. Then the set of kernel deformed exponential family densities is dense in P_0 with respect to the L_r(Q) norm and Hellinger distance.
Proof. See Appendix B.4.

We apply this to approximate fairly general densities with kernel deformed exponential families.

Theorem 1. Let q_0 ∈ C(S) with q_0(t) > 0 for all t ∈ S, where S ⊆ R^d is locally compact Hausdorff and q_0 is the density of Q with respect to a dominating measure ν. Suppose there exists l > 0 such that for any ε > 0 there exists R > 0 satisfying |p(t) − l| ≤ ε for any t with ‖t‖_2 > R. Define
\[
P_c = \Big\{p \in C(S) : \int_S p(t)\,dQ = 1,\; p(t) \geq 0\ \forall t \in S,\; p - l \in C_0(S)\Big\}.
\]
Suppose k(t, ·) ∈ C_0(S) for all t ∈ S and the kernel integration condition (Eqn. 10) holds. Then kernel deformed exponential families are dense in P_c with respect to the L_r norm, Hellinger distance, and the Bregman divergence of the α-Tsallis negative entropy functional.
Proof. See Appendix B.5.
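The deformed exponential and its normalization via Lemma 1 and Corollary 2 are easy to sanity-check numerically. The sketch below is our own illustrative construction (not the authors' code): with α = 2, the function f̃ consists of two bumps on a negative offset, so the support of the resulting density splits into two disjoint intervals, and the condition of Proposition 2 holds because f̃ stays below −1 far from the bumps.

```python
import numpy as np
from scipy.stats import norm

def deformed_exp(x, alpha):
    """exp_{2-alpha}(x) = [1 + (alpha - 1) x]_+^{1/(alpha - 1)}, for 1 < alpha <= 2."""
    return np.maximum(1.0 + (alpha - 1.0) * x, 0.0) ** (1.0 / (alpha - 1.0))

def deformed_log(z, alpha):
    """log_alpha(z) = (z^{1 - alpha} - 1) / (1 - alpha) for z > 0 (Lemma 1)."""
    return (z ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

alpha = 2.0                                   # alpha = 2 gives the sharpest truncation
t = np.linspace(-4.0, 4.0, 4001)
# Two bumps on a negative offset, so (alpha - 1) f_tilde + 1 <= 0 away from the bumps.
f_tilde = (2.5 * np.exp(-0.5 * ((t + 1.5) / 0.3) ** 2)
           + 2.0 * np.exp(-0.5 * ((t - 1.5) / 0.3) ** 2) - 1.5)
unnorm = deformed_exp(f_tilde, alpha) * norm.pdf(t)   # unnormalized density wrt Lebesgue
Z = np.trapz(unnorm, t)                               # numerical normalizer
# Corollary 2 / Lemma 1: f = f_tilde / Z^{alpha - 1} yields a normalized density.
p = deformed_exp(f_tilde / Z ** (alpha - 1.0) - deformed_log(Z, alpha), alpha) * norm.pdf(t)
print(np.trapz(p, t))     # ~ 1.0
print((p > 0).mean())     # fraction of the grid in the support: strictly < 1 (two intervals)
```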
Theorem 1 implies that for uniform q_0, kernel deformed exponential families can approximate continuous densities on compact domains arbitrarily well. Our Bregman divergence result is analogous to the KL divergence result in Sriperumbudur et al. (2017): KL divergence is the Bregman divergence of the Shannon entropy functional, and we show the corresponding result for Tsallis entropy. The Bregman divergence here describes how close the uncertainty in a density is to its first-order approximation evaluated at another density. Using the Tsallis entropy functional is appropriate for deformed exponential families: they maximize it given expected sufficient statistics (Naudts, 2004). These results extend the approximation results of Sriperumbudur et al. (2017) to the deformed setting, where standard log and exponential rules cannot be applied. The Bregman divergence case requires bounding Frechet derivatives and applying the functional mean value theorem.

4.3 USING KERNELS FOR CONTINUOUS ATTENTION

We apply kernel exponential and deformed exponential families to attention. The forward pass computes the attention densities and the context vector; the backward pass uses automatic differentiation. We assume a vector representation v ∈ R^{|v|} computed from the locations we take an expectation over. For kernel exponential families we compute kernel weights {γ_i}_{i=1}^I for $f(t) = \sum_{i=1}^{I}\gamma_i k(t, t_i)$ as γ_i = w_i^⊤ v, and compute $Z = \int_S \exp(f(t))\,dQ$ numerically. For the deformed case we compute γ̃_i = w_i^⊤ v and $\tilde{f}(t) = Z^{\alpha-1} f(t) = \sum_{i=1}^{I}\tilde{\gamma}_i k(t, t_i)$, followed by $Z = \int_S \exp_{2-\alpha}(\tilde{f}(t))\,dQ$. The context c = E_{T∼p}[V(T)] = B E_p[Ψ(T)] requires taking the expectation of Ψ(T) with respect to a (possibly deformed) kernel exponential family density p. Unlike Martins et al. (2020; 2021), who obtained closed-form expectations, difficult normalizing constants prevent us from doing so. We thus use numerical integration for the forward pass and automatic differentiation for the backward pass. Algorithm 1 shows how to compute a continuous attention mechanism for a kernel deformed exponential family attention density; the kernel exponential family case is similar.

Algorithm 1 Continuous Attention Mechanism via Kernel Deformed Exponential Families
  Setup: choose a base density q_0(t), a kernel k, and inducing point locations {t_i}_{i=1}^I.
  Input: vector representation v of the input object (e.g., a document representation).
  Parameters: {γ̃_i}_{i=1}^I, the weights for $\tilde{f}(t) = Z^{\alpha-1} f(t) = \sum_{i=1}^{I}\tilde{\gamma}_i k(t, t_i)$; the matrix B for the basis-function value function.
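A minimal differentiable sketch of this forward pass is given below, written in PyTorch since the backward pass is handled by automatic differentiation. The shapes, names, RBF kernel, and trapezoid-free Riemann quadrature are our own assumptions for illustration; they are not the authors' implementation.

```python
import torch

def deformed_exp(x, alpha):
    # exp_{2-alpha}(x) = [1 + (alpha - 1) x]_+^{1/(alpha - 1)}
    return torch.clamp(1.0 + (alpha - 1.0) * x, min=0.0) ** (1.0 / (alpha - 1.0))

def attention_context(v, W, inducing, grid, q0, Psi, B, alpha=2.0, bandwidth=0.05):
    """One kernel deformed attention head; names and shapes are illustrative.

    v        : (d_v,) input representation
    W        : (I, d_v) linear map giving kernel weights gamma_tilde = W v
    inducing : (I,) inducing point locations
    grid     : (T,) uniform quadrature grid over S
    q0       : (T,) base density evaluated on the grid
    Psi      : (N, T) basis functions evaluated on the grid
    B        : (D, N) value-function coefficients from Eqn. 6
    """
    gamma_tilde = W @ v                                                            # (I,)
    K = torch.exp(-0.5 * ((grid[:, None] - inducing[None, :]) / bandwidth) ** 2)   # (T, I)
    f_tilde = K @ gamma_tilde                                                      # (T,)
    unnorm = deformed_exp(f_tilde, alpha) * q0          # unnormalized density on the grid
    dt = grid[1] - grid[0]                              # uniform spacing assumed
    Z = unnorm.sum() * dt                               # numerical normalizing constant
    p = unnorm / Z                                      # attention density on the grid
    expected_Psi = (Psi * p[None, :]).sum(dim=1) * dt   # E_p[Psi(T)], shape (N,)
    return B @ expected_Psi                             # context c = B E_p[Psi(T)]
```

Every step is differentiable, so gradients with respect to W and B flow through the numerical normalizer, matching the use of automatic differentiation for the backward pass.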
5 EXPERIMENTS

For document classification, we follow the architecture of Martins et al. (2020). For the remaining tasks, the architectures have four parts: 1) an encoder takes a discrete representation of a time series and outputs the attention density parameters; 2) the value function takes a time series representation (the original series or the output of a neural network) and performs (potentially multivariate) linear regression to obtain parameters B for a function V(t; B); these are combined to compute 3) the context c = E_p[V(T)], which is used in a 4) classifier. Fig. 2 in the Appendices visualizes this.

5.1 DOCUMENT CLASSIFICATION

We extend the code of Martins et al. (2020)² for the IMDB sentiment classification dataset (Maas et al., 2011). This architecture starts with a document representation v computed from a convolutional neural network and uses an LSTM attention model. We use a Gaussian base density and a Gaussian kernel, and place I = 10 inducing points evenly on the interval [0, 1], where we evaluate the kernel in $f(t) = \sum_{i=1}^{I}\gamma_i k(t, t_i)$. We set the bandwidth to 0.01 for I = 10. Table 1 shows the results. On average, the kernel exponential and deformed exponential families slightly outperform continuous softmax and sparsemax, although the results are essentially the same. The continuous softmax/sparsemax results are obtained with their code.

²Martins et al. (2020)'s repository for this dataset is https://github.com/deep-spin/quati

5.2 UWAVE DATASET

We analyze uWave (Liu et al., 2009): accelerometer time series with eight gesture classes. We follow Li & Marlin (2016)'s split into 3,582 training observations and 896 test observations; sequences have length 945. We create synthetic irregular sampling by uniformly sampling 10% of the observations. Table 2 shows the results. Our highest accuracy is 94.26%, the best unimodal result is 74.69%, and the best mixture result is 81.13%. Since this dataset is small, we report ±1.96 standard deviations over 10 runs. Fig. 1 shows that the attention densities have support over non-overlapping time intervals. This cannot be done with Gaussian mixtures, and the intervals would be the same for each density in the exponential family case. Appendix C describes additional details.

6 ECG HEARTBEAT CLASSIFICATION

We use the Kaggle version³ of the MIT-BIH Arrhythmia Database (Goldberger et al., 2000). The task is to detect abnormal heart beats from ECG signals. The five classes are {Normal, Supraventricular premature, Premature ventricular contraction, Fusion of ventricular and normal, Unclassifiable}. There are 87,553 training samples and 21,891 test samples. Our value function is trained on the output of repeated convolutional layers: the final layer has 256 dimensions and 23 time points. Our encoder is a feedforward neural network with the original data as input, and our classifier is a feedforward network. Table 3 shows the results. All accuracies are very similar, but the F1 score of kernel sparsemax is drastically higher. Additional details are in Appendix D.

³https://www.kaggle.com/mondejar/mitbih-database

7 DISCUSSION

In this paper we extend continuous attention mechanisms to use kernel exponential and deformed exponential family attention densities. The latter is a new flexible class of non-parametric densities with sparse support. We show novel existence properties for both kernel exponential and deformed exponential families, and prove approximation properties for the latter. We then apply these to the framework described in Martins et al. (2020; 2021) for continuous attention. We show results on three datasets: sentiment classification, gesture recognition, and arrhythmia classification. In the first case performance is similar to unimodal attention; in the second it is drastically better; and in the third it is similar in the dense case and drastically better in the sparse case.

7.1 LIMITATIONS AND FUTURE WORK

A limitation of this work is the use of numerical integration, which scales poorly with the dimensionality of the locations. Because of this we restricted our applications to temporal and text data; this still allows multiple observation dimensions at a given location. A future direction would be to use variance-reduced Monte Carlo to approximate the integrals, as well as to study how to choose the number of basis functions in the value function and the number of inducing points.

A PROOF RELATED TO PROPOSITION 1

A.1 PROOF OF PROPOSITION 1

Proof. This proof has several parts. We first bound the RKHS function f and use the assumed general tail bound to obtain a tail bound for the one-dimensional marginals T_{Qd} of T_Q.
Using the RKHS function bound, we then bound the integral of the unnormalized density in terms of expectations with respect to these one-dimensional marginals. We express each of these expectations as an infinite series of integrals, bound each integral in the series using the marginal tail bound, and then use the ratio test to show that the series converges. This gives that the original unnormalized density has a finite integral.

We first note, following Wenliang et al. (2019), that the assumed bound on the kernel lets us bound f in terms of two constants and the norm of the point at which it is evaluated:
\[
\begin{aligned}
|f(t)| &= |\langle f, k(t, \cdot) \rangle_{\mathcal{H}}| && \text{(reproducing property)}\\
&\leq \|f\|_{\mathcal{H}} \|k(t, \cdot)\|_{\mathcal{H}} && \text{(Cauchy-Schwarz)}\\
&= \|f\|_{\mathcal{H}} \sqrt{\langle k(t, \cdot), k(t, \cdot) \rangle_{\mathcal{H}}} = \|f\|_{\mathcal{H}} \sqrt{k(t, t)}\\
&\leq \|f\|_{\mathcal{H}} \sqrt{L_k \|t\|^{\xi} + C_k} && \text{(by assumption)}\\
&\leq C_0 + C_1 \|t\|^{\xi/2} && \text{for some } C_0, C_1 > 0.
\end{aligned}
\]
We can write T_Q = (T_{Q1}, ..., T_{QD}). Let e_d be a standard Euclidean basis vector. Then by the assumption, setting u = e_d, we have $P(|T_{Qd}| \geq z) \leq C_Q \exp(-v z^{\eta})$. Letting Q_d be the marginal law,
\[
\begin{aligned}
\int_S \exp(f(t))\,dQ &\leq \int_S \exp(C_0 + C_1\|t\|^{\xi/2})\,dQ
= \exp(C_0)\,\mathbb{E}\exp(C_1 \|T_Q\|^{\xi/2})\\
&\leq \exp(C_0)\,\mathbb{E}\exp\Big(C_1\big(\sqrt{D}\max_{d=1,\dots,D}|T_{Qd}|\big)^{\xi/2}\Big)
\leq \exp(C_0)\sum_{d=1}^{D}\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}),
\end{aligned}
\]
where $C_2 = C_1 D^{\xi/4}$; the right-hand side is finite if each $\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) < \infty$. Now, letting S_d be the relevant dimension of S,
\[
\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) = \int_{S_d}\exp(C_2|t_d|^{\xi/2})\,dQ_d
\leq \sum_{j=-\infty}^{-1}\int_{j}^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d + \sum_{j=0}^{\infty}\int_{j}^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d,
\]
where the inequality follows since S_d ⊆ R, the integrand is non-negative, and probability measures are monotone. We show that the second sum converges; a symmetric argument handles the first. Note that for j ≥ 0,
\[
Q_d([j, j+1)) = P(T_{Qd} \geq j) - P(T_{Qd} \geq j+1) \leq P(T_{Qd} \geq j) \leq C_Q \exp(-v j^{\eta})
\]
by assumption. On [j, j+1) we have |t_d| < j + 1, and j + 1 ≤ 2j for j ≥ 1, so
\[
\sum_{j=1}^{\infty}\int_{j}^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d
\leq \sum_{j=1}^{\infty}\exp\big(C_3 j^{\xi/2}\big)Q_d([j, j+1))
\leq \sum_{j=1}^{\infty}C_Q\exp\big(C_3 j^{\xi/2} - v j^{\eta}\big)
\]
with $C_3 = 2^{\xi/2}C_2$, while the j = 0 term is at most $\exp(C_2)$. Let $a_j = \exp(C_3 j^{\xi/2} - v j^{\eta})$. We use the ratio test to show that the right-hand side converges. We have
\[
\Big|\frac{a_{j+1}}{a_j}\Big| = \exp\Big(C_3\big((j+1)^{\xi/2} - j^{\xi/2}\big) - v\big[(j+1)^{\eta} - j^{\eta}\big]\Big). \tag{11}
\]
We want this ratio to be less than 1 for large j, for which it suffices that $\frac{C_3}{v}\big((j+1)^{\xi/2} - j^{\xi/2}\big) < (j+1)^{\eta} - j^{\eta}$ for sufficiently large j. Assume η > ξ/2. Then
\[
\frac{(j+1)^{\eta} - j^{\eta}}{(j+1)^{\xi/2} - j^{\xi/2}}
= \frac{j^{\eta}\big[(1 + \tfrac{1}{j})^{\eta} - 1\big]}{j^{\xi/2}\big[(1 + \tfrac{1}{j})^{\xi/2} - 1\big]}
\geq j^{\eta - \xi/2}.
\]
Since the right-hand side is unbounded for η > ξ/2, the ratio in Eqn. 11 is eventually less than 1. By the ratio test,
$\mathbb{E}_{Q_d}\exp(C_2|T_{Qd}|^{\xi/2}) = \sum_{j=-\infty}^{-1}\int_j^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d + \sum_{j=0}^{\infty}\int_j^{j+1}\exp(C_2|t_d|^{\xi/2})\,dQ_d$
is finite. Putting everything together,
\[
\int_S \exp(f(t))\,dQ \leq \int_S \exp(C_0 + C_1\|t\|^{\xi/2})\,dQ \leq \exp(C_0)\sum_{d=1}^{D}\mathbb{E}\exp(C_2|T_{Qd}|^{\xi/2}) < \infty,
\]
and p̃(t) can be normalized.

A.2 PROOF OF COROLLARY 1

Proof. Let ξ = 4. Then η > 2 and
\[
P(|u^\top T_Q| > z) \leq P(|u^\top T_Q| \geq z) \leq C_Q \exp(-v z^{\eta}) < C_Q \exp(-v z^{2}),
\]
where the first inequality is monotonicity. The ξ = 2 case is similar. For a uniformly bounded kernel,
\[
\int_S \exp(\langle f, k(\cdot, t)\rangle_{\mathcal{H}})\,dQ \leq \exp(\|f\|_{\mathcal{H}}\sqrt{C_k})\int_S dQ = \exp(\|f\|_{\mathcal{H}}\sqrt{C_k}) < \infty,
\]
where the first inequality follows from Cauchy-Schwarz and ξ = 0.

B PROOFS RELATED TO THE KERNEL DEFORMED EXPONENTIAL FAMILY

B.1 PROOF OF LEMMA 1

Proof. The high-level idea is to express a term inside the deformed exponential that becomes 1/Z once brought outside.
exp2−α(f(t)− logα(Z)) = [1 + (α− 1)(f(t)− logα Z)] 1 α−1 + = [1 + (α− 1)f(t)− (α− 1)Z 1−α − 1 1− α ] 1 α−1 + = [1 + (α− 1)f(t) + Z1−α − 1] 1 α−1 + = [(α− 1)f(t) + Z1−α] 1 α−1 + = [(α− 1)f(t)Z α−1 Zα−1 + Z1−α)] 1 α−1 + = 1 Z [(α− 1)f(t) 1 Zα−1 + 1] 1 α−1 + = 1 Z exp2−α(Z α−1f(t)) B.2 PROOF OF PROPOSITION 2 Proof. ∫ S exp2−α(f̃(t))dQ = ∫ S [1 + (α− 1)f̃(t)] 1 α−1 + dQ = ∫ ‖t‖≤Ct [1 + (α− 1)f̃(t)] 1 α−1 + dQ ≤ ∫ ‖t‖≤Ct [1 + (α− 1)(C0 + C1|Ct|ξ/2)] 1 α−1 + dQ <∞ B.3 PROOF OF COROLLARY 2 Proof. From proposition 2 and the assumption,∫ S exp2−α(f̃(t))dQ = Z for some Z > 0. Then ∫ S 1 Z exp2−α(Z α−1f(t))dQ = 1∫ S exp2−α(f(t)− logα Z)dQ = 1 where the second line follows from lemma 1. Set Aα(f) = logα(Z) and we are done. B.4 PROOF OF PROPOSITION 3 Proof. The kernel integration condition tells us that H is dense in C0(S) with respect to L∞ norm. This was shown in Sriperumbudur et al. (2011). For the Lr norm, we apply ‖pf − pg‖Lr ≤ 2Mexp‖f − g‖∞ from Lemma 5 with f ∈ C0(S), g ∈ H, and f0 = f . L1 convergence implies Hellinger convergence. B.5 PROOF OF THEOREM 1 Proof. For any p ∈ Pc, define pδ = p+δ1+δ . Then ‖p− pδ‖r = δ 1 + δ ‖p‖r → 0 for 1 ≤ r ≤ ∞. Thus for any > 0,∃δ > 0 such that for any 0 < θ < δ , we have ‖p− pθ‖r ≤ , where pθ(t) > 0 for all t ∈ S. Define f = ( 1+θ l+θ )1−α log2−α pθ 1+θ l+θ . Since p ∈ C(S), so is f . Fix any η > 0 and note that f(t) ≥ η( 1 + θ l + θ )1−α log2−α pθ 1 + θ l + θ ≥ η log2−α pθ 1 + θ l + θ ≥ ( 1 + θ l + θ )α−1 η pθ 1 + θ l + θ ≥ exp2−α (( 1 + θ l + θ )α−1 η ) pθ ≥ l + θ 1 + θ exp2−α (( 1 + θ l + θ )α−1 η ) p− l ≥ (l + θ) ( exp2−α (( 1 + θ l + θ )α−1 η ) − 1 ) Thus A = {t : f(t) ≥ η} = { p− l ≥ (l + θ) ( exp2−α (( 1 + θ l + θ )α−1 η ) − 1 )} Since p − l ∈ C0(S) the set on the second line is bounded. Thus A is bounded so that f ∈ C0(S). Further, by Lemma 1 pθ = exp2−α ( f − logα 1 + θ l + θ ) giving us pθ ∈ P0. By Proposition 3 there is some pg in the deformed kernel exponential family so that ‖pθ − pg‖Lr(S) ≤ . Thus ‖p − pg‖r ≤ 2 for any 1 ≤ r ≤ ∞. To show convergence in Helinger distance, note H2(p, pg) = 1 2 ∫ S ( √ p−√pg)2dQ = 1 2 ∫ S (p− 2√ppg + pg)dQ ≤ 1 2 ∫ S (p− 2 min(p, pg) + pg)dQ = 1 2 ∫ S |p− pg|dQ = 1 2 ‖p− pg‖1 4actually an equality, see https://www2.cs.uic.edu/ zhangx/teaching/bregman.pdf for proof so that L1(S) convergence, which we showed, implies Hellinger convergence. Let us consider the Bregman divergence. Note the generalized triangle inequality4 for Bregman divergence BΩα(p, pg) = BΩα(p, pθ)︸ ︷︷ ︸ I +BΩα(pθ, pg)︸ ︷︷ ︸ II −〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2︸ ︷︷ ︸ III (12) Term I BΩα(p, pθ) = 1 α(α− 1) ∫ S (pα − pαθ )dQ− 〈∇Ωα(pθ), p− pθ〉 = 1 α(α− 1) ∫ S (pα − pαθ )dQ− 1 α− 1 ∫ pα−1θ (p− pθ)dQ ≤ 1 α(α− 1) ∫ S (pα − pαθ )dQ+ 1 α− 1 ‖pα−1θ ‖1‖‖p− pθ‖∞ The first term on the rhs clearly vanishes as θ → 0. For the second term, we already showed that ‖p− pθ‖∞ → 0. Term II Fix θ. Then term II converges to 0 by Lemma 5. Term III For term III , 〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2 ≤ ‖p− pθ‖∞‖∇Ωα(pθ)−∇Ωα(pg)‖1 Clearly the first term on the rhs converges by Lr convergence. The L1 term for the gradient is given by ‖∇Ωα(pθ)−∇Ωα(pg)‖1 = 1 α− 1 ∫ |pθ(t)α−1 − pg(t)α−1|dQ ≤ ∫ (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞dQ Eqn. 17 = (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞ so that the inner product terms are bounded as |〈p− pθ,∇Ωα(pθ)−∇Ωα(pg)〉2| ≤ (‖pθ‖∞ + ‖pθ − pg‖∞)α−2‖pθ − pg‖∞‖p− pθ‖∞ Lemma 2. (Functional Taylor’s Theorem) Let F : X → R where X is a Banach space. Let f, g ∈ X and let F be k times Gateaux differentiable. 
Then we can write F (g) = k−1∑ i=0 F i(f)(g − f)i + F k(f + c(g − f))(g − f)k for some c ∈ [0, 1]. Proof. This is simply a consequence of expressing a functional as a function of an ∈ [0, 1], which restricts the functional to take input functions between two functions. We then apply Taylor’s theorem to the function and apply the chain rule for Gateaux derivatives to obtain the resulting Taylor remainder theorem for functionals. Let G(η) = F (f + η(g − f)). By Taylor’s theorem we have G(1) = G(0) +G′(0) + · · ·+Gk(c) and applying the chain rule gives us F (g) = k−1∑ i=0 F i(f)(g − f)i + F k(f + c(g − f))(g − f)k Lemma 3. (Functional Mean Value Theorem) Let F : X → R be a functional where f, g ∈ X some Banach space with norm ‖ · ‖. Then |F (f)− F (g)| ≤ ‖F ′(h)‖op‖f − g‖ where h = g + c(f − g) for some c ∈ [0, 1], F ′(h) is the Gateaux derivative of F , and ‖ · ‖op is the operator norm ‖A‖op = inf{c > 0 : ‖Ax‖ ≤ c‖x‖∀x ∈ X}. Proof. Consider G(η) = F (g + η(f − g)). Apply the ordinary mean value theorem to obtain G(1)−G(0) = G′(c), c ∈ [0, 1] = F ′(g + c(f − g)) · (f − g) and thus |F (f)− F (g)| ≤ ‖F ′(h)‖op‖f − g‖ Claim 1. Consider P∞ = {pf = exp2−α(f − Aα(f)) : f ∈ L∞(S)}. Then for pf ∈ P∞, Aα(f) ≤ ‖f‖∞. Proof. pf (t) = exp2−α(f(t)−Aα(f)) ≤ exp2−α(‖f‖∞ −Aα(f)) for 1 < α ≤ 2∫ S pf (t)dQ ≤ ∫ S exp2−α(‖f‖∞ −Aα(f))dQ 1 ≤ exp2−α(‖f‖∞ −Aα(f)) log2−α 1 ≤ ‖f‖∞ −Aα(f) Aα(f) ≤ ‖f‖∞ where for the second line recall that we assumed that throughout the paper 1 < α ≤ 2. Lemma 4. ConsiderP∞ = {pf = exp2−α(f−Aα(f)) : f ∈ L∞(S)}. Then the Frechet derivative of Aα : L∞ → R exists. It is given by the map A′(f)(g) = Ep̃2−αf (g(T )) = ∫ p2−αf (t)g(t)dQ∫ p2−αf (t)dQ Proof. This proof has several parts. We first derive the Gateaux differential of pf in a direction ψ ∈ L∞ and as it depends on the Gateaux differential of Aα(f) in that direction, we can rearrange terms to recover the latter. We then show that it exists for any f, ψ ∈ L∞. Next we show that the second Gateaux differential of Aα(f) exists, and use that along with a functional Taylor expansion to prove that the first Gateaux derivative is in fact a Frechet derivative. In Martins et al. (2020) they show how to compute the gradient of Aα(θ) for the finite dimensional case: we extend this to the Gateaux differential. We start by computing the Gateaux differential of pf . d dη pf+ηψ(t) = d dη exp2−α(f(t) + ηψ(t)−Aα(f + ηψ)) = d dη [1 + (α− 1)(f(t) + ηψ(t)−Aα(f + ηψ))]1/(α−1)+ = [1 + (α− 1)(f(t) + ηψ(t)−Aα(f + ηψ))](2−α)/(α−1)+ ( ψ(t)− d dη Aα(f + ηψ) ) = p2−αf+ηψ(t) ( ψ(t)− d dη Aα(f + ηψ) ) evaluating at η = 0 gives us dp(f ;ψ)(t) = p2−αf (ψ(t) + dAα(f ;ψ)) Note that by claim 1 we have pf+ηψ(t) = exp2−α(f(t) + ηψ(t)−Aα(f + ηψ(t)) ≤ exp2−α(2‖f‖∞ + 2η‖ψ‖∞) ≤ exp2−α(2(‖f‖∞ + ‖ψ‖∞)) We can thus apply the dominated convergence theorem to pull a derivative with respect to η under an integral. We can then recover the Gateaux diferential of Aα via 0 = d dη ∣∣∣∣ η=0 ∫ pf+ηψ(t)dQ = ∫ dp(f ;ψ)(t)dQ = ∫ pf (t) 2−α(ψ(t)− dAα(f ;ψ))dQ dAα(f ;ψ) = Ep̃2−αf (ψ(T )) <∞ where the last line follows as ψ ∈ L∞. Thus the Gateaux derivative exists in L∞ directions. The derivative at f maps ψ :→ Ep̃2−αf (ψ(T )) i.e. A ′ α(f)(ψ) = Ep̃2−αf (ψ(T )). We need to show that this is a Frechet derivative. To do so, we will take take second derivatives of pf+ηψ(t) with respect to η in order to obtain second derivatives of Aα(f + ηψ) with respect to η. We will then construct a functional second order Taylor expansion. 
By showing that the second order terms converge sufficiently quickly, we will prove that the map ψ :→ Ep̃2−αf (ψ(T )) is a Frechet derivative. d2 dη2 pf+ηψ(t) = d dη pf+ηψ(t) 2−α ( ψ(t)− d dη Aα(f + ηψ) ) = ( d dη pf+ηψ(t) 2−α )( ψ(t)− d dη Aα(f + ηψ) ) − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) = (2− α)pf+ηψ(t)(ψ(t)− d dη Aα(f + ηψ)) d dη pf+ηψ(t) − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) = (2− α)p3−2αf+ηψ(ψ(t)− d dη Aα(f + ηψ)) 2 − pf+ηψ(t)2−α d2 dη2 Aα(f + ηψ) We need to show that we can again pull the second derivative under the integral. We already showed that we can pull the derivative under once (for the first derivative) and we now need to do it again. We need to show an integrable function that dominates pf+ηψ(t)2−α(ψ(t)− Ep̃2−αf+ηψψ(T )). |pf+ηψ(t)2−α(ψ(t)− Ep̃2−αf+ηψψ(T ))| ≤ pf+ηψ(t) 2−α2‖ψ‖∞ ≤ exp2−α(2(‖f‖∞ + ‖ψ‖∞))2‖ψ‖∞ which is in L1(Q). Now applying the dominated convergence theorem 0 = ∫ d2 d 2 pf+ ψ(t)dQ = ∫ [ (2− α)p3−2αf+ ψ(ψ(t)− d d Aα(f + ψ)) 2 − pf+ ψ(t)2−α d2 d 2 Aα(f + ψ) ] dQ and rearranging gives d2 d 2 Aα(f + ψ) = (2− α) ∫ p3−2αf+ ψ(ψ(t)− d d Aα(f + ψ)) 2dQ∫ pf+ ψ(t)2−αdQ d2 d 2 Aα(f) ∣∣∣∣ =0 = (2− α) ∫ p3−2αf (ψ(t)− Ep̃2−αf [ψ(T )]) 2dQ∫ pf (t)2−αdQ since f, ψ ∈ L∞. For the functional Taylor expansion, we have from Lemma 2 Aα(f + ψ) = Aα(f) +A ′ α(f)(ψ) + 1 2 A′′α(f + ψ)(ψ) 2 for some ∈ [0, 1]. We thus need to show that for ∈ [0, 1], (2− α) 1 ‖ψ‖∞ ∫ p3−2αf+ ψ(ψ(t)− Ep̃2−αf+ ψ [ψ(T )]) 2dQ∫ pf+ ψ(t)2−αdQ ψ→0→ 0 It suffices to show that the numerator tends to 0 as ψ → 0.∣∣∣∣ 1‖ψ‖∞ (ψ(t)− Ep̃2−αf+ ψ [ψ(T )])2 ∣∣∣∣ = ∣∣∣∣∣ ψ(t)‖ψ‖∞ψ(t)− ψ(t)‖ψ‖∞ 2Ep̃2−αf+ ψ [ψ(T )] + Ep̃2−αf+ ψ [ψ(T )] ‖ψ‖∞ Ep̃2−αf+ ψ [ψ(T )] ∣∣∣∣∣ ≤ ∣∣∣∣ ψ(t)‖ψ‖∞ ∣∣∣∣ ∣∣∣ψ(t)− 2Ep̃2−αf+ ψ [ψ(T )]∣∣∣ + ∣∣∣∣Ep̃2−αf+ ψ ψ(T )‖ψ‖∞ ∣∣∣∣ ∣∣∣Ep̃2−αf+ ψ [ψ(T )]∣∣∣ ≤ ∣∣∣ψ(t)− 2Ep̃2−αf+ ψ [ψ(T )]∣∣∣+ ‖pf+ ψ‖2−α2−α ∣∣∣Ep̃2−αf+ ψ [ψ(T )]∣∣∣ → 0 as ψ → 0 and plugging this in we obtain the desired result. Thus the Frechet derivative of Aα(f) exists. Lemma 5. Define P∞ = {pf = exp2−α(f −Aα(f)) : f ∈ L∞(S)} where L∞(S) is the space of almost surely bounded measurable functions with domain S. Fix f0 ∈ L∞. Then for any fixed > 0 and pg, pf ∈ P∞ such that f, g ∈ B ∞ (f0) the L ∞ closed ball around f0, there exists constant Mexp > 0 depending only on f0 such that ‖pf − pg‖Lr ≤ 2Mexp‖f − g‖∞ Further BΩα(pf , pg) ≤ 1 α− 1 ‖pf − pg‖∞[(‖pf‖∞ + ‖pf − pg‖∞)α−1 + exp2−α(2‖g‖∞)] Proof. This Lemma mirrors Lemma A.1 in Sriperumbudur et al. (2017), but the proof is very different as they rely on the property that exp(x + y) = exp(x) exp(y), which does not hold for β-exponentials. We thus had to strengthen the assumption to include that f and g lie in a closed ball, and then use the functional mean value theorem Lemma 3 as the main technique to achieve our result. Consider that by functional mean value inequality, ‖pf − pg‖Lr = ‖ expβ(f −Aα(f))− expβ(g −Aα(g))‖Lr ≤ ‖ expβ(h−Aα(h))2−α‖∞(‖f − g‖∞ + |Aα(f)−Aα(g)|) (13) where h = cf + (1− c)g for some c ∈ [0, 1]. We need to bound expβ(h− Aα(h)) and ‖Aα(f)− Aα(g)‖∞. We can show a bound on ‖h‖∞ ‖h‖∞ = ‖cf + (1− c)g − f0 + f0‖∞ ≤ ‖c(f − f0) + (1− c)(g − f0) + f0‖∞ ≤ c‖f − f0‖∞ + (1− c)‖g − f0‖∞ + ‖f0‖∞ ≤ + ‖f0‖∞ so that h is bounded. Now we previously showed in claim 1 that |Aα(h)| ≤ ‖h‖∞ ≤ + ‖f0‖∞. Since h,Aα(h) are both bounded expβ(h−Aα(h))2−α is also. Now note that by Lemma 3, |Aα(f)−Aα(g)| ≤ ‖A′α(h)‖op‖f − g‖∞ We need to show that ‖A′α(h)‖op is bounded for f, g ∈ B (f0). Note that in Lemma 4 we showed that |A′α(f)(g)| = |Ep2−αf [g(T )]| ≤ ‖g‖∞ Thus ‖A′α‖op = sup{|A′α(h)(m)| : ‖m‖∞ = 1} ≤ 1. 
LetMexp be the bound on expβ(h−Aα(h)). Then putting everything together we have the desired result ‖pf − pg‖Lr ≤ 2Mexp‖f − g‖∞ Now BΩα(pf , pg) = Ωα(pf )− Ωα(pg)− 〈∇Ωα(pg), pf − pg〉2 (14) For the inner prodct term, first note that following Martins et al. (2020) the gradient is given by ∇Ωα(pg)(t) = pg(t) α−1 α− 1 (15) Thus |〈∇Ωα(pg), pf − pg〉2| ≤ ‖∇Ωα(pg)‖1‖pf − pg‖∞ = 1 α− 1 ∫ S exp2−α(g(t)−A(g))dQ‖pf − pg‖∞ ≤ 1 α− 1 exp2−α(2‖g‖∞)‖pf − pg‖∞ where the second line follows from claim 1. Further note that by Taylor’s theorem, yα = xα + αzα−1(y − x) (16) for some z between x and y. Then letting y = pf (t) and x = pg(t), we have for some z = h(t) lying between pf (t) and pg(t) that pf (t) α = pg(t) α + αh(t)α−1(pf (t)− pg(t)) Since f ∈ L∞ then applying Claim 1 we have that each pf , pg ∈ L∞ and thus h is. Then |pf (t)α − pg(t)α| = α|h(t)|α−1|pf (t)− pg(t)| ≤ α‖h‖α−1∞ ‖pf − pg‖∞ ≤ αmax{‖pf‖∞, ‖pg‖∞}α−1‖pf − pg‖∞ ≤ α(‖pf‖∞ + ‖pf − pg‖∞)α−1‖pf − pg‖∞ (17) so that |Ωα(pf )− Ωα(pg)| = ∣∣∣∣ 1α(α− 1) ∫ (pf (t) α − pg(t)α)dQ ∣∣∣∣ ≤ 1 α− 1 (‖pf‖∞ + ‖pf − pg‖∞)α−1‖pf − pg‖∞. Putting it all together we obtain BΩα(pf , pg) ≤ 1 α− 1 (‖pf‖∞ + ‖pf − pg‖∞)α−1‖pf − pg‖∞ + 1 α− 1 exp2−α(2‖g‖∞)‖pf − pg‖∞ = 1 α− 1 ‖pf − pg‖∞[(‖pf‖∞ + ‖pf − pg‖∞)α−1 + exp2−α(2‖g‖∞)] C UWAVE EXPERIMENTS: ADDITIONAL DETAILS We experiment with N = 64, 128 and 256 basis functions, and use a learning rate of 1e− 4. We use H = 100 attention mechanisms, or heads. Unlike Vaswani et al. (2017), our use of multiple heads is slightly different as we use the same value function for each head, and only vary the attention densities. Additional architectural details are given below. C.1 VALUE FUNCTION The value function uses regularized linear regression on the original time series observed at random observation times (which are not dependent on the data) to obtain an approximation V (t; B) = BΨ(t) ≈ X(t). The H in Eqn. 6 is the original time series. C.1.1 ENCODER In the encoder, we use the value function to interpolate the irregularly sampled time series at the original points. This is then passed through a convolutional layer with 4 filters and filter size 5 followed by a max pooling layer with pool size 2. This is followed by one hidden layer with 256 units and an output v of size 256. The attention densities for each head h = 1, · · · , H are then µh = w T h,1v σh = softplus(wTh,2v) γh = W (h)v for vectors wh,1, wh,2 and matrices Wh and heads h = 1, · · · , H C.1.2 ATTENTION MECHANISM After forming densities and normalizing, we have densities p1(t), · · · , pH(t), which we use to compute context scalars ch = Eph [V (T )] We compute these expectations using numerical integration to compute basis function expectations Eph [ψn(T )] and a parametrized value function V (t) = Bψ(t) as described in section 3. C.1.3 CLASSIFIER The classifier takes as input the concatenated context scalars as a vector. A linear layer is then followed by a softmax activation to output class probabilities. D MIT BIH: ADDITIONAL DETAILS Note that our architecture takes some inspiration for the H that we use in our value function from a github repository5, although they used tensorflow and we implemented our method in pytorch. D.1 VALUE FUNCTION The value function regresses the output of repeated convolutional and max pool layers on basis functions, where the original time series was the input to these convolutional/max pooling layers. All max pool layers have pool size 2. There are multiple sets of two convolutional layers followed by a max pooling layer. 
The first set of convolutional layers has 16 filters of size 5. The second and third sets each have 32 filters of size 3. The fourth set has one layer with 32 filters and one with 256 filters, each of size 3. The final output has 256 channels, each of length 23; this is then used as our H matrix in Eqn. 6.

D.2 ENCODER

The encoder takes the original time series as input. It has one hidden layer with a ReLU activation function and 64 hidden units, and it outputs the attention density parameters.

D.3 ATTENTION MECHANISM

The attention mechanism takes the parameters from the encoder and forms an attention density. It then computes
\[
c = \mathbb{E}_p[V(T)] \tag{18}
\]
for input to the classifier.

D.4 CLASSIFIER

The classifier has two hidden layers with ReLU activations, each with 64 hidden units, and outputs class probabilities.

⁵https://github.com/CVxTz/ECG_Heartbeat_Classification
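To make the architecture descriptions in Appendices C and D concrete, here is a minimal PyTorch sketch of the encoder and classifier around one attention head. The layer sizes match the text above, but the module names, single-head simplification, and input flattening are our own assumptions, not the released code; between the two modules, the attention step of Section 4.3 (see the sketch there) turns the encoder output into a density and returns the context c.

```python
import torch
import torch.nn as nn

class KernelAttentionEncoder(nn.Module):
    """Feedforward encoder (cf. Appendix D.2): maps the input series to the
    kernel weights gamma_tilde of one kernel deformed attention head."""
    def __init__(self, in_dim, hidden=64, num_inducing=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_inducing))

    def forward(self, x):            # x: (batch, in_dim) flattened series
        return self.net(x)           # (batch, num_inducing) kernel weights

class ContextClassifier(nn.Module):
    """Classifier head (cf. Appendix D.4): two ReLU hidden layers over the context vector."""
    def __init__(self, context_dim, hidden=64, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, c):            # c: (batch, context_dim) context from the attention step
        return self.net(c)           # unnormalized class scores
```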
1. What is the focus of the paper regarding attention mechanisms in neural networks?
2. What are the strengths of the proposed approach, particularly in its theoretical foundation?
3. What are the weaknesses or limitations of the method, especially regarding computational efficiency?
4. Do you have any concerns or questions about the effectiveness of the attention mechanism proposed by the authors?
5. Are there any aspects of the paper that could be improved with additional experimental evidence or analysis?
Summary Of The Paper

Many modern neural architectures, especially in natural language processing, rely heavily on the attention mechanism. Previous work in the literature proposed to extend the softmax-based attention mechanism by using different distribution families. In particular, the authors of this paper focus on variants of the attention mechanism that allow for continuous and sparse attention. The authors propose kernel deformed exponential families, an extension of the exponential family that allows sparse and multimodal attention, in contrast to most previous work, which focused on unimodal attention.

Review

This work is well motivated and the authors clearly highlight the differences with previous work (Section 3). Moreover, the contribution is well founded and the authors conduct a detailed theoretical analysis of their approach. However, there are computational limits in the proposed method, as highlighted by the authors: the partition function lacks a closed-form expression (end of page 7), so they rely on numerical integration to compute it. Although this opens up possible research directions for future work, I would like to have more information about the experimental speed efficiency of kernel deformed exponential families compared to standard sparsemax/softmax. Moreover, the benefit of the attention mechanism proposed by the authors is that it allows sparsity and multimodality: the paper would benefit from including experimental evidence that this mechanism is important, beyond test-set accuracy. What is the actual sparsity ratio compared to sparsemax? How important is the multimodality property (i.e., are there many instances where the attention map is really multimodal, and is it possible to quantify this)?